‘Mind, the Gap’ Manuscript

Earlier this week I submitted a manuscript to Earth Surface Processes and Landforms with one of my former PhD advisors, John Wainwright. Provisionally entitled Mind, the Gap in Landscape-Evolution Modelling (we’ll see what the reviewers think of that one!), the manuscript argues that agent-based models (ABMs) are a useful tool for overcoming the limitations of existing, highly empirical approaches in geomorphology. This, we suggest, is needed because, despite increasing recognition that human activity is currently the dominant force modifying landscapes geomorphically and that this activity has been increasing through time, there has been little integrative work evaluating human interactions with geomorphic processes.

In the manuscript we present two case studies of models that consider landscape change with the aid of an ABM – SPASIMv1 (developed during my PhD) and CybErosion (a model John has developed to simulate the dynamic interaction of prehistoric communities in Mediterranean environments). We evaluate the advantages and disadvantages of the ABM approach, and consider some of the major challenges to implementation. These challenges include potential mis-matches in process scales, differences in perspective between investigators from different disciplines, and issues regarding model evaluation, analysis and interpretation.

I’ll post more here as the review process progresses. Hopefully progress with ESPL will be a little quicker than it has been for the manuscript I submitted to Environmental Modelling and Software detailing the biophysical component of SPASIMv1 (yet to receive reviews after 5 months!)…

Effective Modelling for Sustainable Forest Management

In many forest landscapes a desirable management objective is the sustainability of both economic productivity and healthy wildlife populations. Such dual-objective management requires a good understanding of the interactions between the many components and actors at several scales and across large extents. Computer simulation models have been enthusiastically developed by scientists to improve knowledge about the dynamics of forest growth and disturbance (for example by timber harvest or wildfire).

However, Papaik, Sturtevant and Messier write in their recent guest editorial for Ecology and Society that “models are constrained by persistent boundaries between scientific disciplines, and by the scale-specific processes for which they were created”. Consequently, they suggest that:

“A more integrated and flexible modeling framework is required, one that guides the selection of which processes to model, defines the scales at which they are relevant, and carefully integrates them into a cohesive whole”.


This new framework is illustrated by the papers in the Ecology and Society special feature ‘Crossing Scales and Disciplines to Achieve Forest Sustainability: A Framework for Effective Integrated Modeling’.

The papers in the special feature provide case studies that reflect two interacting themes:

  1. interdisciplinary approaches for sustainable forest landscape management, and
  2. the importance of scaling issues when integrating socioeconomic and ecological processes in the modeling of managed forest ecosystems.

These issues relate closely to the project I’m currently working on, which is developing an integrated ecological-economic model of a managed forest landscape in Michigan’s Upper Peninsula. One paper that caught my eye was by Sturtevant et al., entitled ‘A Toolkit Modeling Approach for Sustainable Forest Management Planning: Achieving Balance between Science and Local Needs’.

Sturtevant et al. suggest that forest managers are generally faced with a “devil’s choice” between using generic ‘off-the-shelf models’, where information flows primarily from researchers and planners down to local communities, versus developing case-specific models designed for a specific purpose or locale and based on information from the local actors. To avoid this choice, which Sturtevant et al. believe will seldom produce a satisfactory management outcome, they outline their proposal for a hybrid ‘toolkit’ approach. Their alternative approach “builds on existing and readily adaptable modeling ‘tools’ that have been developed and applied to previous research and planning initiatives”.

Their toolkit approach is:

  1. collaborative – including stakeholders and decision-makers
  2. a ‘meta-modelling’ approach – the model is derived from other models and tools.

They then illustrate their toolkit approach using a case study from Labrador, Canada, highlighting the stages of establishing the issues, developing a conceptual model, implementing the meta-model, and then refining the model iteratively. They conclude:

“A toolkit approach to SFM [Sustainable Forest Management] analytical support is more about perspectives on information flow than on technical details. Certainly expertise and enabling technology are required to allow a team to apply such a framework. However, the essence of this approach is to seek balance between top-down (off the shelf, science-driven) and bottom-up (case-specific, stakeholder-driven) approaches to SFM decision support. We aim to find a pivot point, with adequate information flow from local experts and stakeholders to scientists, while at the same time avoiding “reinventing the wheel” (e.g. Fig. 1) by making full use of the cumulative experience of scientists and tools they have constructed.”

Although this ‘meta-model’ approach may save time on the technical model building side of things, many resources (time, effort and money) will be required to build and maintain relationships and confidence between scientists, managers and local stakeholders. This approach is really a modelling toolkit for management, with very little emphasis on improving scientific understanding. In this case the modelling is the means to the end of integrative/participatory management of the forest landscape.

The authors continue:

“The mixture of local experts and stakeholders who understand how the tools work, scientists who are willing and able to communicate their science to stakeholders, and integrated analytical tools that can simulate complex spatial and temporal problems will provide powerful and efficient decision support for SFM.”

Unfortunately, unless the scientists in question have the explicit remit to offer their services for management purposes, this sort of modelling approach will not be very appealing to them. In a scientific climate of ‘publish or perish’, management outcomes alone are unlikely to be enough to lure the services of scientists. In some cases I’m sure I will be wrong and scientists will happily oblige. But more generally, unless funding bodies become less concerned with tangible outputs at specific points in time, and academic scientists are judged less strictly by their publishing output, this situation may be difficult to overcome.

This situation is one reason the two sides of the “devil’s choice” are better developed, at the expense of the ‘middle-ground’ toolkit approach. ‘Off-the-shelf’ models, such as LANDIS, are appealing to scientists as they allow the investigation of more abstract and basic science questions than those asked by forest managers. The development of ‘customized’ models is appealing to scientists because they allow more detailed investigation of underlying processes and provide a framework for the collection of empirical data. No doubt the understanding gained from these approaches will eventually help forest managers – but not in the manner of direct decision-support as the toolkit modelling approach proposes.

As a case in point, the ‘customized’ Managed Forest Landscape Model for Michigan I am working on is raising questions about underlying relationships between deer and forest stand structure. I’m off into the field this week to get data collection started for just that purpose.

JASSS Paper Accepted

This week one of the papers I have been working on as a result of my PhD research has been accepted for publication in the Journal of Artificial Societies and Social Simulation (JASSS). The paper, written with Raúl Romero-Calcerrada, John Wainwright and George Perry, describes the agent-based model of agricultural land-use decision-making we constructed to represent SPA 56 in Madrid, Spain. We then present results from our use of the model to examine the importance of land tenure and land use on future land cover and the potential consequences for wildfire risk. The abstract is below, and I’ll post again here when the paper is published and online.

An Agent-Based Model of Mediterranean Agricultural Land-Use/Cover Change for examining Wildfire Risk

James D. A. Millington, Raúl Romero-Calcerrada, John Wainwright, George L.W. Perry
(Forthcoming) Journal of Artificial Societies and Social Simulation

Abstract
Humans have a long history of activity in Mediterranean Basin landscapes. Spatial heterogeneity in these landscapes hinders our understanding about the impacts of changes in human activity on ecological processes, such as wildfire. Use of spatially-explicit models that simulate processes at fine scales should aid the investigation of spatial patterns at the broader, landscape scale. Here, we present an agent-based model of agricultural land-use decision-making to examine the importance of land tenure and land use on future land cover. The model considers two ‘types’ of land-use decision-making agent with differing perspectives; ‘commercial’ agents that are perfectly economically rational, and ‘traditional’ agents that represent part-time or ‘traditional’ farmers that manage their land because of its cultural, rather than economic, value. The structure of the model is described and results are presented for various scenarios of initial landscape configuration. Land use/cover maps produced by the model are used to examine how wildfire risk changes for each scenario. Results indicate land tenure configuration influences trajectories of land use change. However, simulations for various initial land-use configurations and compositions converge to similar states when land-tenure structure is held constant. For the scenarios considered, mean wildfire risk increases relative to the observed landscape. Increases in wildfire risk are not spatially uniform however, varying according to the composition and configuration of land use types. These unexpected spatial variations in wildfire risk highlight the advantages of using a spatially-explicit ABM/LUCC.
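
To give a flavour of how two such agent ‘types’ might be encoded, here is a minimal, hypothetical Python sketch. The class names, returns table and abandonment probability are my illustrative assumptions here, not the actual rules of the model described in the paper.

    import random

    LAND_USES = ["crops", "pasture", "abandoned"]

    # Hypothetical expected annual returns per land use (arbitrary units)
    EXPECTED_RETURNS = {"crops": 120.0, "pasture": 80.0, "abandoned": 0.0}

    class CommercialAgent:
        """Perfectly economically rational: always pick the highest-return use."""
        def decide(self, current_use):
            return max(LAND_USES, key=lambda use: EXPECTED_RETURNS[use])

    class TraditionalAgent:
        """Manages land for cultural value: keep the current use, abandoning
        only with a small probability (e.g. retirement without a successor)."""
        def __init__(self, p_abandon=0.05):
            self.p_abandon = p_abandon

        def decide(self, current_use):
            return "abandoned" if random.random() < self.p_abandon else current_use

    # One annual decision round over a toy set of land parcels
    parcels = {"p1": ("crops", CommercialAgent()),
               "p2": ("pasture", TraditionalAgent())}
    for parcel, (use, owner) in parcels.items():
        print(parcel, use, "->", owner.decide(use))

In the model itself, of course, these decisions play out across a landscape of parcels and tenure structures, which is what generates the spatial patterns of land cover and wildfire risk described in the abstract.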

Creating a Genuine Science of Sustainability

Previously, I wrote about Orrin Pilkey and Linda Pilkey-Jarvis’ book, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future. In a recent issue of the journal Futures, Jerome Ravetz reviews their book alongside David Waltner-Toews’ The Chickens Fight Back: Pandemic Panics and Deadly Diseases That Jump From Animals to Humans. Ravetz himself points out that the subject matter and approaches of the books are rather different, but suggests that “Read together, they provide insights about what needs to be done for the creation of a genuine science of sustainability”.

Ravetz (along with Silvio Funtowicz) has developed the idea of ‘post-normal’ science – a new approach to replace the reductionist, analytic worldview of ‘normal’ science. Post-normal science is a “systemic, synthetic and humanistic” approach, useful in cases where “facts are uncertain, values in dispute, stakes high and decisions urgent”. I used some of these ideas to experiment with alternative model assessment criteria for the socio-ecological simulation model I developed during my PhD studies. Ravetz’s perspectives toward modelling, and science in general, shone through quite clearly in his review:

“On the philosophical side, the corruption of computer models can be understood as the consequence of a false metaphysics. Following on from the prophetic teachings of Galileo and Descartes, we have been taught to believe that Science is the sole and certain path to truth. And this Science is mathematical, using quantitative data and abstract reasonings. Such a science is not merely necessary for achieving genuine knowledge (an arguable position) but is also sufficient. We are all victims of the fantasy that once we have numerical data and mathematical argument (or computer programs), truth will inevitably follow. The evil consequences of this philosophy are quite familiar in neo-classical economics where partly true banalities about markets are dressed up in the language of the differential calculus to produce justifications for every sort of expropriation of the weak and vulnerable. ‘What you can’t count, doesn’t count’ sums it all up neatly. In the present case, the rule of models extends over nearly all the policy-relevant sciences, including those ostensibly devoted to the protection of the health of people and the environment.

We badly need an effective critical philosophy of mathematical science. … Now science has replaced religion as the foundation of our established order, and in it mathematical science reigns supreme. Systematic philosophical criticism is hard to find. (The late Imre Lakatos did pioneering work in the criticism of the dogmatism of ‘modern’ abstract mathematics but did not focus on the obscurities at the foundations of mathematical thinking.) Up to now, mathematical freethinking is mainly confined to the craftsmen, with their jokes of the ‘Murphy’s Law’ sort, best expressed in the acronym GIGO (Garbage In, Garbage Out). And where criticism is absent, corruption of all sorts, both deliberate and unaware, is bound to follow. Pseudo-mathematical reasonings about the unthinkable helped to bring us to the brink of nuclear annihilation a half-century ago. The GIGO sciences of computer models may well distract us now from a sane approach to coping with the many environmental problems we now face. The Pilkeys have done us a great service in providing cogent examples of the situation, and indicating some practical ways forward.”

Thus, Ravetz finds a little more value in the Useless Arithmetic book than I did. But equally, he highlights that the Pilkeys offer few, rather vague, solutions and instead turns to Waltner-Toews’ book for inspiration for the future:

“Pilkey’s analysis of the corruptions of misconceived reductionist science shows us the depth of the problem. Waltner-Toews’ narrative about ourselves in our natural context (not always benign!) indicates the way to a solution.”

Using the outbreak of avian flu as an example of how to tackle complex environmental problems in the ‘risk society’ in which we now live, Waltner-Toews:

“… makes it very plain that we will never ‘conquer’ disease. Considering just a single sort of disease, the ‘zoonoses’ (deriving from animals), he becomes a raconteur of bio-social-cultural medicine …

What everyone learned, or should have learned, from the avian flu episode is that disease is a very complex entity. Judging from TV adverts for antiseptics, we still believe that the natural state of things is to be germ-free, and all we need to do is to find the germs and kill them. In certain limiting cases, this is a useful approximation to the truth, as in the case of infections of hospitals. But even there complexity intrudes … “

Complexity which demands an alternative perspective that moves beyond the next stage of ‘normal’ science to a post-normal science (to play on Kuhn’s vocabulary of paradigm shifts):

“That old simple ‘kill the germ’ theory may now be derided by medical authorities as something for the uneducated public and their media. But the practice of environmental medicine has not caught up with these new insights.

The complexity of zoonoses reflects the character of our interaction with all those myriads of other species. … the creatures putting us at risk are not always large enough to be fenced off and kept at a safe distance. … We can do all sorts of things to control our interactions with them, but one thing is impossible: to stamp them out, or even to kill the bad ones and keep the good ones.

Waltner-Toews is quite clear about the message, and about the sort of science that will be required, not merely for coexisting with zoonoses but also for sustainable living in general. Playing the philological game, he reminds us that the ancient Indo-European word for earth, dgghem, gave us, along with ‘humus’, all of ‘human’, ‘humane’ and ‘humble’. As he says, community by community, there is a new global vision emerging whose beauty and complexity and mystery we can now explore thanks to all our scientific tools.”

This global vision is a post-normal vision. It applies to far more than just avian flu – from coastal erosion and the disposal of toxic or radioactive waste (as the Pilkeys discuss, for example) to climate change. This post-normal vision focuses on uncertainty, value loading, and a plurality of legitimate perspectives that demands an “extended peer community” to evaluate the knowledge generated and decisions proposed.

“In all fairness, it would not be easy to devise a conventional science-based curriculum in which Waltner-Toews’ insights could be effectively conveyed. For his vision of zoonoses is one of complexity, intimacy and contingency. To grasp it, one needs to have imagination, breadth of vision and humility, not qualities fostered in standard academic training. … “

This post-normal science won’t be easy and won’t be learned or fostered entirely within the esoteric confines of an ivory tower. Science, with its logical rigour, is important. It is still the best game in town. But the knowledge produced by ‘normal’ science is provisional and its march toward truth seems Sisyphean when confronted with the immediacy of complex contemporary environmental problems. To contribute to the production of a sustainable future, a genuine science of sustainability would do well to adopt a more post-normal stance toward its subject.

Creating our Future

“The future is ours, not to predict, but to create.”

– Al Gore, 16th June 2008

Hear, hear. Spoken in the context of climate change, this might be interpreted as a slight against the General Circulation Models used by scientists. Rather, I think it should be read as an indication that Gore understands that we need to move past discussions about whether we can use such models to ‘prove’ whether climate change is actually happening, and instead act to mitigate undesired change.

This does not mean computer simulations of earth systems become redundant, however – they are still useful tools for improving our knowledge about systems that are too large (spatially) for empirical experimentation. But we do need to remember that in ‘open’, middle-number systems (which the majority of global environmental systems are), proving the ‘truth’ of a model by comparing model results with empirical data is a logical fallacy: a model may reproduce observations for the wrong reasons. In such circumstances, a ‘post-normal’ approach to the use of computer simulation models, and to the wider issue of climate change, would be more useful. This view is gaining recognition.

Prometheus has a more detailed discussion of prediction, forecasting and decision-making around climate change.

Read the full transcript of Gore’s speech, or watch the section in which he addresses climate change below.

Model Types for Ecological Modelling

Sven Erik Jørgensen introduces a recent issue of Ecological Modelling that presents selected papers from the International Conference on Ecological Modelling in Yamaguchi, Japan (28 August – 1 September 2006). The paper provides an overview of the model types available for ecological modelling, briefly highlighting the shift from a dominance of bio-geo-chemical dynamic models and population dynamics models in the 1970s toward the application of a wider spectrum of models. The emergence of new model types has come as a response to questions such as:

  • How can we describe the spatial distribution that is often crucial to understanding ecosystem reactions?
  • How do we model middle-number systems?
  • How do we model heterogeneous populations and databases (e.g. observations from many different ecosystems)?
  • How do we model ecosystems when our knowledge is mainly based on a number of rules/properties/propositions?

Jørgensen suggests there are at least 10 types of model currently available for modelling ecological systems (setting purely mathematical and statistical models aside):

  1. Bio-geo-chemical and bio-energetic dynamic models
  2. Static models
  3. Population dynamic models
  4. Structurally dynamic models
  5. Fuzzy models
  6. Artificial neural networks
  7. Individual-based models and cellular automata
  8. Spatial models
  9. Ecotoxicological models
  10. Stochastic models
  11. Hybrid models

Of these, my particular interest is in spatial models, individual-based models and cellular automata models (with a passing interest in population models). This is largely because of my background in geography and landscape ecology, but also because of the heterogeneity in patterns, processes and behaviour often exhibited in socio-ecological systems.

Jørgensen offers a short description of each type, before listing their advantages and disadvantages. Here are a couple with my comments in italics:

Individual-Based Models (IBMs) and Cellular Automata (CA)
First, counter to Jørgensen, I would argue that CA models should be placed with the ‘spatial models’ – for me, the ability of CA to represent space outweighs their potential to represent (limited) heterogeneity between cells. This aside, their grouping does make sense when we consider that these models can be relatively easily combined to represent individuals’ interactions across space and with a heterogeneous environment (via the CA).

Advantages

  • Are able to account for individuality – agreed, especially for IBMs
  • Are able to account for adaptation within the spectrum of properties – yes
  • Software is available, although the choice is more limited than for bio-geo-chemical dynamic models – but excellent free modelling environments such as NetLogo make this type of modelling widely available
  • Spatial distribution can be covered – yes

Disadvantages

  • If many properties are considered, the models get very complex – and may require the adoption and development of new techniques to present/analyse/interpret output (e.g. POM, narratives)
  • Can be used to cover the individuality of populations; but they cannot cover mass and energy transfer based on the conservation principle – I see no reason why the principle of energy and mass conservation could not be achieved by models of these types (see the sketch after this list)
  • Require many data to calibrate and validate the models – yes, this is often the case, and in some cases (again) may require new approaches and types of data to calibrate and evaluate models
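
On the conservation point, here is a minimal sketch of individuals moving over a CA-style grid in Python (the grid size, movement rule and consumption rate are arbitrary assumptions of mine, not Jørgensen’s). Because resource mass only ever moves between cells and individuals, the total is conserved:

    import random

    SIZE = 10
    grid = [[10.0] * SIZE for _ in range(SIZE)]  # resource mass in each cell

    class Individual:
        def __init__(self):
            self.x = random.randrange(SIZE)
            self.y = random.randrange(SIZE)
            self.mass = 1.0

        def step(self):
            # Move to a (toroidally wrapped) neighbouring cell
            self.x = (self.x + random.choice([-1, 0, 1])) % SIZE
            self.y = (self.y + random.choice([-1, 0, 1])) % SIZE
            # Consume resource: mass moves from cell to individual, none is lost
            eaten = min(0.5, grid[self.y][self.x])
            grid[self.y][self.x] -= eaten
            self.mass += eaten

    population = [Individual() for _ in range(20)]

    def total_mass():
        return sum(map(sum, grid)) + sum(ind.mass for ind in population)

    before = total_mass()
    for _ in range(100):
        for ind in population:
            ind.step()
    print(before, total_mass())  # equal, up to floating-point rounding

The same bookkeeping extends to energy, which is why I see conservation as a design decision rather than an inherent limitation of these model types.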

Spatial Models
Advantages

  • Cover spatial distribution, which is often of importance in ecology – yes, particularly in Landscape Ecology, an entire discipline that has arisen since the 1970s and ’80s
  • The results can be presented in many informative ways, for instance GIS – GIS is a means to organise and analyse data as well as present data

Disadvantages

  • Usually require a huge database giving information about the spatial distribution – this can certainly give rise to the issue of ‘model but no data’ and increases the costs of performing ecological research by adding space to time. We have found that our large (~4,000 sq km) Upper Michigan study area demands considerable time and resources for data collection.
  • Calibration and validation are difficult and time-consuming – maybe more so than non-spatial models, but probably not as much as some individual-based models
  • A very complex model is usually needed to give a proper description of the spatial patterns – not necessarily. A model should be only as complex as the patterns and processes it seeks to examine, and the inclusion of space does not imply patterns or processes any more complex than those of a non-spatial system with fewer variables or interactions.

This isn’t a bad review of the types of ecological modelling being done. However, more incisive and useful insights could have been offered with respect to landscape ecology and those models that are now beginning to attempt to account for human activity in ecological systems. [And it definitely could have been better written.] Maybe I’ll stop criticising sometime and write one myself, eh?

US-IALE 2008 – Summary


A brief and belated summary of the 23rd annual US-IALE symposium in Madison, Wisconsin.

The theme of the meeting was the understanding of patterns, causes, and consequences of spatial heterogeneity for ecosystem function. The three keynote lectures were given by Gary Lovett, Kimberly With and John Foley. I found John Foley’s lecture the most interesting and enjoyable of the three – he’s a great speaker and spoke on a broader topic than the others: Agriculture, Land Use and the Changing Biosphere. Real wide-ranging, global sustainability stuff. He highlighted the difficulties of studying agricultural landscapes because of the human cultural and institutional factors, but also stressed the importance of tackling these tricky issues because ‘agriculture is the largest disturbance the biosphere has ever seen’ and because of its large contribution to greenhouse gas emissions.

Presentations I was particularly interested in were mainly in the ‘Landscape Patterns and Ecosystem Processes: The Role of Human Societies’, ‘Challenges in Modeling Forest Landscapes under Climate Change’ and ‘Cross-boundary Challenges to the Creation of Multifunctional Agricultural Landscapes’ sessions.

In the ‘human societies’ session, Richard Aspinall discussed the importance of considering human decision-making at a range of scales and Dan Brown again highlighted the importance of human agency in spatial landscape process models. In particular, with regard to modelling these systems using agent-based approaches, he discussed the difficulty of model calibration at the agent level and stressed that work is still needed on the justification and evaluation phases of agent-based modelling.

The ‘modeling forest landscapes’ session was focused largely around use of the LANDIS and HARVEST models that were developed in and around Wisconsin. In fact, I don’t think I saw any mention of the USFS FVS at the meeting whilst I was there, largely because (I think) FVS has large data demands and is not inherently spatial. LANDIS and HARVEST work at coarser levels of forest representation (grid cell, compared to FVS’ individual tree), allowing them to be spatially explicit and to run over large time and space extents. We’re confident we’ll be able to use FVS in a spatially explicit manner for our study area though, capitalising on the ability of FVS to directly simulate specific timber harvest and economic scenarios.

The ‘multifunctional agricultural landscapes’ session had an interesting talk by Joan Nassauer on stakeholder science and the challenges it presents. Specific issues she highlighted were:
1. the need for a precise, operational definition of ‘stakeholder’
2. ambiguous goals for the use of stakeholders
3. the lack of a canon of replicable methods
4. ambivalence toward the quantification of stakeholder results

Other interesting presentations were given by Richard Hobbs and Carys Swanwick. Richard spoke about the difficulties of ‘integrated research’ and the importance of science and policy in natural resource management. He suggested that policy-makers ‘don’t get’ systems thinking or modelling, and that some of this may be down to the psychological profiles of the types of people that go into policy-making. Such a conclusion suggests scientists need to work harder to bridge the gap to policy-makers and do a better job of explaining the emergent properties of the complex systems they study. Carys Swanwick talked about landscape character assessment, which was interesting for me having moved from the UK to the US about a year ago. Whilst ‘wilderness’ is an almost alien concept in the UK (and Europe as a whole), landscape character is something that is distinctly absent in new-world agricultural landscapes. Carys talked about the use of landscape character as a tool for conservation and management (in Europe) and the European Landscape Convention. It was a refreshing change from many of the other presentations about agricultural landscapes (possibly just because I enjoyed seeing a few pictures of Blighty!).

Unfortunately the weather during the conference was wet, which meant that I didn’t get out to see as much of Madison as I would have liked. Despite the rain we did go on the Biking Fieldtrip. And yes, we did get soaked. It was also pretty miserable weather for the other fieldtrip to the International Crane Foundation center and the Aldo Leopold Foundation (more on that in a future blog), but it was interesting nevertheless.

Other highlights of the conference for me were meeting the former members of CSIS and eating dinner one night with Monica Turner. I also got to meet up with Don McKenzie and some of the other ‘fire guys’, and a couple of people from the Great Basin Landscape Ecology lab where I visited previously. And now I’m already looking forward to the meeting next year in Snowbird, Utah (where I enjoyed the snow this winter).

Forest Landscape Models: A Review

There’s a new forest landscape model classification and review out there, recently published in Forest Ecology and Management by Hong He. The paper assumes greater familiarity with the topic of forest and disturbance modelling than the paper I recently published with my former advisor, George Perry, and discussion focuses largely on models primarily developed for the study of temperate forest systems in the USA (e.g., JABOWA, SORTIE, LANDIS, ZELIG – exceptions include MAQUIS and FORMOSAIC).


[Figure: Distinction between deterministic models and stochastic models]

He suggests that, generally, ecological models fall into two seemingly exclusive categories, deterministic models and stochastic models, and that either category of model can use physical or empirical approaches, or a combination of both (see figure). However, the classification He presents in the paper is developed according to how models represent

  1. spatial processes,
  2. temporal processes,
  3. site-level succession, and
  4. the intended use of the model.

Models are classified by succession based on whether the model uses succession pathways (i.e., a Markov state-and-transition approach – see the sketch after the list below), vital attributes (as I utilised in my PhD modelling), or by coupling landscape models with more detailed stand-level vegetation succession models. The fourth classification criterion above highlights that there are numerous applications of forest landscape models, and that design is strongly related to the desired applications. He suggests applications of forest landscape models generally fall into one of three categories:

  1. spatiotemporal patterns of model objects,
  2. sensitivities of model object to input parameters, and
  3. comparisons of model simulation scenarios.
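
The ‘succession pathway’ option mentioned above is straightforward to sketch. A minimal Markov state-and-transition example in Python follows; the states and annual transition probabilities are invented for illustration and are not taken from He’s paper.

    import random

    # Toy annual transition probabilities, P(next state | current state)
    TRANSITIONS = {
        "grass":         {"grass": 0.70, "shrub": 0.30},
        "shrub":         {"shrub": 0.60, "young forest": 0.40},
        "young forest":  {"young forest": 0.80, "mature forest": 0.20},
        "mature forest": {"mature forest": 0.95, "grass": 0.05},  # rare disturbance
    }

    def step(state):
        """Draw the next successional state from the current state's row."""
        r, cumulative = random.random(), 0.0
        for next_state, p in TRANSITIONS[state].items():
            cumulative += p
            if r < cumulative:
                return next_state
        return state  # guard against floating-point shortfall

    state = "grass"
    for year in range(50):
        state = step(state)
    print("state after 50 annual transitions:", state)

The vital-attribute and model-coupling approaches replace this single transition table with species attributes or a stand-level succession model, respectively.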

After developing and presenting the classification, the paper goes on to discuss two dilemmas facing those using forest landscape models. The first is the validation of model results, which has been discussed on numerous occasions elsewhere (including this blog). The discussion of circular reasoning is more novel, however (and is related in some ways to what I have written with regard to models of human agents):

“It is often difficult to separate expected results from emergent results. A caution against circular reasoning is the caveat often encountered in this situation, where researchers discuss biological or environmental forcing (causes) of their modeled results, whereas the forcing (causes) is actually built in the model formulation to derive such results. It should be pointed out that most model simulations do not lead to new understanding of the modeled processes themselves. The primary and subsequent results simply reflect the relationships used in building the models, which in turn reflect current understanding of the processes. The findings of these models are simply the spatiotemporal variations of the spatial process (discussed in Section 5.1), not the mechanisms that drive the potential changes of the spatial process. Emergent results are generally those resulted from the interactions and feedbacks of model objects.”


The paper concludes by summarizing likely development of forest landscape modelling in the future:

  1. Model development will move from the foci of theoretical and exploratory purposes to the foci of strategic and tactical purposes with increasing model realism, responding to the needs of forest management and planning.
  2. Multiple spatial and temporal resolutions will be implemented for different processes.
  3. Standardized module components may emerge as handy utilities that are ready to be plugged into other models. Since component-based models provide non-developers or end users with access to model components, a component-based model can be more rigorously tested, evaluated, and modified than before, and thus, model development processes can be driven not solely by original developers, but by the broader scientific community.
  4. Synchronization of multiple ecological processes can be made possible with multiple computer processors. This will help deal with the limitation that ecological processes are simulated in a sequential order as determined by the executable program.
  5. Model memorization will be improved so that a forest landscape model not only memorizes vegetation, disturbance, and management status at the current and previous model iteration, but also the entire temporal sequence. This would allow more effective studies of legacies of forested landscapes responding to various disturbance and management activities.


Here’s the full paper citation and abstract:

He, H. (2008) Forest landscape models: Definitions, characterization, and classification. Forest Ecology and Management 254(3): 484-498.

Abstract
Previous model classification efforts have led to a broad group of models from site-scale (non-spatial) gap models to continental-scale biogeographical models due to a lack of definition of landscape models. Such classifications become inefficient to compare approaches and techniques that are specifically associated with forest landscape modeling. This paper provides definitions of key terminologies commonly used in forest landscape modeling to classify forest landscape models. It presents a set of qualitative criteria for model classification. These criteria represent model definitions and key model implementation decisions, including the temporal resolution, number of spatial processes simulated, and approaches to simulate site-level succession. Four approaches of simulating site level succession are summarized: (1) no site-level succession (spatial processes as surrogates), (2) successional pathway, (3) vital attribute, and (4) model coupling. Computational load for the first three approaches is calculated using the Big O Notation, a standard method. Classification criteria are organized in a hierarchical order that creates a dichotomous tree with each end node representing a group of models with similar traits. The classified models fall into various groups ranging from theoretical and empirical to strategic and tactical. The paper summarizes the applications of forest landscape models into three categories: (1) spatiotemporal patterns of model objects, (2) sensitivities of model object to input parameters, and (3) scenario analyses. Finally, the paper discusses two dilemmas related to the use of forest landscape models: result validation and circular reasoning.

Keywords Forest landscape models; Spatially explicit; Spatially interactive; Definitions; Model characterization; Model classification

Google Earth GeoData

Previously, I highlighted work my old colleague and friend Pete Webley has done using Google Earth to model volcanic ash plumes. Another former King’s College colleague (and teacher) has also been working with Google Earth. Mark Mulligan has posted online a large collection of KML files for a wide variety of geodata, including satellite data on cloud climatology, a database of global place names, urban climate data, tropical land use change data, and much more.


KML files are used in Google products, such as Google Earth or Google Maps, to display geographic data. The data Mark has posted on the King’s server are freely accessible to all for non-commercial use. You can visualise the data in Google Earth and, in many cases, links to the actual downloadable GIS files are also provided. Many of the datasets are works in progress and new data will continue to be posted in the future, so keep checking back.
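
For anyone who hasn’t looked inside one, a KML file is just XML. Here is a minimal sketch, using nothing but the Python standard library (the placemark name and coordinates are placeholders of mine, not taken from Mark’s datasets):

    # Write a minimal KML file containing a single placemark
    KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Placemark>
        <name>{name}</name>
        <description>{description}</description>
        <Point>
          <!-- KML coordinate order is longitude,latitude[,altitude] -->
          <coordinates>{lon},{lat}</coordinates>
        </Point>
      </Placemark>
    </kml>
    """

    with open("example.kml", "w") as f:
        f.write(KML_TEMPLATE.format(name="Example placemark",
                                    description="A minimal KML point",
                                    lon=-0.116, lat=51.512))

Opening the resulting example.kml in Google Earth shows a single pin; the datasets above work on the same principle, just with far richer content.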

The availability of data such as these, and projects such as Pete’s, really show how Google Earth can be used for so much more than virtual tours of other places or previews of your next holiday destination… [Speaking of which, I’m off to Utah snowboarding next week so hopefully I’ll have some new pics to post on my own Google-enabled photos page.]

shift happens


I like this video. Less because of the message toward the end about the importance of ensuring western countries continue to train adaptable workforces in an increasingly flat world. More because of how it illustrates the speed and unpredictability of change. In hindsight it might seem obvious that this is how the world should end up – contingency matters in the real world after all. But in these contingent, historical systems, how do we generate a model for the future that we can trust with any useful degree of confidence?