Software Add-ins for Ecological Modelling

During my modelling antics these last couple of days I seem to have been using many of the add-ins I’ve installed with the software I use regularly. I thought I’d highlight some of them here, as they are really useful tools that can expand the modelling and data manipulation possibilities of a standard software install.

Much of the modelling I do is spatial, so I’m regularly using some form of GIS. I’m most familiar with the ESRI products, but have also tinkered with things like GRASS. Two free add-ins that are really useful if you use ArcMap regularly are the Patch Analyst and Hawth’s Tools. Patch Analyst facilitates the spatial pattern analysis of landscape patches (making use of FRAGSTATS) and the modelling of attributes associated with patches. Hawth’s Tools is an extension for ArcMap that performs a number of spatial analyses and functions that you can’t do with the standard install of ArcMap. Most of the tools are written with ecological analyses in mind, but it’s also useful for non-ecologists, with functions such as conditional point sampling, kernel density estimation and vector layer editing.

Although it is generally frowned upon for statistics (use R – see below), Microsoft Excel isn’t a bad tool for organising small and medium-sized data sets and for doing basic systems modelling (spatial simulation is a little trickier). Developed at CSIRO, PopTools is a free add-in for PC versions of Excel that facilitates analysis of matrix population models and the simulation of stochastic processes. It was originally written to analyse ecological models, but has been used for studies of population dynamics, financial modelling, and the calculation of bootstrap and resampling statistics. Once installed, PopTools puts a new menu item in Excel’s main menu and adds over a hundred useful worksheet functions. Whether or not you intend to do any modelling in Excel, the ASAP Utilities add-in is a must for automating many frequently required tasks (including those you didn’t even know you wanted to do in Excel!). There are selection tools (such as ‘select cell with smallest number’), text tools (such as ‘insert before current value’), information tools (such as ‘Find bad cell references (#REF!)’) and many more.
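PopTools’ core use case – projecting a stage-structured population with a Leslie matrix – is easy to sketch outside Excel too. Here’s a minimal Python version; the matrix values and stage structure are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical three-stage Leslie matrix: fecundities on the top row,
# stage-to-stage survival probabilities on the sub-diagonal.
L = np.array([
    [0.0, 1.5, 2.2],   # offspring per individual in stages 2 and 3
    [0.4, 0.0, 0.0],   # survival from stage 1 to stage 2
    [0.0, 0.7, 0.0],   # survival from stage 2 to stage 3
])

n = np.array([100.0, 20.0, 5.0])  # initial abundance in each stage

# Project the population forward 50 time steps.
for _ in range(50):
    n = L @ n

# The dominant eigenvalue of L gives the asymptotic growth rate (lambda).
lam = max(abs(np.linalg.eigvals(L)))
print(f"asymptotic growth rate: {lam:.3f}")
```

Exactly this kind of projection (plus eigen-analysis, sensitivities, and stochastic replicates) is what PopTools exposes as worksheet functions.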

If you’re going to be doing any serious statistical analyses the current software of choice is R, the open-source language and environment for statistical computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, …) and graphical techniques. If you need to analyse or manipulate large datasets R is for you – you are only restricted by the memory available on your computer. For computationally-intensive tasks, C, C++ and Fortran code can be linked and called at run time. R is also highly extensible by installing optional packages that have been written by users from around the world.

Many of the packages I use are from the Spatial and Environmetrics Task Views. For example, I use spdep for calculating spatial autocorrelation, VR for computing spatial correlograms or confidence intervals for model parameters, and hier.part for hierarchical partitioning. This week I started thinking about how I will use the yaImpute package to impute, across the entire landscape, the stand vegetation data we have collected at specific points in our study area, ready to initialise our spatial simulation model. Download the R software and the individual packages from a CRAN mirror near you.
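For readers unfamiliar with what spdep is doing under the hood, global Moran’s I – the standard measure of spatial autocorrelation – boils down to a fairly simple statistic. A minimal sketch (in Python rather than R, and making no attempt to replicate spdep’s interface):

```python
import numpy as np

def morans_i(values, weights):
    """Global Moran's I for a 1-D array of values and an n x n
    spatial weights matrix (row order matching `values`)."""
    values = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = len(values)
    z = values - values.mean()          # deviations from the mean
    num = n * (z @ w @ z)               # cross-products of neighbours
    den = w.sum() * (z @ z)
    return num / den

# Toy example: four sites along a line, rook-adjacency weights.
vals = [1.0, 2.0, 3.0, 4.0]
w = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
])
print(morans_i(vals, w))  # positive: neighbouring values are similar
```

Values near +1 indicate clustering of similar values, near −1 indicates dispersion, and near the expectation −1/(n−1) indicates no spatial structure.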

Of course, this is just the tip of the iceberg and only a few of the most useful add-ins for the most commonly used software. For a much more complete list of more technical software and programming tools for ecological and environmental modelling see Andrea Emilio Rizzoli’s ‘Collection of Modelling and Simulation Resources on the Internet‘ or the list of Ecological Modelling links by T. Legovic’ and J. Benz.

Forest Ecology and Management Special Issue: Forest Landscape Modeling

In June 2006 the China Natural Science Foundation and the International Association of Landscape Ecology sponsored an international workshop on forest landscape modelling. The aim of the workshop was to facilitate a discussion on the progress made in the theory and application of forest landscape models. A special issue of Forest Ecology and Management, entitled Forest Landscape Modeling – Approaches and Applications [Vol. 253, Iss. 3], presents 12 papers resulting from that meeting. In their editorial, He et al. summarise the papers, organising them into three sections that describe current activities in forest landscape modelling: (1) effects of climate change on forest vegetation, (2) forest landscape model applications, and (3) model research and development.

The LANDIS model is used in several of the papers on climate and human management of forest systems. Advances in the representation of processes that propagate spatially, including fire and seed dispersal, are discussed in several of the papers examining model research and development. He et al. conclude their editorial by reiterating why landscape models are a vital tool for better understanding and managing forested regions of the world:

The papers represented in the special issue of forest landscape modeling highlight the advances and applications of forest landscape models. They show that forest landscape models are irreplaceable tools to conduct landscape-scale experiments while physical, financial, and human constraints make real-world experiments impossible. Most of the results presented in this issue would not have been possible without the use of forest landscape models. Forest landscape modeling is a rapidly developing field. Its development and application will continually be driven by the actual problems in forest management planning and landscape-scale research. We hope that the papers contained in this special issue will serve both researchers and managers who are struggling to incorporate large-scale and long-term landscape processes into their management planning or research.

Tom Veldkamp: Advances in Land Models

As I mentioned before, the Global Land Project website is experimenting with the use of webcasts to enable the wider network to “participate” and use the GLP webpage as a resource. For example, several presentations are available for viewing from the Third Land System Science (LaSyS) Workshop entitled ‘Handling complex series of natural and socio-economic processes’ and held in Denmark in October of 2007. One that caught my attention was by Tom Veldkamp, mainly because of its succinct title: Advances in Land Models [webcast works best in IE].


Presented in the context of other CHANS research, Veldkamp used an example from the south of Spain to discuss recent modelling approaches for examining the effects of human decisions on environmental processes and the feedbacks between human and natural systems. The Spanish example examined the interaction of human land-use decision making and soil erosion. A multi-scale erosion model, LAPSUS, represented the interacting natural and human processes occurring in olive groves on steep hillslopes: gullying caused by extreme rainfall events, and attempts to preserve soils and remove gullies by ploughing. Monte Carlo simulations were used to explore uncertainties in model results and highlighted the importance of path dependencies. This is another example of the historical dimension of ‘open’ systems and the difficulties it presents for environmental modellers.

The LAPSUS model was coupled with the well-known CLUE land use/cover change model to examine feedbacks between human land use and erosion. The coupled model was used to examine the potential implications of farmers adapting their land use practices in response to erosion. Interestingly, the model suggested that the human adaptation strategy modelled would not lead to reduced erosion.

Veldkamp also discussed the issue of validating simulation models of self-organising processes, suggesting that ensemble and scenario approaches such as those used in global climate modelling are necessary for this class of models. However, rather than simply using ‘static’ scenarios that specify model boundary conditions, such as the IPCC SRES scenarios, scenarios that represent some form of feedback with the model itself will be more useful. Again, this comes back to his point about the importance of representing feedbacks in coupled human and natural systems.

For example, Veldkamp suggested the use of “Fuzzy Cognitive Maps” to generate ‘dynamic’ scenarios. Essentially, these fuzzy cognitive maps are produced by asking local stakeholders in the system under study to quantify the effects of the different factors driving change. First, the appropriate components of the system are identified. Next, the feedbacks between these components are identified. Finally, the stakeholders are asked to estimate how strong these feedbacks are (on a scale of zero to one). This results in a semi-quantitative systems model that can be run for several iterations to examine the consequences of the feedbacks within the system. The method is still in development, and Veldkamp highlighted several pros and cons:

Pros:

  • it is relatively easy and quick to do
  • it forces the stakeholders to be explicit
  • the emphasis is placed on the feedbacks within the system

Cons:

  • it is a semi-quantitative approach
  • often feedbacks are of incomparable units of measurement
  • time is ill defined
  • stakeholders are often more concerned with the exact values they put on an interaction rather than the relative importance of the feedbacks
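The procedure described above – identify components, elicit signed feedback strengths, then iterate – can be sketched very simply. This is an illustrative toy: the component names, weight values, use of signs for direction, and the logistic squashing function are my assumptions, not Veldkamp’s formulation:

```python
import numpy as np

def run_fcm(weights, state, steps=20):
    """Iterate a fuzzy cognitive map.  weights[i, j] is the
    stakeholder-estimated strength (magnitude 0-1, signed here for
    direction of effect) of the influence of component i on component j;
    `state` holds the current activation of each component.  A logistic
    squashing function keeps activations in (0, 1)."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = 1.0 / (1.0 + np.exp(-(s @ w)))  # update and squash
    return s

# Hypothetical three-component system: grazing pressure increases
# erosion, erosion reduces productivity, productivity damps grazing.
w = np.array([
    [0.0,  0.8,  0.0],   # grazing -> erosion
    [0.0,  0.0, -0.6],   # erosion -> productivity
    [-0.4, 0.0,  0.0],   # productivity -> grazing
])
print(run_fcm(w, [0.5, 0.5, 0.5]))
```

Running the map to a (quasi-)steady state shows the net consequence of the elicited feedbacks, which is exactly what makes the approach useful for generating ‘dynamic’ scenarios.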

I agree with Veldkamp that this ‘fuzzy cognitive mapping’ is a promising approach to scenario development and incorporation into simulation modelling. Indeed, during my PhD research I explored the use of an agent-based model of land use decision-making to provide scenarios of land use/cover change for a model of forest succession-disturbance dynamics (which I am currently writing up for publication). ‘Dynamic’ model scenario approaches show real promise for representing feedbacks in coupled human-natural systems. As Veldkamp concludes, these feedbacks, along with the non-linearities in system behaviour they produce, need to be explicitly represented and explored to improve our understanding of the interactions between humans and their environment.

Three New NetLogo Releases

The Center for Connected Learning and Computer-Based Modeling at Northwestern University has announced three new NetLogo releases:

4.0.2 and 3.1.5 are both bugfix releases, addressing various issues found in 4.0 and 3.1.4.

NetLogo 3D Preview 5 brings the NetLogo 3D series up to date with NetLogo 4.0.2. It includes the majority of NetLogo 4.0’s features and all of 4.0.2’s fixes.

Download these latest versions from the NetLogo homepage

Seeing the Wood for the Trees: Pattern-Oriented Modelling

A while back I wrote about the potentially misplaced preoccupation with statistical power in species distribution models. Our attempts at drawing out some relationships between our deer distribution data and descriptors of land cover are proving taxing – the relationships evident at coarser spatial resolutions (e.g. county level) than we are considering aren’t found in our stand-level data. As a result we are moving toward a modelling approach that is driven less by our empirical data and more by inferences based on multiple information sources. In particular, I’m drawn toward an approach I first encountered in my undergraduate landscape ecology class taught by George Perry – ‘Pattern-Oriented Modelling‘.

A prime example of the POM approach is its use to model the spread of rabies through central Europe. The rabies virus has been observed to spread in a wave-like manner, carried by foxes. Grimm et al. (1996) describe how they developed a cellular automaton-type model that considers cells (of fox territory) to be in either a healthy, infected or empty state. Through an iterative model development process, their model was gradually refined (i.e. its assumptions and parameters modified) by comparing model results with empirical patterns.
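A toy version of such a cellular automaton might look like the sketch below; the neighbourhood, transition rules and probabilities here are illustrative, not the published parameter values of Grimm et al.:

```python
import random

EMPTY, HEALTHY, INFECTED = 0, 1, 2
P_INFECT, P_DIE, P_RECOLONISE = 0.3, 0.5, 0.1  # illustrative values

def step(grid):
    """One synchronous update of the fox-territory grid
    (toroidal von Neumann neighbourhood)."""
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            nbrs = [grid[(i + di) % n][(j + dj) % n]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            state = grid[i][j]
            if state == INFECTED and random.random() < P_DIE:
                new[i][j] = EMPTY            # infected foxes die out
            elif (state == HEALTHY and INFECTED in nbrs
                    and random.random() < P_INFECT):
                new[i][j] = INFECTED         # infection spreads locally
            elif (state == EMPTY and HEALTHY in nbrs
                    and random.random() < P_RECOLONISE):
                new[i][j] = HEALTHY          # recolonisation by neighbours
    return new

random.seed(1)
grid = [[HEALTHY] * 20 for _ in range(20)]
grid[10][10] = INFECTED   # seed the epidemic in the centre
for _ in range(15):
    grid = step(grid)
counts = {s: sum(row.count(s) for row in grid)
          for s in (EMPTY, HEALTHY, INFECTED)}
print(counts)
```

Because infection can only pass between adjacent territories, an infection seeded at one point spreads outward as a rough wavefront, which is the empirical pattern the model structure must allow to emerge.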

The idea underpinning this iterative POM approach is

“… if we decide to use a pattern for model construction because we believe this pattern contains information about essential structures and processes, we have to provide a model structure which in principle allows the pattern observed to emerge. Whether it does emerge depends on the hypotheses we have built into the model.”

This approach has been found particularly useful for the development of ‘bottom-up’ agent-based models. Often, understanding of the fine-scale processes driving broad-scale system dynamics and patterns is poor, making it difficult to both structure and parameterise mechanistic models. However, whilst the logical fallacy of affirming the consequent remains, if a model of low-level interactions is able to reproduce higher-level patterns, we can be confident that our model is a better representation of the system mechanics than one that doesn’t. Furthermore, the more patterns at different scales the model reproduces, the more confident we can be in it. Thus, in POM

“multiple patterns observed in real systems at different hierarchical levels and scales are used systematically to optimize model complexity and to reduce uncertainty.” Grimm et al. (2005)

Grimm and Berger outline the general protocol of a pattern-oriented modelling approach (whilst reminding us that there is no standard recipe for model development):

  1. Formulate the question or problem
  2. Assemble hypotheses about essential processes and structures
  3. Assemble (observed) patterns
  4. Choose state variables, parameters and structures
  5. Construct the model
  6. Analyse, test and revise the model
  7. Use patterns for parameterisation
  8. Search for independent predictions

Several iterations of this process will be required to refine the model. In initial iterations, steps 2 and 4 may need to be largely inferential if the state of knowledge about the system is poor. However, by moving iteratively back through these steps, and in particular exploiting steps 6 and 7 to inform us about model performance relative to system behaviour, we can improve our knowledge about the system whilst simultaneously ensuring our model recreates observed patterns. For example, during the development of the landscape fire-succession model in my PhD, I compared the landscape-level results produced by different sets of (unknown) flammability probabilities (parameters) for each vegetation type with empirically observed wildfire regime behaviour. By modifying parameters for individual vegetation types I was able to reproduce the wildfire frequency-area distribution previously found for Mediterranean-type environments (I’m currently writing this up for publication – more soon).
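Using patterns for parameterisation in this way amounts to a search over candidate parameter values, scoring each against an observed pattern. A stripped-down sketch follows; the ‘model’ (a geometric fire-size generator) and the observed statistic are stand-ins I’ve invented – a real study would match richer patterns such as the full frequency-area distribution:

```python
import random

def simulate_fire_sizes(flammability, n_fires=2000, rng=None):
    """Stand-in fire model: each fire keeps spreading (size += 1)
    with probability `flammability` per step, so mean fire size
    rises as flammability rises."""
    rng = rng or random.Random(0)   # same seed for every candidate
    sizes = []
    for _ in range(n_fires):
        size = 1
        while rng.random() < flammability:
            size += 1
        sizes.append(size)
    return sizes

def pattern_statistic(sizes):
    # The pattern we match on: mean fire size.
    return sum(sizes) / len(sizes)

observed_mean = 4.0  # hypothetical empirically observed pattern

# Keep the candidate parameter whose simulated pattern best matches
# the observed one (POM's "use patterns for parameterisation" step).
best = min(
    (p / 100 for p in range(1, 100)),
    key=lambda p: abs(pattern_statistic(simulate_fire_sizes(p)) - observed_mean),
)
print(f"best-fitting flammability: {best:.2f}")
```

Matching several independent patterns at once (rather than one statistic, as here) is what lets POM narrow the parameter space and build confidence in the model structure.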

But what does this all have to do with our model of the relationship between deer browse and timber harvest in Michigan’s Upper Peninsula? Well, right now I think we’re at steps 2, 3 and 4 (all at the same time). As our deer and land cover relationships are weak at the stand level (which is the level we are considering so that we can integrate the model with an economic module), I am currently developing hypotheses (i.e. assumptions) about the structure of the system from previous research on specific aspects of similar systems. Furthermore, we’re continuing to look for spatial patterns in both vegetation and deer distribution so that we have something against which to compare the results of our hypothesis-driven model.

For example, one thing I’m struggling with right now is how to establish the probability that individual trees (or saplings) will be removed from a stand due to a given level of deer browse (which in turn is dependent upon deer density). This is not something that has been explicitly studied (and would be very difficult to study at the landscape level), so we need to parameterise this process in order for the model to function. We should be able to do this by comparing several different parameterisations to empirically observed patterns, such as the spatial configuration of forest types classified by age class, or age/species distributions at the stand level. That’s the idea anyway – we’ll see how it goes over the next months…

In the meantime, next week I head back to the study area for the first stage of our seedling experiment. We’re planting seedlings now across a gradient of browse and site conditions with the intention of returning in the spring to see what has been browsed and count deer pellets. This should improve our understanding of the link between pellet counts and browse pressure and provide us with some more empirical patterns which we can use in our ongoing model development.

What’s your model?

In their feature Formulae for the 21st Century, Edge asks ‘What is your formula? Your equation?’ Scientists, philosophers, artists and writers have replied. Some gave their favourite, or what they thought to be the most important, formulas from their fields.

But many gave their models of the world. I think that’s why I like these so much – they’re models, simplifications, abstractions, essences of an aspect of life or thought. From Happiness (Danny Kahneman, Jonathan Haidt) and Creativity (Geoffrey Miller, Richard Foreman), through Cognition (Steven Pinker, Ernst Poppel), Economics (Matt Ridley), Society (Doug Rushkoff, John Horgan), Science (Richard Dawkins, Neil Shubin), Life (Alison Gopnik, Tor Nørretranders) and the Universe (Michael Shermer, Dimitar D. Sasselov) all the way (full circle maybe) to Metaphysics (Paul Bloom).

My favourites are the most simple – model parsimony, Occam’s Razor and all that.

This got me thinking about why I like quotes so much too – because they’re models. Take the essence of an idea and express it as elegantly as possible. That’s what scientists and mathematicians do, but equally it’s what writers and artists do. Take it far enough and, being a bit of a critical realist, I would say that all human perception is a model. But these elegant models are more useful than our sensory apparatus alone (which, along with our subconscious, does plenty of filtering already) – they observe whilst simultaneously interpreting and synthesising.

So what’s my model? I’m not sure – it would have to involve change. My personal models are continually changing, vacillating. Sometimes I believe time has an arrow, sometimes it doesn’t. Sometimes the world is equations and energy, sometimes it’s story and sentiment. Sometimes life is light, sometimes life is heavy. Even when my model is relatively stable it’s usually paradoxical (or should that be hypocritical?) and ironic. I’ll try to pare it down to its most parsimonious state and then find some words and symbols to express it elegantly. Then I’ll post it here. I can’t guarantee that will be any time soon, mind you…

In the meantime, what’s your model?

An Integrated Fire Research Framework

Integrated, multi- and inter-disciplinary studies are increasingly required to understand the consequences of human activity on the natural environment. In a recent paper, Sandra Lavorel and colleagues highlight the importance of considering the feedbacks and interactions between several systems when examining landscape vulnerabilities to fire. They present a framework for integrated fire research that considers the fire regime as the central subsystem (FR in the figure below), with two feedback loops: the first has consequences for atmospheric and biochemical systems (F1), and the second represents ecosystem services and human activity (F2). It is this second feedback loop in their framework on which my research focuses.


To adequately quantify the fire-related vulnerability of different regions of the world, the authors suggest that a better understanding of the relative contributions of climate, vegetation and human activity to the fire regime is required. For example, they suggest examining the statistical relationships between the spatio-temporal patterns evident in wildfire regimes and data on ecosystem structure, land use and other socio-economic factors. We made a very similar point in our PNAS paper, and hope to continue to use the exponent (Beta) of the power-law frequency-area relationship that is evident in many model and empirical wildfire regimes to examine these interactions. One statistical relationship that might be investigated is between Beta and the level of forest fragmentation, thought to be a factor confounding research on the effects of fire suppression on wildfire regimes.
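Estimating Beta from a list of observed (or simulated) fire sizes is straightforward with the standard continuous maximum-likelihood estimator. A sketch, checked against synthetic data drawn from a known power law (the cut-off value and sample sizes are arbitrary choices for the example):

```python
import math
import random

def power_law_exponent(areas, a_min=1.0):
    """Maximum-likelihood estimate of the exponent beta of a power-law
    frequency-area distribution p(a) ~ a^(-beta), for areas >= a_min:
    beta = 1 + n / sum(ln(a / a_min))."""
    tail = [a for a in areas if a >= a_min]
    return 1.0 + len(tail) / sum(math.log(a / a_min) for a in tail)

# Check on synthetic data with known beta = 1.5, generated by
# inverse-transform sampling: a = U^(-1/(beta-1)) for U ~ Uniform(0,1].
rng = random.Random(42)
beta_true = 1.5
areas = [(1.0 - rng.random()) ** (-1.0 / (beta_true - 1.0))
         for _ in range(5000)]
print(power_law_exponent(areas))  # should recover ~1.5
```

Comparing estimates of Beta between regions differing in, say, fragmentation level is then a simple statistical exercise, provided the power-law form actually holds over the size range used.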

But the effects of landscape fragmentation can also be examined in a more mechanistic fashion using dynamic simulation models. Lavorel et al. mention the impacts of agricultural abandonment on the connectivity of fuels in Mediterranean landscapes, which, in conjunction with a drier than average climate, are blamed for the exceptionally large fires that burned there during the 1990s. My PhD research examined the impacts of agricultural land abandonment on wildfire regimes in central Spain. I’m currently writing this work up for publication, but I plan to continue developing the model to more explicitly represent the F2 feedback loop shown in the figure above.

The potential socio-economic consequences of changing fire regimes are an area with a lot of room to improve our understanding. For example, some regions of the world, such as the Canadian boreal forest, are transitioning from a net sink for carbon to a net source (due to emissions during burning). If carbon sinks are considered in future emission trading systems, regions such as these are losing a potential future economic commodity. Lavorel et al. also discuss the interesting subject of potential land conflict due to mismatches in the time scales of land planning and fire occurrence. In Indonesia, for example, years in which large areas burn force the re-allocation of land development plans by local government. Often, however, the process of developing these plans is not fast enough to forestall the exploitation by local residents of the new land available for occupation and use.

The need for increased research in this area is highlighted by the case studies of Alaskan and African savannah ecosystems presented by Lavorel et al. Whilst the wildfire regime and atmospheric/biochemical feedbacks can be discussed in detail, poor understanding of the ecosystem services/human activity feedbacks prevents similarly detailed discussion.

The framework Lavorel et al. present is a very useful way to conceptualise and plan future research in this field. They suggest (p.47-48) that “Assessments of vulnerability of land systems to fire demand regional studies that use a systemic approach that focuses on the feedback loops described here” and “… will require engaging a collection of multiscale and interdisciplinary regional studies”. In many respects, I expect my future work to contribute to this framework, particularly with regard to the human activity (F2) feedback loop.

CHANS and the Risks of Modelling

In their recent review of Coupled Human and Natural Systems (CHANS), Liu et al. highlight several facets of the integrated study of these systems:

  • Reciprocal Effects and Feedback Loops
  • Nonlinearity and Thresholds
  • Surprises
  • Legacy Effects and Time Lags
  • Resilience

Whilst the emphasis of the paper is on the emergence of complex patterns and processes not evident when human and natural systems are studied independently by social or natural scientists, for me the issues that should be highlighted are the importance of surprises and legacy effects when studying these systems. This goes back to what I have written before about the open, middle-number nature of these systems. In these systems history matters, and events that occur outside the bounds of the system being studied can have an influence on system dynamics.

With this in mind, when I was recently asked where the risks lie in ecological-economic modelling (modelling that specifically considers the interactions of ecological and economic systems) I suggested we might consider three areas of risk:

  1. The production of an integrated model that is not accepted or valued by those we hope would accept or value it (whether that be other scientists, decision-makers or members of the society we are modelling). For example, a model that lies somewhere between ecology and economics, and/or between science and management, has the potential to be accepted by neither party in these dichotomies (as it is not perceived by others to be ‘real ecology’ or ‘real science’, for example). However, this can be avoided by ensuring continued collaboration between economists and ecologists, and between scientists and managers, throughout the modelling process to ensure understanding of model structure.
  2. The production of a model that is not fully integrated but is rather an ecological model used to examine various economic scenarios. In this case, the study remains integrated (examining the interactions between economic and ecological systems) but the model is not (as feedbacks from the ecological system into the economic system, for example in terms of prices and costs, are not fully accounted for). Alternatively, if the modelling process is understood to be iterative, then this initial reduced version of the model may simply be a single step in the complete ecological-economic modelling process.
  3. Because of legacy effects, surprises, etc., a misplaced confidence in what the model can accurately predict may arise. This is also related to the limited capacity to validate models of complex ecological systems given limited empirical data. Again, this may be prevented by continued collaboration between scientist and manager to ensure the structure and limitations of a model are understood, and by presenting a range of model results for different scenarios (in order to demonstrate the variability in potential outcomes).

The study of CHANS will become increasingly important in the future. But if political decisions are to be made based on the outcome of the knowledge gained, the risks present in the study (and specifically the modelling) of these systems must be minimized and accounted for.

The Tyranny of Power?

The past week or two I’ve been wrestling with the data we have on white-tailed deer density and vegetation in Michigan’s Upper Peninsula in an attempt to find some solid statistical relationships that we might use in our ecological-economic simulation model. However, I seem to be encountering similar issues to previous researchers: notably (as Weisberg and Bugmann put it) “the weak signal-to-noise ratio that is characteristic of ungulate-vegetation systems”, that “multiple factors need to be considered, if we are to develop a useful, predictive understanding of ungulate-vegetation relationships”, and that “ungulate-vegetation interactions need to be better understood over multiple scales”.

Hobbs suggests that one of the problems slowing species distribution research is a preoccupation with statistical power that he calls “the tyranny of power”. This tyranny arises, he suggests, because traditional statistical methods that are powerful at smaller scales become less useful at larger extents. There are at least three reasons for this:

  1. small things are more amenable to study by traditional methods than large things
  2. variability increases with scale (extent)
  3. potential for bias increases with scale (extent)

“The implication of the tyranny of power is that many of the traditionally sanctioned techniques for ecological investigation are simply not appropriate at large-scales… This means that inferences at large-scales are likely to require research designs that bear little resemblance to the approaches many of us learned in graduate school.” Hobbs p.230

However, this tyranny may simply arise because, as Fortin and Dale point out, “most study areas contain more than one ecological process that can act at different spatial and temporal scales”. That is, the processes are non-stationary in time and space. Leaving time aside for now, spatial non-stationarity has already been found in our study area with regard to the processes we’re considering. For example, Shi and colleagues found that Geographically Weighted Regression (GWR) models are better at predicting white-tailed deer densities across the entirety of our study area than an ordinary least-squares regression model.

Hobbs’ argument suggests that it’s often useful to analyse ecological data from large regions by partitioning them into smaller, more spatially homogeneous areas. The idea is that these smaller patches are more likely to be governed by the same ecological processes. But how should these smaller regions be selected? A commonly used geographical division is the ecoregion. Ecoregions divide land into areas of similar characteristics such as climate, soils, vegetation and topography. For our study area we’ve found that relationships between deer densities and predictor variables do indeed vary between Albert’s ecoregions. But we think that there might be more useful ways to divide our study area that take into account variables commonly believed to strongly influence spatial deer distributions. In Michigan’s UP the prime example is the large snowfall received each winter, which hinders deer movement and foraging.
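For the curious, the core of GWR is just weighted least squares re-fitted at every location, with weights that decay with distance so that coefficients can vary across space. A minimal sketch on synthetic data (the covariate, bandwidth and spatial pattern are invented for illustration – this is not Shi et al.’s model):

```python
import numpy as np

def gwr_coefficients(coords, x, y, bandwidth):
    """Geographically weighted regression: fit a separate weighted
    least-squares model (intercept + one covariate) at each location,
    with Gaussian kernel weights that decay with distance."""
    coords = np.asarray(coords, dtype=float)
    X = np.column_stack([np.ones(len(x)), np.asarray(x, dtype=float)])
    y = np.asarray(y, dtype=float)
    betas = []
    for pt in coords:
        d2 = ((coords - pt) ** 2).sum(axis=1)        # squared distances
        w = np.exp(-d2 / (2 * bandwidth ** 2))       # Gaussian kernel
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        betas.append(beta)
    return np.array(betas)

# Toy data with a spatially varying slope: "deer density" responds to a
# habitat covariate more strongly in the east than in the west.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
habitat = rng.normal(size=200)
slope = 0.2 + 0.3 * coords[:, 0]                     # grows west -> east
deer = 1.0 + slope * habitat + rng.normal(scale=0.1, size=200)

betas = gwr_coefficients(coords, habitat, deer, bandwidth=1.5)
# Local slope estimates should track the true spatially varying slope.
print(np.corrcoef(betas[:, 1], slope)[0, 1])
```

A single global OLS fit would return one averaged slope for the whole region; the map of local coefficients is what makes GWR useful for delineating areas governed by different processes.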

We’re beginning to examine how GWR and spatial boundary analysis might be used to delineate these areas (at different scales) in the hope of refining our understanding about the interaction of deer and vegetation across our large (400,000 ha) landscape. In turn we should be able to better quantify some of these relationships for use in our model.

Modeling Disturbance Spatially using the FVS

We plan to use the Forest Vegetation Simulator (FVS), developed by the USFS over the past couple of decades, in our ecological-economic model of a managed forest landscape. This week I’ve been thinking a lot about how best to link a representation of white-tailed deer browse with the FVS.

Two good examples I’ve found of the modelling of forest disturbance using FVS are the Fire and Fuels Extension (FFE) developed at the USFS Rocky Mountain Research Station in collaboration with other parties, and the Westwide Pine Beetle Model developed by the Forest Health Technology Enterprise Team (FHTET).

The Fire and Fuels Extension to the Forest Vegetation Simulator (FFE-FVS) links the existing FVS with models representing fire and fire effects, including fuel dynamics and crowning submodels. The overall model is currently calibrated for northern Idaho, western Montana, and northeastern Washington. More details on the FFE-FVS can be found here, where you can also download a video about the extension:


The Westwide Pine Beetle Model simulates impacts of mountain pine beetle (Dendroctonus ponderosae Hopkins), western pine beetle (D. brevicomis LeConte), and Ips species for which western pines are a host. The model simulates the movement of beetles between forest stands in the landscape, using the Parallel Processing Extension (PPE) to represent multiple forest stands in FVS.

A recent paper by Ager and colleagues in Landscape and Urban Planning presents work that links both the FFE and the WPBM to FVS using the PPE:

We simulated management scenarios with and without thinning over 60 years, coupled with a mountain pine beetle outbreak (at 30 years) to examine how thinning might affect bark beetle impacts, potential fire behavior, and their interactions on a 16,000-ha landscape in northeastern Oregon. We employed the Forest Vegetation Simulator, along with sub-models including the Parallel Processing Extension, Fire and Fuels Extension, and Westwide Pine Beetle Model (WPBM). We also compared responses to treatment scenarios of two bark beetle-caused tree mortality susceptibility rating systems. As hypothesized, thinning treatments led to substantial reduction in potential wildfire severity over time. However, contrary to expectations, the WPBM predicted higher bark beetle-caused mortality from an outbreak in thinned versus unthinned scenarios. Likewise, susceptibility ratings were also higher for thinned stands. Thinning treatments favored retention of early seral species such as ponderosa pine, leading to increases in proportion and average diameter of host trees. Increased surface fuel loadings and incidence of potential crown fire behavior were predicted post-outbreak; however, these effects on potential wildfire behavior were minor relative to effects of thinning. We discuss apparent inconsistencies between simulation outputs and literature, and identify improvements needed in the modeling framework to better address bark beetle-wildfire interactions.

Whilst I’m still in the early stages of working out how our model will fit together, it seems like a similar approach will be suitable for our purposes. We’ll need to develop a model that is able to represent the spatial distribution of the deer population across the landscape and that can specify the impact of those deer densities on the vegetation for given age-height classes (for each vegetation species). This model would likely then be linked with FVS via the PPE. So, concurrently over the next few months, I’m going to be working on developing a model of deer density and browse impacts, coding this model into a structure that will link with FVS-PPE, and acquiring and developing data for model initialization.

Reference
Ager, A.A., McMahan, A., Hayes, J.L. and Smith, E.L. (2007) Modeling the effects of thinning on bark beetle impacts and wildfire potential in the Blue Mountains of eastern Oregon. Landscape and Urban Planning 80(3): 301-311.