Social Network Analysis

As I mentioned in a tweet earlier this week, Prof. Ken Frank was ‘visiting’ CSIS this week. Ken studies organizational change and innovation using, amongst other methods, Social Network Analysis (SNA). SNA examines how the structure of ties between people affects individuals’ behaviour, how social network structure and composition influence the social norms of a group, and how resources (information, for example) flow through a social network. This week Ken organised a couple of seminars on the use of SNA to investigate natural resource decision-making (for example, in small-scale fisheries) and I joined a workshop he ran on how we actually go about doing SNA, learning about software like p2 and KliqueFinder. Ken showed us the two main models: the selection model and the influence model. The former addresses network formation, examining individuals’ networks and how they choose them. The latter examines how individuals are influenced by the people in their network and the consequences for their behaviour. As an example of how SNA might be used, take a look at this executive summary [pdf] of the thesis of a recent graduate student from MSU Fisheries and Wildlife.

On Friday, after having been introduced through the week to what SNA is, I got to chat with Ken about how it might relate to the agricultural decision-making modelling I did during my PhD. In my agent-based model I used a spatial neighbourhood rule to represent the influence of social norms (i.e. whether a farmer is ‘traditional’ or ‘commercial’ in my categories). However, the social network of farmers is not solely determined by spatial relationships – farmers have kinship ties and might meet other individuals at the market or in the local cerveceria. We discussed how I might be able to use SNA to better represent the influences of other farmers on an individual’s decision-making in my model. I don’t have the network data needed to do this right now but it’s something to think about for the future.

If I’d been more aware of SNA previously I might have incorporated some discussion of it into the book chapter I re-wrote recently for Environmental Modelling. In that chapter I focused on the increasing importance of behavioural economics for investigating and modelling the relationships between human activity and the environment. SNA is certainly something to add to the toolbox and seems to be on the rise in natural resources research. Something else I missed whilst re-writing that chapter was the importance of behavioural economics to David Cameron‘s ‘Big Society’ idea. He seems to be aware of the lessons we’ve started learning from things like social network analysis and behavioural economics – now that he’s in charge maybe we’ll start seeing some direct application of those lessons to UK public policy.

Bayesian Modelling in Biogeography

Recently I was asked to write a review of the current state-of-the-art of model selection and Bayesian approaches to modelling in biogeography for the Geography Compass journal. The intended audience for the paper is interested but non-expert, and the paper will “…summarize important research developments in a scholarly way but for a non-specialist audience”. With this in mind, the structure I expect to aim for looks something like this:

i) Introduction to the general issue of model inference (i.e., “What is the best model to use?”). This section will likely discuss the modelling philosophy espoused by Burnham and Anderson and also highlight some of the criticisms of null-hypothesis testing using p-values. Then I might lead into possible alternatives (to standard p-value testing) such as:

ii) AIC approaches (to find the ‘best approximating model’)

iii) Bayesian approaches (including Bayesian Model Averaging, as I’ve discussed on this blog previously)

iv) Some applied examples (including my deer density modelling for example)

v) A brief summary

I also expect I will try to offer some practical hints and tips, possibly using boxes with example R code (maybe for the examples in iv). Other published resources I’ll draw on will likely include the excellent books by Ben Bolker and Michael McCarthy. As things progress I may post more, and I’ll be sure to post again when the paper is available to read in full.
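To give a flavour of the sort of R box I have in mind, here’s a minimal sketch of the AIC comparison and Akaike weights idea from Burnham and Anderson (on simulated data, so the particular numbers mean nothing in themselves):

    # compare two candidate models with AIC and Akaike weights
    set.seed(1)
    x1 <- rnorm(100); x2 <- rnorm(100)
    y <- 2 + 1.5 * x1 + rnorm(100)        # x2 has no real effect on y

    m1 <- lm(y ~ x1)                      # candidate model 1
    m2 <- lm(y ~ x1 + x2)                 # candidate model 2

    aics <- c(m1 = AIC(m1), m2 = AIC(m2))
    delta <- aics - min(aics)             # AIC differences
    w <- exp(-delta / 2) / sum(exp(-delta / 2))   # Akaike weights
    round(cbind(AIC = aics, delta = delta, weight = w), 3)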

Bird Occupancy Modelling

Birds have been given short shrift in my blog posts about the Michigan UP ecological-economic modelling project. It’s not that we have forgotten about them, it’s just that before we got to incorporating them into our modelling there were other things to deal with first. Now that we’ve made progress on modelling deer distribution it’s time to turn our attention to how we can represent the potential impacts of forest management on bird habitat so that we might better understand the tradeoffs that will need to be negotiated to achieve both economic and ecological sustainability.

Ovenbird (Seiurus aurocapillus)

One of the things we want to do is link our bird-vegetation modelling with Laila Racevskis‘ assessment of the economic value of bird species from her PhD research. Laila assessed local residents’ willingness-to-pay for ensuring the conservation of several bird species of concern in our study area. If we can use our model to examine the effects of different timber management plans (each yielding different timber volumes) on the number of bird species present in an area we can use Laila’s data to examine the economic tradeoffs between different management approaches. The first thing we need to do to achieve this is to be able to estimate how many bird species would be present in a given forest stand.

Right now the plan is to estimate the presence of songbird species of concern in forest stands by using the data Ed Laurent collected during his PhD research at MSU. To this end I’ve been doing some reading on the latest occupancy modelling approaches and reviewing the literature on their application to birds in managed forests. Probably the most popular current approach was developed recently by Darryl Mackenzie and colleagues – it allows the estimation of whether or not a site is occupied by a given species when we know that our detection is imperfect (i.e. when we know we have false negative observations in our bird presence data). The publication of some nice overviews of this approach (e.g. Mackenzie 2006) plus the development of software to perform the analyses are likely to be at the root of this popularity.

The basic idea of the approach is that if we are able to make multiple observations at a site (and if we assume that bird populations and habitat do not change between these observations) we can use the probability of each bird observation history at a site, across all the sites, to form a model likelihood. This likelihood can then be used to estimate the parameters using any likelihood-based estimation procedure. Covariates can be used to model both the probability of occupancy and the probability of detection (i.e. we can account for factors that may have hindered bird observation such as wind strength or the time of day). I won’t go into further detail here because there’s an excellent online book that will lead you through the modelling process, and you can download the software and try it yourself.
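To make the idea concrete, here’s a minimal sketch in R of the simplest version of the model (constant occupancy and detection probabilities, no covariates), with the likelihood written out by hand and maximised with optim(); the site and visit numbers and probabilities below are invented for illustration:

    # simulate detection histories at 100 sites visited 4 times each
    set.seed(42)
    n_sites <- 100; n_visits <- 4
    psi_true <- 0.6                       # true occupancy probability
    p_true <- 0.4                         # true detection probability
    z <- rbinom(n_sites, 1, psi_true)     # true (unobserved) occupancy state
    y <- matrix(rbinom(n_sites * n_visits, 1, p_true), n_sites) * z

    # negative log-likelihood: a site never detected may be unoccupied,
    # or occupied but missed on every visit (the false negatives)
    nll <- function(par) {
      psi <- plogis(par[1]); p <- plogis(par[2])
      d <- rowSums(y)                     # detections per site
      like <- ifelse(d > 0,
                     psi * p^d * (1 - p)^(n_visits - d),
                     psi * (1 - p)^n_visits + (1 - psi))
      -sum(log(like))
    }

    fit <- optim(c(0, 0), nll)
    plogis(fit$par)                       # estimates of psi and p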

Two recent papers have used this approach to investigate bird species presence given different forest conditions. DeWan et al. 2009 used Mackenzie’s occupancy modelling approach to examine impacts of urbanization on forest birds in New York State (they do a good job of explaining how they apply Mackenzie’s approach to their data and study area). DeWan et al. considered landscape variables such as perimeter-area ratios of habitat patches and proximity to urban areas to create occupancy models for 9 bird species at ~100 sites. They found that accounting for imperfect bird detection was important and that habitat patch “perimeter-area ratio had the most consistent influence on both detection probability and occupancy” (p989).

In a slightly different approach, Smith et al. 2008 estimated site occupancy of the black-throated blue warbler (Dendroica caerulescens) and ovenbird (Seiurus aurocapillus) in 20 northern hardwood-conifer forest stands in Vermont. At each bird observation site they had also collected stand structure variables including basal area, understory density and tree diameters (in contrast to DeWan et al. who only considered landscape-level variables). Smith et al. write that their results “demonstrate that stand-level forest structure can be used to predict the occurrence of forest songbirds in northern hardwood-conifer forests” (p43) and “suggest that the role of stand-level vegetation may have been underestimated in the past” (p36).

Our approach will take the best aspects of both these studies: the large sample size of DeWan et al. combined with the consideration of stand-level variables as in Smith et al. More on this again soon I expect.

‘Mind, the Gap’ paper in press

I hoped it would be quicker than previous papers, but the review process of the ‘Mind, the Gap’ manuscript I worked on with John Wainwright hasn’t been particularly fast. I guess that’s just how it goes with special issues. I’ll discuss some of the topics we touch on in the paper in a future post. For now here’s the abstract – look out for the full paper on the ESPL website in the next couple of months.

Mind, the Gap in Landscape-Evolution Modelling
John Wainwright and James Millington
Earth Surface Processes and Landforms (Forthcoming)

Abstract
Despite an increasing recognition that human activity is currently the dominant force modifying geomorphic landscapes, and that this activity has been increasing through the Holocene, there has been little integrative work to evaluate human interactions with geomorphic processes. We argue that agent-based models (ABMs) are a useful tool for overcoming the limitations of existing, highly empirical approaches. In particular, they allow the integration of decision-making into process-based models and provide a heuristic way of evaluating the compatibility of knowledge gained from a wide range of sources, both within and outwith the discipline of geomorphology. The application of ABMs to geomorphology is demonstrated from two different perspectives. The SPASIMv1 (Special Protection Area SIMulator version 1) model is used to evaluate the potential impacts of land-use change – particularly in relation to wildfire and subsequent soil conditions – over a decadal timescale from the present day to the mid-21st century. It focuses on the representation of farmers with traditional versus commercial perspectives in central Spain, and highlights the importance of land-tenure structure and historical contingencies of individuals’ decision making. CYBEROSION, on the other hand, considers changes in erosion and deposition over the scale of at least centuries. It represents both wild and domesticated animals and humans as model agents, and investigates the interactions of them in the context of early agriculturalists in southern France in a prehistoric context. We evaluate the advantages and disadvantages of the ABM approach, and consider some of the major challenges. These challenges include potential process scale mis-matches, differences in perspective between investigators from different disciplines, and issues regarding model evaluation, analysis and interpretation. If the challenges can be overcome, this fully-integrated approach will provide geomorphology a means to conceptualize soundly the study of human-landscape interactions.

Holiday Publications!

Update January 2010: This paper is now online with doi 10.1016/j.foreco.2009.12.020.

I received some good news this morning as I prepared to head back to the UK for the holidays. The paper I started writing back in January examining the white-tailed deer distribution in our managed forest landscape (the analysis for which inspired posts on Bayesian and ensemble modelling) has been accepted for publication and is ‘In Press’! I’ve copied the abstract below.

Another piece of publication news I received a while back is that the paper I co-authored with Raul Romero-Calcerrada and others, modelling socioeconomic data to understand patterns of human-caused wildfire ignition risk, has now officially been published in Ecological Modelling.

Happy Holidays everyone!

Effects of local and regional landscape characteristics on wildlife distribution across managed forests (In Press). Millington, Walters, Matonis, and Liu. Forest Ecology and Management.

Abstract
Understanding impacts of local and regional landscape characteristics on spatial distributions of wildlife species is vital for achieving ecological and economic sustainability of forested landscapes. This understanding is important because wildlife species such as white-tailed deer (Odocoileus virginianus) have the potential to affect forest dynamics differently across space. Here, we quantify the effects of local and regional landscape characteristics on the spatial distribution of white-tailed deer, produce maps of estimated deer density using these quantified relationships, provide measures of uncertainty for these maps to aid interpretation, and show how this information can be used to guide co-management of deer and forests. Specifically, we use ordinary least squares and Bayesian regression methods to model the spatial distribution of white-tailed deer in northern hardwood stands during the winter in the managed hardwood-conifer forests of the central Upper Peninsula of Michigan, USA. Our results show that deer density is higher nearer lowland conifer stands and in areas where northern hardwood trees have small mean diameter-at-breast-height. Other factors related with deer density include mean northern hardwood basal area (negative relationship), proportion of lowland conifer forest cover (positive relationship), and mean daily snow depth (negative relationship). The modeling methods we present provide a means to identify locations in forest landscapes where wildlife and forest managers may most effectively co-ordinate their actions.

Keywords: wildlife distribution; landscape characteristics; managed forest; ungulate herbivory; northern hardwood; lowland conifer; white-tailed deer

Putting decision-making in context

A while back I wrote about how it takes all sorts to make a world and why we need to account for those different sorts in our models of it. One of the things that I highlighted in that post was the need for mainstream economics to acknowledge and use more of the findings from behavioural economists.

One of the examples I used in the draft of the book chapter I have been writing for the second edition of Wainwright and Mulligan’s Environmental Modelling was the paper by Tversky and Kahneman, The Framing of Decisions and the Psychology of Choice. They showed how the way in which a problem is framed can influence human decision-making and causes problems for rational choice theory. In one experiment Tversky and Kahneman asked people if they would buy a $10 ticket on arriving at the theatre when finding themselves in two different situations:

i) they find they have lost $10 on the way to the theatre,
ii) they find they have lost their pre-paid $10 ticket.

In both situations the person has lost the value of the ticket ($10) and under neoclassical economic assumptions should behave the same way in both when deciding whether to buy a ticket on arriving at the theatre. However, Tversky and Kahneman found that people were more likely to buy a ticket in the first situation (88%) than to buy a (replacement) ticket in the second (46%). They suggest this behaviour is due to human ‘psychological accounting’, in which we mentally allocate resources to different purposes. In this case people are less willing to spend money again on something they have already allocated to their ‘entertainment account’ than if they have lost money which they allocate to their ‘general expenses account’.

More recently, Galinsky and colleagues examined how someone else’s irrational thought processes can influence our own decision-making. In their study they asked college students to take over decision-making for a fictitious person they had never met (the students were unaware the person was fictitious).

In one experiment, the volunteers watched the following scenario play out via text on a computer screen: the fictitious decision-maker tried to outbid another person for a prize of 356 points, which equaled $4.45 in real money. The decision-maker started out with 360 points, and every time the other bidder upped the ante by 40 points, the decision-maker followed suit. Volunteers were told that once the decision-maker bid over 356 points, he or she would begin to lose some of the $12 payment for participating in the study.

When the fictitious decision-maker neared this threshold, the volunteers were asked to take over bidding. Objectively, the volunteers should have realized that – like the person who makes a bad investment in a ‘fixer-upper’ – the decision-maker would keep throwing good money after bad. But the volunteers who felt an identification with the fictitious player (i.e., those told by the researchers that they shared the same month of birth or year in school) made almost 60% more bids and were more likely to lose money than those who didn’t feel a connection.

Are we really surprised that neoclassical economic models often fall down? Accounting for seemingly irrational human behaviour may make the representation of human decision-making more difficult, but increasingly it seems irrational not to do so.

Initial Michigan Forest Simulation Output

It’s taken a while but finally the model that I came to Michigan State to develop is producing what seems to be sensible output. Just recently we’ve brought all the analyses on the data that were collected in the field into a coherent whole. We’ll use this integrated model to investigate the best approaches for forest and wildlife management to ensure ecological and economic sustainability. This post is a quick overview of what we’ve got at the moment and where we might take it. The image below provides a simplified view of the relationships between the primary components the model considers (a more detailed diagram is here).


The main model components I’ve been working on are the deer distribution, forest gap regeneration, and tree growth and harvest sub-models. Right now we’re still in the model testing and verification stage but soon we hope to be able to start putting it to use. Here’s a flow chart representing the current sequence of model execution (click for larger image):


As I’ve posted several times about the deer distribution modelling (here, here, and here for example) and because the integration of FVS with our analyses is more a technical than scientific issue, I’ll focus on the forest gap regeneration sub-model.

Most of the forest gap regeneration analyses used the data Megan Matonis collected during her two summers in the field (i.e., forest). During her fieldwork Megan measured gap and tree regeneration attributes such as gap size, soil and moisture regime, time since harvest, deer density, and sapling heights, density and species composition. Megan is writing up her thesis right now but we’ve also managed to find time to do some extra analyses on her data for the gap regeneration sub-model. Here’s the flow chart representing the model sequence to estimate initial regeneration in gaps created by a selection harvest in a forest stand (click for larger image):


In our gap regeneration sub-model we take a probabilistic approach to estimate the number and species of the first trees to reach 7m (this is the height at which we pass the trees to FVS to grow). The interesting equations for this are Eqs. 6–9, as they are responsible for estimating regeneration stocking (i.e. the number of trees that regenerate) and the species composition of the regenerating trees. Through time the results of these equations will drive future forest composition and structure and the amount of standing timber available for harvest.

The probability that any trees regenerate in a gap is modelled using a generalized linear mixed model with a stand-level random intercept drawn from a normal distribution. The probability is a function of canopy gap area and deer browse category (high or low; calculated as a function of deer density in the stand).

If there are some regenerating trees in the gap, we use a logistic regression to calculate the probability that the gap contains as many (or more) trees as could fit in the gap when all the trees are 7m (and is therefore ‘fully stocked’). The probability is a function of canopy openness (calculated as a function of canopy gap area), soil moisture and nutrient conditions, and deer density. If the gap is not fully stocked we sample the number of trees from a uniform distribution.

Finally, we assign each tree to a species by estimating the relative species composition of the gap. We do this by assuming there are four possible species mixes (derived from our empirical data) and we use a logistic regression to calculate the probability that the gap has each of these four mixes. The probability of each mix is a function of soil moisture and nutrient conditions, canopy gap area, and stand-level basal area of Sugar Maple and Ironwood. Currently we have parameterised the model to represent five species (Sugar Maple, Red Maple, White Ash, Black Cherry and Ironwood).
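To illustrate how these three steps chain together, here’s a simplified R sketch of the regeneration routine for a single gap. The coefficients, browse effect and species mixes below are placeholders for illustration (and I show only two of the four mixes), not our fitted values:

    # estimate regeneration in one canopy gap -- illustrative sketch only
    set.seed(1)
    regenerate_gap <- function(gap_area, high_browse, max_trees,
                               mix_probs, species_mixes) {
      # 1. any regeneration? logit with a stand-level random intercept
      #    (drawn per call here for simplicity)
      b_stand <- rnorm(1, 0, 0.5)
      p_any <- plogis(-1 + 0.01 * gap_area - 1.2 * high_browse + b_stand)
      if (runif(1) > p_any) return(NULL)

      # 2. fully stocked? if not, draw the tree count from a uniform
      p_full <- plogis(-0.5 + 0.008 * gap_area)
      n_trees <- if (runif(1) < p_full) max_trees else sample(1:(max_trees - 1), 1)

      # 3. pick one of the empirical species mixes, then assign trees
      mix <- species_mixes[[sample(length(species_mixes), 1, prob = mix_probs)]]
      sample(names(mix), n_trees, replace = TRUE, prob = mix)
    }

    mixes <- list(c(SugarMaple = 0.8, Ironwood = 0.2),
                  c(SugarMaple = 0.4, RedMaple = 0.3, WhiteAsh = 0.3))
    regenerate_gap(gap_area = 120, high_browse = 0, max_trees = 10,
                   mix_probs = c(0.6, 0.4), species_mixes = mixes)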

As the flow chart suggests, there is a little more to it than these three equations alone but hopefully this gives you a general idea about how we’ve approached this and what the important variables are (look out for publications in the future with all the gory details). For example, at subsequent time-steps in the simulation model we grow the regenerating trees until they reach 7m and also represent the coalescence of the canopy gaps. I haven’t integrated the economic sub-model into the program yet but that’s the next step.

So what can we use the model for? One question we might use the model to address is, ‘how does change in the deer population influence northern hardwood regeneration, timber revenue and deer hunting value?’ For example, in one set of initial model runs I varied the deer population to test how it affects regeneration success (defined as the number of trees that regenerate as a percentage of the maximum possible). Here’s a plot that shows how regeneration success decreases with increasing deer population (as we would expect given the model structure):


Because we are linking the ecological sub-models with economic analyses we can look at how these differences will play out through time to examine potential tradeoffs between ecological and economic values. For example, because we know (from our analyses) how the spatial arrangement of forest characteristics influences deer distribution, we can estimate how different forest management approaches in different locations influence regeneration through time. The idea is that if we can reduce deer numbers in a given area immediately after timber harvest we can give trees a chance to survive and grow above the reach of deer – moving deer spatially does not necessarily mean reducing the total population (which would reduce hunting opportunities, an important part of the local economy). The outcomes may look something like this:


We plan to use our model to examine scenarios like this quantitatively. But first, I need to finish testing the model…

Challenges for Ecological Modelling

Pressing contemporary ecological issues emphasise questions about how we should go about modelling ecological systems. In their preface to the latest volume of Ecological Modelling, Solidoro et al. suggest three main challenges for modellers with regards to applied environmental problems:

“A first challenge is to meet the legitimate expectations of the scientific community and society, providing solid expertise, reliable tools and critical interpretation of model results. Many questions need an answer here and now, and sometime[s] there is no point in saying ‘there are not enough data, information, knowledge’. To ask for more time, or to declare that no rigorous scientific conclusion can be drawn, will simply made those people needing an answer turn and look for someone else – qualified or not – willing to provide a suggestion. We have to be rigorous, to remind of limits and approximations implicit in any model and of uncertainties (and errors) implicit in any prediction. Nevertheless, if a model has to be made and/or used, ‘who if not us’, and ‘when if not now?’

A second challenge is neither generating false expectations, by promising what cannot be achieved, nor permitting others to do that, or to put such expectations on modelling. Within a society which regards magicians more than scientists, sometimes it might seem a good idea to wear a magician hat. However, modellers are not magicians, and models are not crystal bowls. And, once lost, it would be very hard to gain scientific credibility again.

A third point to remember is that the goal is knowledge, and models are only instruments. Even if its role in science is more central than in the past, ecological modelling should keep on staying open to contamination and to interbreeding with other scientific fields. Obviously, this includes confrontation with data and with the knowledge of people who collect them. Surely, it is true that reality is not the data but what data stand for, however experimental observations still remain the only link between theory and reality.”

The first point above is largely consistent with those I highlighted in my recent book review for Landscape Ecology (now in print); when data and understanding are sparse, modellers may just need to scale back their modelling aims and objectives. When faced with pressing environmental issues we may need to settle for models that work – models that we can use to help make decisions rather than those that ‘prove’ (quantitatively) specific aspects of system function or ecological theory. In such a situation it may well be the case that ‘no rigorous scientific conclusion’ can be made in the short-term (when decisions are required) and, as the second point above implies, we shouldn’t try to disguise that. But that doesn’t mean people ‘needing an answer’ should be forced to look elsewhere (unless of course the answer they are looking for is 42).

Rather than focusing on the scientific results (numbers) of the model as a product, modellers in this situation might seek to capitalise on the process of modelling as a means to facilitate consensus-building and decision-making by providing a platform for communication about (potentially complex) system interactions. Alternatively, they may use a model to foster better understanding about potential outcomes by examining how modelled systems behave qualitatively under different scenarios. Accurate quantitative predictions can be very persuasive, but when resources are in short supply we may not have the luxury of being able to produce them.

Solidoro et al. (2009) Challenges for ecological modelling in a changing world: global changes, sustainability and ecosystem based management. Ecological Modelling 220(21): 2825-2827. doi:10.1016/j.ecolmodel.2009.08.018

Synthetic Trees

When testing and using simulation models we often need to use synthetic data. This might be because we want to examine the effects of different initial conditions on our model output, or simply because we have insufficient data to examine a system at the scale we would like to. The ecological-economic modelling project I’m currently working on is in both these situations, and over the last week or two I’ve been working on generating synthetic tree-level data so that we can initialize our model of forest stand change for testing and scenario development. Here’s a brief overview of how I’ve approached the task of producing a ‘treelist generator’ from the empirical data we have for over 60,000 trees in Northern Hardwood stands across Upper Michigan.

One of the key measures we can use to characterise forest stands is basal area (BA). For each stand we generate a treelist for, we can assume there is some ‘target BA’ that we are aiming to produce. As well as hitting a target BA, we also need to make sure that the tree diameter-at-breast-height (DBH) size-class distribution and species composition are representative of the stands in our empirical data. Therefore, our first step is to look at the diameter size-class distribution of the stands we want to emulate. We can do this by plotting histograms of the frequency of trees of different diameters for each stand. In the empirical data we see two characteristic distributions (Fig 1).


Fig 1. Example stand tree count histograms

The distribution on the left has many more trees in the smaller size classes as a result of stands self-thinning (as larger trees compete for finite resources). The second distribution, in which the smallest size classes are under-represented and larger size classes have relatively more trees, does not fit so well with the theoretical, self-thinning DBH size-class distribution. Stands with a distribution like this have probably been influenced by other factors (for example, deer browse on the smaller trees). However, it turns out that both these DBH size-class distributions can be pretty well described by the gamma probability distribution (Fig 2).


Fig 2. Example stand gamma probability distributions for data in Fig 1

The gamma distribution has two parameters, a shape parameter we will call alpha and a scale parameter we will call beta. Interestingly, in the stands I examined (dominated by Sugar Maple and Ironwood) there are two different linear relationships between the parameters. The relationship between alpha and beta for 80% of stands represents the ‘self-thinning’ distribution, and the other 20% represent distributions in which small DBH classes are under-represented. We use these relationships – along with the fact that the range of values of alpha for all stands has a log-normal distribution – to generate characteristic DBH size-class distributions:

  1. sample a value of log10(alpha) from a normal distribution (subsequently back-transforming using 10^alpha),
  2. for the two different relationships, use Bayesian linear regression to find the mean and 95% credible intervals for the slope and intercept of a regression line between alpha and beta,
  3. use the value of alpha with the regression parameters to produce a value of beta (see the sketch below).
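As a rough illustration of these three steps in R (with placeholder means, spreads and regression coefficients rather than our fitted Bayesian estimates):

    # generate gamma parameters for one synthetic stand -- placeholder numbers
    set.seed(7)
    log10_alpha <- rnorm(1, mean = 0.2, sd = 0.15)  # step 1: sample log10(alpha)
    alpha <- 10^log10_alpha                         # back-transform

    # step 2: which alpha-beta relationship applies? (80% of stands self-thin)
    if (runif(1) < 0.8) {
      slope <- rnorm(1, -0.9, 0.05); intercept <- rnorm(1, 2.5, 0.10)
    } else {
      slope <- rnorm(1, -0.4, 0.05); intercept <- rnorm(1, 1.2, 0.10)
    }

    # step 3: produce beta, guarding against a non-positive scale parameter
    beta <- max(intercept + slope * alpha, 0.1)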

So now for each stand we have a target basal area, and parameters for the DBH size class distribution. The next step is to add trees to the stand with diameters specified by the probability distribution. Each time we add a tree, basal area is added to the stand. The basal area for a tree is calculated by:

TreeBA = TreeDensity * (0.005454 * diameter²)

[Tree density can be calculated for each tree because we know the sampling strategy used to collect the empirical data on our timber cruise, whether on a fixed-area plot, n-tree or with a prism; the constant 0.005454 converts DBH in inches to basal area in square feet.]

Once we get within 1% of our target BA we stop adding trees to the stand [we’ll satisfy ourselves with 1% accuracy because the size of tree that we allocate each time is sampled from a probability distribution, so it is unlikely we will be able to hit our target exactly]. The trees in our (synthetic) stand should now (theoretically) have the appropriate DBH size-class distribution and basal area.
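In code, the tree-adding loop looks something like this (a sketch only: tree density is simplified to a constant of one, and DBH is in inches so the 0.005454 constant gives basal area in square feet):

    # fill a stand to (within 1% of) its target basal area
    fill_stand <- function(target_ba, alpha, beta, tree_density = 1) {
      dbh <- numeric(0)
      stand_ba <- 0
      while (stand_ba < 0.99 * target_ba) {
        d <- rgamma(1, shape = alpha, scale = beta)   # draw a DBH
        dbh <- c(dbh, d)
        stand_ba <- stand_ba + tree_density * 0.005454 * d^2
      }
      dbh
    }

    trees <- fill_stand(target_ba = 90, alpha = 2, beta = 3)
    length(trees); sum(0.005454 * trees^2)            # tree count, achieved BA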

With a number of trees now in our synthetic stand, each with a DBH value, the next step is to assign each tree to a species so that the stand has a representative species composition. For now, the two species we are primarily interested in are Sugar Maple and Ironwood. However, we will also allow trees in our stands to be Red Maple, White Ash, Black Cherry or ‘other’ (these are the next most common species in stands dominated by Sugar Maple and Ironwood). First we estimate the proportion of trees of each species. In stands with Sugar Maple and Ironwood, deer selectively browse Sugar Maple, allowing Ironwood a competitive advantage. Correspondingly, in the empirical data we observe a strong inverse linear relationship between the abundance of Sugar Maple and Ironwood (Fig 3).


Fig 3. Relationship between stand Sugar Maple and Ironwood abundance

To assign species proportions we first estimate the proportion of Sugar Maple from the empirical data. Next, using the strong inverse relationship above, we estimate the corresponding proportion of Ironwood (sampled using a normal distribution with mean and standard deviation from the Bayesian linear regression). The remaining species proportions are assigned according to the frequency of their presence in the empirical data.

Now we use these proportions to assign a species to individual trees. Because physiology varies between species, the probability that a tree is of a given size also varies between species. For example, Ironwood very seldom reach DBH greater than 25 cm and the vast majority (almost 99% in our data) are smaller than 7.6 cm (3 inches) in diameter. Consequently, we first assign the appropriate number of trees to Ironwood according to its empirical size-class distribution, before then assigning all other trees to the remaining species (using a uniform distribution).
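Here’s a simplified sketch of that assignment step (illustrative proportions; and as a shortcut I give Ironwood the smallest diameters outright rather than sampling from its empirical size-class distribution):

    # assign species to a vector of tree DBHs -- illustrative only
    assign_species <- function(dbh, p_sugar_maple = 0.60, p_ironwood = 0.25) {
      n <- length(dbh)
      species <- rep(NA_character_, n)

      # Ironwood first, concentrated in the smallest size classes
      n_iw <- round(p_ironwood * n)
      species[order(dbh)[seq_len(n_iw)]] <- "Ironwood"

      # remaining trees assigned at random in proportion to the other species
      others <- c(SugarMaple = p_sugar_maple, RedMaple = 0.06,
                  WhiteAsh = 0.05, BlackCherry = 0.03, Other = 0.01)
      species[is.na(species)] <- sample(names(others), n - n_iw,
                                        replace = TRUE, prob = others)
      species
    }

    table(assign_species(rgamma(200, shape = 2, scale = 3)))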

The final step in generating our treelist is to assign each tree a height and a canopy ratio. We do this using empirical relationships between diameter and height for each species that are available in the literature (e.g. Pacala et al. 1994). And we’re done!

In the model I’m developing, these stands can be assigned a spatial location either using a pre-existing empirical map or using a synthetic land cover map with known characteristics (generated for example using the modified random clusters method, as the SIMMAP 2.0 software does). In either case we can now run the model multiple times to investigate the dynamics and consequences of different initial conditions. More on that in the future.

It takes all sorts

Neoclassical economics, both its assumptions and its ability to forecast future economic activity, has been taking a bit of a panning recently. Back near the start of this most recent economic downturn, Jean-Philippe Bouchaud argued that neoclassical economists need to develop more pragmatic and realistic representations of what actually happens in ‘wild’ and messy free markets. And at the start of this year I highlighted how Niall Ferguson has stressed the importance of considering history in economic markets and decision-making. In both cases the criticism is that some economists have been blinded by the beauty of their elegant models and have failed to see where their assumptions and idealizations fail to match what’s happening in the real world. Most recently, Paul Krugman argued that ‘flaws-and-frictions economics’ (emphasizing imperfect decision-making and rejecting ideas of a perfectly free ‘friction-less’ market) must become more important. Krugman (‘friend’ of Niall Ferguson) suggests that mainstream economics needs to become more ‘behavioural’, and follow the lead of the behavioural economists who incorporate social, cognitive and emotional factors into their analyses of human decision-making.

The view from the Nature editors on all this is that in the future agent-based modelling will be an important tool to inform economic policy. In many ways agent-based modelling is very well suited to building more ‘behaviour’ into economics. For example, agent-based modelling provides the ability to represent several types of agent, each with their own rules for decision-making, potentially based on their own life-histories and circumstances (in contrast to the single perfectly rational ‘representative agent’ of neoclassical economics). Farmer and Foley, in their opinion piece in the same issue of Nature, are keen:

“Agent-based models potentially present a way to model the financial economy as a complex system, as Keynes attempted to do, while taking human adaptation and learning into account, as Lucas advocated. Such models allow for the creation of a kind of virtual universe, in which many players can act in complex — and realistic — ways. … To make agent-based modelling useful we must proceed systematically, avoiding arbitrary assumptions, carefully grounding and testing each piece of the model against reality and introducing additional complexity only when it is needed. Done right, the agent-based method can provide an unprecedented understanding of the emergent properties of interacting parts in complex circumstances where intuition fails.”

At the very least, our agent-based models need to improve upon the homogenizing assumptions of neoclassical economics. It takes all sorts to make a world — we need to do a better job of accounting for those different sorts in our models of it.