The second symposium I spent time in at US-IALE 2009, other than the CHANS workshop, was the Global Land Project (GLP) Symposium on Agent-Based Modeling of Land Use Effects on Ecosystem Processes and Services. My notes for this symposium aren’t quite as extensive as for the CHANS workshop (and I had to leave the discussion part-way through to give another presentation), but below I outline the main questions and issues raised and addressed by the symposium (drawing largely on Gary Polhill’s summary presentation).
The presentations highlighted the broad applicability of agent-based models (ABMs) across many places, landscapes and cultures, using a diverse range of methodologies and populations. Locations and subjects of projects ranged from potential impacts of land use planning on the carbon balance in Michigan and rangeland management in the Greater Yellowstone Ecosystem, through impacts of land use change on wildfire regimes in Spain and water quality management in Australia, to conflicts between livestock and reforestation efforts in Thailand and the resilience of pastoral communities to drought in Kenya. It was suggested that this diversity is a testament to the flexibility and power of the agent-based modelling approach. Methodologies used and explored by the projects in the symposium included:
- model coupling
- laboratory experiments (with humans and computers)
- approaches to decision-making representation
- scenario analysis
- visualisation of model output and function
- approaches to validation
- companion modelling
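For readers unfamiliar with the approach, the basic mechanics of an ABM — autonomous agents repeatedly applying local decision rules, with system-level patterns emerging from their interactions — can be sketched in a few lines. The toy model below is purely my own illustration (none of the symposium projects are this simple): land managers on a ring each hold a land use, 'farm' or 'forest', imitate unanimous neighbours, and occasionally switch independently.

```python
import random

random.seed(42)  # arbitrary seed so the run is repeatable

N = 50  # number of agents (land managers) arranged on a ring
uses = [random.choice(["farm", "forest"]) for _ in range(N)]

def step(uses, noise=0.02):
    """One model step: each agent looks at its two neighbours."""
    new = []
    for i, u in enumerate(uses):
        left, right = uses[i - 1], uses[(i + 1) % len(uses)]
        if left == right:            # imitate unanimous neighbours
            u = left
        if random.random() < noise:  # occasional independent switch
            u = "farm" if u == "forest" else "forest"
        new.append(u)
    return new

for _ in range(100):
    uses = step(uses)

# After 100 steps the ring has typically coarsened into contiguous
# blocks of like land use, punctuated by noise-driven switches.
print(uses.count("farm"), "farm cells;", uses.count("forest"), "forest cells")
```

Even a rule set this crude exhibits the qualitative behaviour the symposium projects study at far greater fidelity: local decisions aggregating into landscape-scale pattern.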
Applied questions that were raised by these projects included:
- how do we get from interviews to agent behaviours?
- how well do our models work? (and how do we assess that?)
- how sensitive is land use change to planning policies?
- how (why) do we engage with stakeholders?
In the discussion following the presentations it was interesting to have some social scientists join a conversation otherwise dominated by computer scientists and modellers. Most interesting was the viewpoint of one social scientist (a political scientist, I believe), who suggested that one reason social scientists may be skeptical of the power of ABMs is that social science inherently understands that ‘some agents are more important than others’, something not often well reflected (or at least analysed) in recent agent-based modelling.
Possibly the most important question raised in discussion was ‘what are we [as agent-based modellers] taking back to science more generally?’ There were plenty of examples in the projects of issues that have wider scientific applicability: scale issues, the intricacies of (very) large scale simulation with millions of agents, the integration of social and ecological complexity, forest transition theory, edge effects in models, and the presence of provenance (path-dependencies) in model dynamics. Agent-based modellers clearly deal with many interesting problems encountered and investigated in other areas of science, but whether we are doing a good job of communicating our experiences of these issues to the wider scientific community is certainly open to debate (and was debated in the symposium).
A related question, recently raised on the SIMSOC listserv (but not in the GLP symposium), is ‘what are ABMs taking back to policy-making and policy-makers?’ Specifically, Scott Moss asked: ‘Does anyone know of a correct, real-time, [agent] model-based, policy-impact forecast?’ His reasoning behind this question is as follows:
“In relation to policy, it is common for social scientists (including but not exclusively economists) to use some a priori reasoning (frequently driven by a theory) to propose specific policies or to evaluate the benefits of alternative policies. In either case, the presumption must be that the benefits or relative benefits of the specified policies can be forecast. I am not aware of any successful tests of this presumption and none of my colleagues at the meeting of UK agent-based modelling experts could point me to a successful test in the sense of a well documented correct forecast of any policy benefit.
The importance of the question: If there is no history or, more weakly, no systematic history of successful forecasts of policy impacts, then is the standard approach to theory-driven policy advice defensible? If so, on what grounds? If not, then is an alternative approach to policy analysis and an alternative role for policy modelling indicated?”
In response, Alan Penn wrote:

“… the best description I have heard of ‘policy’ in the sense you are using was by Peter Allen, who described it thus: ‘at best policy is a perturbation on the fitness landscape’. Making predictions of the outcome of any policy intervention therefore requires a detailed understanding of the shape of the morphogenetic landscape. Most often a perturbation will just nudge the system up a wall of the valley it is in, only for it to return back into the same valley, and no significant lasting effect will be seen. On occasion a perturbation will nudge the trajectory over a pass into a neighbouring valley and some kind of change will result, but unless you have a proper understanding of the shape of this landscape you won’t necessarily be able to say in advance what the new trajectory will be.
What this way of thinking about things implies is that what we need to understand is the shape of the fitness landscape. With that understanding we would be able to say how much of a nudge is needed (say the size of a tax incentive) to get over a pass. We would also know what direction the neighbouring ‘valleys’ might take the system, and this would allow predictions of the kind you want.”
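This valley-and-pass picture can be sketched numerically. Below is a minimal illustration of my own (not from the symposium or the listserv, and with all functional forms and parameter values chosen purely for illustration): a system relaxing by gradient descent on a double-well ‘landscape’, where a small perturbation rolls back into the same valley while a larger one crosses the pass into the neighbouring one.

```python
def grad_V(x):
    # Gradient of the double-well potential V(x) = (x^2 - 1)^2,
    # which has two "valleys" (stable states) at x = -1 and x = +1,
    # separated by a "pass" at x = 0.
    return 4 * x * (x**2 - 1)

def relax(x, steps=2000, dt=0.01):
    # Let the system settle into the nearest valley (gradient descent).
    for _ in range(steps):
        x -= dt * grad_V(x)
    return x

# The system starts settled in the left-hand valley.
state = relax(-1.0)

# A small policy "nudge" pushes it part-way up the valley wall,
# but it rolls back: no lasting effect.
small = relax(state + 0.5)   # settles back near -1

# A larger nudge carries it over the pass into the other valley:
# a qualitative, lasting change of state.
large = relax(state + 1.5)   # settles near +1

print(round(small, 3), round(large, 3))  # → -1.0 1.0
```

The point of Penn’s argument survives the simplification: without knowing where the valleys and passes lie, you cannot say in advance whether a given nudge produces lasting change or none at all.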
Michael Batty then responded:

“I was at the meeting where Scott raised this issue. Alan Wilson said that his company GMAP was built on developing spatial interaction models for predicting short term shifts in retailing activity which routinely produced predictions that were close to the mark. There are no better examples than the large retail units that routinely – every week – run their models to make predictions in retail markets and reportedly they produce good predictions. These are outfits like Tesco, Asda, M[orrisons] and S[ainsbury’s] and so on. I can’t give you chapter and verse of where these predictions have been verified and documented because I am an academic and don’t have access to this sort of material. The kinds of models that I am referring to are essentially land use transport models which began in the 1960s and are still widely used today. Those people reading this post who aren’t familiar with these models because they are not agent based models can get a quick view by looking at my early book which is downloadable …
I think that the problem with this debate is that it is focussed on academia, and academics don’t traditionally revisit their models to see if longer term predictions work out. In fact, for the reasons Alan [Penn] gives, one would probably not expect them to work out as we can’t know the future. However there is loads of evidence about how well some models such as the ones I have referred to can fit existing data – ie in terms of their calibration. My book and lots of other work with these models shows that they can predict the baseline rather well. In fact too well: the problem has been that although they predict the baseline well, they can often be quite deficient at predicting short term change, and often this arises from their cross-sectional, static nature and a million other problems that have been raised over the last 30 or more years.”
In response to Batty, Moss wrote:
“It is by no means unusual for model-based forecasts to be sufficiently accurate that the error is less than the value of the variable and perhaps much less. What systematically does not happen (and I know of no counterexample at all) is correct forecasting of volatile episodes such as big shifts in market shares in retail sales, macroeconomic recessions or recoveries, the onset of bear or bull phases in financial markets.
Policy initiatives are usually intended to change something from what has gone on before. Democratic governments — executive and legislative branches — typically investigate the reasons for choosing one policy rather than another or, at least, justify a proposed policy before implementation. Sometimes these justifications are based on forecasts of impacts derived from models. Certainly this is happening now in relation to the current recession. So the question is not whether there are ever correct forecasts. Certainly on the minimal criteria I suggested, there are many. The question is strictly about forecasts of policy impacts which, I conjecture, are rather like other major shifts in social trend and stability.
I believe this particular question is important because I don’t understand the point of policy modelling if we cannot usefully inform policy formation. If the usefulness we claim is that we can evaluate policy impacts and, in point of fact, we systematically (or always) produce incorrect forecasts of the direction and/or timing of intended changes, then it seems hard to argue that this is a useful exercise.”
But is focussing on the accuracy of forecasts of the future the only, or indeed best, way of using models to inform policy? In recent times some policy-makers (e.g. Tony Blair and New Labour) have come to see science (and its tools of modelling and prediction) as some kind of ‘policy saviour’, leading to what is known as evidence-based policy-making. In this framework, science sits upstream of policy-making, providing evidence about the real state of the world that then trickles down to steer policy discourse. This may be fine when the science is solving puzzles, but there are many instances (climate change, for instance) where science has not solved the problem but rather has merely demonstrated more clearly our ignorance and uncertainty about the material state of the world.
Thus, when (scientific) models are developed to represent ‘open’ systems, as most real world systems are (e.g. the atmosphere, the global economy), I would argue that model forecasts or predictions are not the best way to inform policy formation. I have discussed such a perspective previously. Models and modelling are useful for understanding the world and making decisions, but they do not provide this utility by making accurate predictions about it. I argue that modelling is useful because it forces us to make explicit our implicitly held ‘mental models’ providing others with the opportunity to scrutinise the logic and coherence of that model and discuss its implications. Modelling helps us to think about potential alternative futures, what factors are likely to be most important in determining future events, how these factors and events are (inter)related, and what the current state of the world implies for the likelihood of different future states.
Science, generally, is about finding out how the material world is. Policy, generally, is about deciding how the world ought to be, and trying to make it so. In many instances science can only provide an incomplete picture of how the world is, and even when it is confident about the material state of the world, there is only so much it can contribute to an argument about how we think the world should be (which is what policy-making is all about). Emphasising the use of scientific models and modelling as a discussant, not a predictor, may be the best way to inform policy formulation.
In a paper submitted with one of my PhD advisors, we discuss this sort of thing with reference to ‘participatory science’. The GLP ABM symposium is planning to publish a special issue of Land Use Science containing papers from the meeting – in the manuscript I will submit, I plan to follow up in more detail on some of these participatory and ‘model as discussant’ issues with reference to my own agent-based modelling.