Conference Deadlines

Those interested in landscape modelling might want to be aware of the deadlines for LANDMOD 2010 and US-IALE 2010.

LANDMOD 2010
LANDMOD 2010 will be held at SupAgro in Montpellier, France, February 3rd to 5th 2010.

The 2010 international conference on integrative landscape modelling will gather leading scientists in each of the main disciplines dealing with ecosystems and landscape simulation and management, complex dynamic modelling and assessment of vulnerability, resilience and adaptation of agro- and eco-systems under human influence.

The main objectives of the conference are:

  • To discuss the objectives, priorities and expectations when modelling the functioning of landscapes;
  • To share experience about landscape modelling and to identify major existing conceptual and technological gaps;
  • To release a ‘state of the art’ about landscape modelling and simulation;
  • To start building an international network on integrative ecosystems and landscape modelling.

Deadlines
October 31st: deadline for submission of extended abstracts
November 30th: notification of acceptance of talks and posters
December 31st: deadline for registration and payment

Website: http://www.umr-lisah.fr/rtra-projects/landmod2010

US-IALE 2010
The 25th annual meeting of US-IALE (US Regional Association, International Association for Landscape Ecology) will be held in Athens, Georgia, April 5–9, 2010. A unique aspect of the 25th annual meeting will be reflecting on progress made over the past 25 years and charting an even more productive course for landscape ecology over the next quarter century. The meeting will include special sessions at which past presidents of US-IALE and other leading landscape ecologists will provide retrospectives on and perspectives for landscape ecology.

Approximately 20 NASA-MSU Awards and 10 CHANS Fellowships will be available to support students, postdoctoral associates, junior faculty and other junior researchers to attend the meeting.

Deadlines
October 15, 2009: Proposals for symposia and workshops
December 15, 2009: Abstracts for oral and poster presentations
December 15, 2009: NASA-MSU Awards Applications
December 15, 2009: CHANS Fellowship Applications

Website: http://www.usiale.org/athens2010/

Interdisciplinarity, Sustainability and Critical Realism

I have a new paper to add to my collection of favourites. Hidden in the somewhat obscure Journal of Critical Realism, it touches on several issues that I often find myself thinking about and studying: Interdisciplinarity, Ecology and Scientific Theory.

Karl Høyer and Petter Naess also have plenty to say about sustainability, planning and decision-making and, although they use the case of sustainable urban development, much of what they discuss is relevant to broader issues in the study of coupled human and natural systems. Their perspective resonates with my own.

For example, they outline some of the differences between studying open and closed systems (interestingly, with reference to some Nordic writers I have not previously encountered):

… The principle of repetitiveness is crucial in these kinds of [reductionist] science [e.g. atomic physics, chemistry] and their related technologies. But such repetitiveness only takes place in closed systems manipulated by humans, as in laboratories. We will never find it in nature, as strongly emphasised by both Kvaløy and Hägerstrand within the Nordic school. In nature there are always open, complex systems, continuously changing with time. This understanding is in line with key tenets of critical realism. Many of our most serious ecological problems can be explained this way: technologies, their products and substances, developed and tested in closed systems under artificial conditions that generate the illusion of generalised repetitiveness, are released in the real nature of open systems and non-existing repetitiveness. We are always taken by surprise when we experience new, unexpected ecological effects. But this ought not to be surprising at all; under these conditions such effects will necessarily turn up all the time.

At the same time, developing strategies for a sustainable future relies heavily on the possibility of predicting the consequences of alternative solutions with at least some degree of precision. Arguably, a number of socio-technical systems, such as the spatial structures of cities and their relationships with social life and human activities, make up ‘pseudo-closed’ systems where the scope for prediction of outcomes of a proposed intervention is clearly lower than in the closed systems of the experiments of the natural sciences, but nevertheless higher than in entirely open systems. Anticipation of consequences, which is indispensable in planning, is therefore possible and recommendable, although fallible.

The main point of their paper, however, is the important role critical realism [see also] might play as a platform for interdisciplinary research. Although Høyer and Naess do highlight some of the more political reasons for scientific and academic disciplinarity, their main points are philosophical:

…the barriers to interdisciplinary integration may also result from metatheoretical positions explicitly excluding certain types of knowledge and methods necessary for a multidimensional analysis of sustainability policies, or even rejecting the existence of some types of impacts and/or the entities causing these impacts.

These philosophical (metatheoretical) barriers include staunchly positivist and strong social constructionist perspectives:

According to a positivist view, social science research should emulate research within the natural sciences as much as possible. Knowledge based on research where the observations do not lend themselves to mathematical measurement and analysis will then typically be considered less valid and perhaps be dismissed as merely subjective opinions. Needless to say, such a view hardly encourages natural scientists to integrate knowledge based on qualitative social research or from the humanities. Researchers adhering to an empiricist/naive realist metatheory will also tend to dismiss claims of causality in cases where the causal powers do not manifest themselves in strong and regular patterns of events – although such strong regularities are rare in social life.

On the other hand, a strong social constructionist position implies a collapsing of the existence of social objects to the participating agents’ conception or understanding of these objects. …strong social constructionism would typically limit the scope to the cultural processes through which certain phenomena come to be perceived as environmental problems, and neglecting the underlying structural mechanisms creating these phenomena as well as their impacts on the physical environment. At best, strong social constructionism is ambivalent as to whether we can know anything at all about reality beyond the discourses. Such ‘empty realism’, typical of dominant strands of postmodern thought, implies that truth is being completely relativised to discourses on the surface of reality, with the result that one must a priori give up saying anything about what exists outside these discourses. At worst, strong social constructionism may pave the way for the purely idealist view that there is no such reality.

At opposite ends of the positivist–relativist spectrum, neither of these perspectives seems the most useful for interdisciplinary research. Something that sits between the two extremes – critical realism – might be more useful [I can’t do this next section justice in an abridged version – and this is the main point of the article – so here it is in its entirety]:

The above-mentioned examples of shortcomings of reductionist metatheories do not imply that research based on these paradigms is necessarily without value. However, reductionist paradigms tend to function as straitjackets preventing researchers from taking into consideration phenomena and factors of influence not compatible with or ignored in their metatheory. In practice, researchers have often deviated from the limitations prescribed by their espoused metatheoretical positions. Usually, such deviations have tended to improve research rather than the opposite.

However, for interdisciplinary research, there is an obvious need for a more inclusive metatheoretical platform. According to Bhaskar and Danermark, critical realism provides such a platform, as it is ontologically characterised doubly by inclusiveness greater than competing metatheories: it is maximally inclusive in terms of allowing causal powers at different levels of reality to be empirically investigated; and it is maximally inclusive in terms of accommodating insights of other meta-theoretical positions while avoiding their drawbacks.

Arguably, many of the ecologists and ecophilosophers referred to earlier in this paper have implicitly based their work on the same basic assumptions as critical realism. Some critical realist thinkers have also addressed ecological and environmental problems explicitly. Notably, Ted Benton and Peter Dickens have demonstrated the need for an epistemology that recognises social mediation of knowledge but also the social and material dimensions of environmental problems, and how the absence of an interdisciplinary perspective hinders essential understanding of nature/society relationships.

According to critical realism, concrete things or events in open systems must normally be explained ‘in terms of a multiplicity of mechanisms, potentially of radically different kinds (and potentially demarcating the site of distinct disciplines) corresponding to different levels or aspects of reality’. As can be seen from the above, the objects involved in explanations of the (un)sustainability of urban development belong partially to the natural sciences, partially to the social sciences, and are partially of a normative or ethical character. They also belong to different geographical or organisational scales. Thus, similar to (and arguably to an even higher extent than) what Bhaskar and Danermark state about disability research, events and processes influencing the sustainability of urban development must be understood in terms of physical, biological, socioeconomic, cultural and normative kinds of mechanisms, types of contexts and characteristic effects.

According to Bhaskar, social life must be seen in the depiction of human nature as ‘four-planar social being’, which implies that every social event must be understood in terms of four dialectically interdependent planes: (a) material transactions with nature, (b) social interaction between agents, (c) social structure proper, and (d) the stratification of embodied personalities of agents. All these categories of impacts should be addressed in research on sustainable urban development. Impacts along the first dimension, category (a), typically include consequences of urban development for the physical environment. Consequences in terms of changing location of activities and changing travelling patterns are examples of impacts within category (b). But this category also includes the social interaction between agents leading to changes in, among others, the spatial and social structures of cities. Relevant mechanisms at the level of social structure proper (category [c]) might include, for example, impacts of housing market conditions on residential development projects and consequences of residential development projects for the overall urban structure. The stratified personalities of agents (category [d]) include both influences of agents on society and the physical environment and influences of society and the physical environment on the agents. The latter sub-category includes physical impacts of urban development, such as unwholesome noise and air pollution, but also impacts of the way urban planning and decision-making processes are organised, for example, in terms of effects on people’s self esteem, values, opportunities for personal growth and their motivation for participating in democratic processes. The influence of discourses on the population’s beliefs about the changes necessary to bring about sustainable development and the conditions for implementing such changes also belongs to this sub-category. The sub-category of influences of agents on society and the physical environment includes the exercise of power by individual and corporate agents, their participation in political debates, their contribution to knowledge, and their practices in terms of, for example, type and location of residence, mobility, lifestyles more generally, and so on.

Regarding issues of urban sustainability, the categories (a)–(d) are highly interrelated. If this is the case, we are facing what Bhaskar and Danermark characterise as a ‘laminated’ system, in which case explanations involving mechanisms at several or all of these levels could be termed ‘laminated explanations’. In such situations, monodisciplinary empirical studies taking into consideration only those factors of influence ‘belonging’ to the researcher’s own discipline run a serious risk of misinterpreting these influences. Examples of such misinterpretations are analyses where increasing car travel in cities is explained purely in terms of prevailing attitudes and lifestyles, addressing neither political-economic structures contributing to consumerism and car-oriented attitudes, nor spatial-structural patterns creating increased needs for individual motorised travel.

Moreover, the different strata of reality and their related mechanisms (that is, physical, biological, socio-economic, cultural and normative kinds of mechanisms) involved in urban development cannot be understood only in terms of categories (a)–(d) above. They are also situated in macroscopic (or overlying) and less macroscopic (or underlying) kinds of structures or mechanisms. For research into sustainable urban development issues, such scale-awareness is crucial. Much of the disagreement between proponents of the ‘green’ and the ‘compact’ models of environmentally sustainable urban development can probably be attributed to their focus on problems and challenges at different geographical scales: whereas the ‘compact city’ model has focused in particular on the impacts of urban development on the surrounding environment (ranging from the nearest countryside to the global level), proponents of the ‘green city’ model have mainly been concerned about the environment within the city itself. A truly environmentally sustainable urban development would require an integration of elements both from the former ‘city within the ecology’ and the latter ‘ecology within the city’ approaches. Similarly, analyses of social aspects of sustainable development need to include both local and global effects, and combine an understanding of practices within particular groups with an analysis of how different measures and traits of development affect the distribution of benefits and burdens across groups.

Acknowledging that reality consists of different strata, that multiple causes are usually influencing events and situations in open systems, and that a pluralism of research methods is recommended as long as they take the ontological status of the research object into due consideration, critical realism appears to be particularly well suited as a metatheoretical platform for interdisciplinary research. This applies not least to research into urban sustainability issues where, as has been illustrated above, other metatheoretical positions tend to limit the scope of analysis in such a way that sub-optimal policies within a particular aspect of sustainability are encouraged at the cost of policies addressing the challenges of sustainable urban development in a comprehensive way.

In conclusion, critical realism can play a very important role as an underlabourer of interdisciplinarity, with its maximal inclusiveness both in terms of allowing causal powers at different levels of reality to be empirically investigated and in terms of accommodating insights of other meta-theoretical positions while avoiding their drawbacks.

I’m going to have to spend some time thinking about this, but there seems to be plenty to get one’s teeth into here with regard to the study of coupled human and natural systems and the use of agent-based modelling approaches. For example, agent-based modelling seems to offer a means to represent Bhaskar’s four planes, but there are plenty of questions about how to do this appropriately (a speculative sketch is below). I also need to think more carefully about how these four planes are manifested in the systems I study. Generally, however, it seems that critical realism offers a useful foundation from which to build interdisciplinary studies of the interaction of humans and their environment, and for the exploration of potential pathways towards sustainable landscapes.
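
To make that thought slightly more concrete, here is a purely speculative sketch (in Python, with placeholder names of my own invention) of where each of Bhaskar’s four planes might live in the structure of an agent-based model. It is not a working model of any system I study, just one way of organising the question.

    # A purely speculative sketch (not a working model): one possible way the
    # four planes could map onto the building blocks of an agent-based model.
    # All class and method names are placeholders of my own invention.

    class Landscape:
        """Plane (a): the material environment agents transact with."""
        def __init__(self):
            self.land_cover = {}        # e.g. resource or land-cover state per cell

    class Institution:
        """Plane (c): social structure proper (markets, rules, policies)."""
        def __init__(self, rules=None):
            self.rules = rules or {}

    class Agent:
        def __init__(self, name, values=None):
            self.name = name
            self.values = values or {}  # plane (d): embodied personality, values, beliefs
            self.neighbours = []        # plane (b): the social network for interaction

        def transact_with_nature(self, landscape):
            """Plane (a): harvest, cultivate or otherwise use the land."""
            pass

        def interact_with_others(self):
            """Plane (b): exchange information or resources with neighbours."""
            pass

        def respond_to_structure(self, institution):
            """Plane (c): adjust behaviour to rules, prices and policies."""
            pass

        def update_self(self):
            """Plane (d): update values, beliefs and motivations over time."""
            pass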

Reference
Høyer, K.G. and Naess, P. (2008) Interdisciplinarity, ecology and scientific theory: the case of sustainable urban development. Journal of Critical Realism 7(2): 179–207. doi: 10.1558/jocr.v7i2.179

CHANS-Net at AAG 2010

Details of plans for CHANS-Net activities at the 2010 annual meeting of the Association of American Geographers (AAG) in Washington, D.C. have been posted.

Presentations and a workshop are expected to synthesise across CHANS research projects, potentially leading to publication. The CHANS-Net website also indicates there are opportunities for junior scholars to receive financial assistance.

The deadline for abstract submission to the AAG meeting is 28th October 2009 (submissions to the CHANS events are due by 20th October).

Disturbance and Landscape Dynamics in a Changing World

Experimentation can be tricky for landscape ecologists, especially if we’re considering landscapes at the human scale (it’s a bit easier at the beetle scale [pdf]). The logistical constraints of studies at large spatial and temporal scales mean we frequently rely on models and modelling. However, every now and then certain events afford us the opportunity for a ‘natural experiment’ – a situation that is not controlled by an experimenter but approximates controlled experimental conditions. In her opening plenary at ESA 2009, Prof. Monica Turner used one such natural experiment – the Yellowstone fires of 1988 – as an example to discuss how disturbance affects landscape dynamics and ecosystem processes. Although this is a great example for landscapes with limited human activity, it is not such a useful tool for considering human-dominated landscapes.


Landsat satellite image of the Yellowstone fires on 23rd August 1988. The image is approximately 50 miles (80 km) across and shows light from the green, short-wave infrared, and near infrared bands of the spectrum. The fires glow bright pink, recently burned land is dark red, and smoke is light blue.

Before getting into the details, one of the first things Turner did was to define disturbance (drawing largely on Pickett and White) and an idea that she views as critical to landscape dynamics – the shifting mosaic steady state. The shifting mosaic steady state, as described by Bormann and Likens, is a product of the processes of vegetation disturbance and succession. Although these processes mean that vegetation will change through time at individual points, when measured over a larger area the proportion of the landscape in each seral stage (of succession) remains relatively constant. Consequently, over large areas and long time intervals the landscape can be considered to be in equilibrium (but this isn’t necessarily always the case).
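
As an aside, the idea can be illustrated with a toy simulation: individual cells are repeatedly disturbed and reset while all others age through succession, yet over a large landscape the proportion of cells in each seral stage settles down. The sketch below is only an illustration; the parameter values are invented and not drawn from Bormann and Likens or any real landscape.

    # A toy illustration of the shifting mosaic steady state, not a model of any
    # real landscape: every parameter value below is invented for illustration.
    import random

    N_CELLS = 10_000          # number of landscape cells
    P_DISTURB = 0.01          # annual probability a cell is disturbed (reset to age 0)
    STAGE_BREAKS = (20, 60)   # ages (years) separating early / mid / late seral stages
    YEARS = 500

    ages = [random.randint(0, 200) for _ in range(N_CELLS)]

    def stage_proportions(ages):
        """Fraction of cells currently in the early, mid and late seral stages."""
        early = sum(a < STAGE_BREAKS[0] for a in ages)
        mid = sum(STAGE_BREAKS[0] <= a < STAGE_BREAKS[1] for a in ages)
        late = len(ages) - early - mid
        return early / len(ages), mid / len(ages), late / len(ages)

    for year in range(YEARS):
        # each cell is either disturbed (age reset) or ages by one year (succession)
        ages = [0 if random.random() < P_DISTURB else a + 1 for a in ages]
        if year % 100 == 0:
            # individual cells keep changing, but these proportions settle down
            print(year, [round(p, 2) for p in stage_proportions(ages)])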

Other key ideas Turner emphasised were:

  • disturbance is a key component in ecosystems across many scales,
  • disturbance regimes are changing rapidly but the effects are difficult to predict,
  • disturbance and heterogeneity have reciprocal effects.

Landscape Dynamics
In contrast to what you might expect, very large disturbances generally increase landscape heterogeneity. For example, the 1988 Yellowstone fires burned about a third of the park, across all forest types and ages, but burn severity varied spatially. Turner highlighted that environmental thresholds may determine whether landscape pattern constrains fire spread. For instance, in very dry years spatial pattern will likely have less effect than in years when rainfall has produced greater spatial variation in fuel conditions.

Turner and her colleagues have also found that burn severity, patch size and geographic location affected early succession in the years following the Yellowstone fires. Lodgepole pine regeneration varied enormously across the burned landscape because of the spatial variation in serotiny and burn severity. Subsequently, the size, shape and configuration of disturbed patches influenced succession trajectories. Turner also highlighted that succession is generally more predictable in small patches, when disturbances are infrequent, and when disturbance severity/intensity is low (and vice versa).

Ecosystem Processes
One of the questions landscape ecologists have been using the Yellowstone fires to examine is: do post-disturbance patterns affect ecosystem processes? Net Primary Production varies a lot with tree density (e.g., density of lodgepole pine following fire) and the post-fire patterns of tree density have produced a landscape mosaic of ecosystem process rates. For example, Kashian and colleagues found that spatial legacy effects of the post-fire mosaic can last for centuries. Furthermore, this spatial variation in ecosystem process rates is greater than temporal variation and the fires produced a mosaic of different functional trajectories (a ‘functional mosaic’).

Another point Turner was keen to make was that the Yellowstone fires were not, as is commonly claimed, the result of fire suppression; instead they were driven by climate (particularly hot and dry conditions). Later in the presentation she used the ecosystem process examples above to argue that the Yellowstone fires were not an ecological disaster and that the ecosystem has proven resilient. However, she stressed that fire will continue to be an important disturbance and that the fire regime is likely to change rapidly if the climate does. For example, Turner highlighted the study by Westerling and colleagues showing that increased fire activity in the western US in recent decades is a result of increasing temperatures, earlier spring snowmelt and subsequent increases in vegetation moisture deficit. If climate change projections of warming are realised, by 2100 the climate of 1988 (which was extreme) could become the norm and events like the Yellowstone fires will be much more frequent. For example, using a spatio-temporal state-space diagram (see below), Turner and colleagues [pdf] found that fires in Yellowstone in the 15 years prior to 1988 had relatively little impact on landscape dynamics (shown in green in the lower left of the diagram). However, the extent of the 1988 fires pushed the disturbance regime up into an area of the state-space not characteristic of a shifting-mosaic steady state (shown in red).


The spatio-temporal state-space diagram used by Turner and colleagues [pdf] to describe potential landscape disturbance dynamics. On the horizontal x-axis is the ratio of disturbance extent (area) to the landscape area and on the vertical y-axis is the ratio of disturbance interval (time) to recovery interval. Landscapes in the upper left of the diagram will appear to an observer as relatively constant in time with little disturbance impact; those in the lower right are dominated by disturbance.
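
For concreteness, the two ratios described in the caption are simple to compute; the sketch below does so for a hypothetical landscape. The numeric thresholds used to label regions of the state-space are my own illustrative placeholders, not boundaries taken from Turner and colleagues.

    # The two ratios from the caption above, computed for a made-up landscape.
    # The thresholds used to label regions are illustrative placeholders only,
    # not the boundaries drawn by Turner and colleagues.

    def disturbance_state_space(disturbed_area, landscape_area,
                                disturbance_interval, recovery_interval):
        """Locate a landscape in the disturbance state-space and give a rough label."""
        x = disturbed_area / landscape_area            # spatial ratio (disturbance extent)
        y = disturbance_interval / recovery_interval   # temporal ratio

        if x < 0.1 and y > 1.0:
            label = "upper left: relatively constant in time, little disturbance impact"
        elif x > 0.5 and y < 1.0:
            label = "lower right: dominated by disturbance"
        else:
            label = "intermediate: shifting-mosaic dynamics plausible"
        return x, y, label

    # Illustrative (made-up) numbers only:
    print(disturbance_state_space(disturbed_area=3000, landscape_area=9000,
                                  disturbance_interval=150, recovery_interval=120))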

Remaining Questions
Turner finished her presentation by highlighting what she sees as key questions for studying disturbance and landscape dynamics in a changing world:

  • How will disturbances interact with one another?
  • How will disturbances interact with other drivers?
  • What conditions will cause qualitative shifts in disturbance regimes (like that shown in the diagram above)?

It was comforting to hear a leader in the field identify these points as important, as many of them relate closely to what I’ve been working on and thinking about. For example, the integrated ecological-economic forest modelling project I’m working on here in Michigan explicitly considers the interaction of two disturbances – human timber harvest and deer herbivory. The work I initiated during my PhD relates to the second question – how does human land use/cover change interact with, and drive changes in, the wildfire regime of a landscape in central Spain? And recently, I reviewed a new book on threshold modelling in ecological restoration for Landscape Ecology.

Much of Turner’s presentation and discussion applied to American landscapes with limited human activity. This is not surprising, of course, given the context of the presentation (at the Ecological Society of America) and the location of her study areas (all in the USA). But although natural experiments like the 1988 Yellowstone fires may be useful as an analogue for understanding processes and dynamics in similar systems, it is also interesting (and important) to think about how other systems potentially differ from this exemplar. For example, the Yellowstone fires natural experiment has little to say about disturbance in the human-dominated landscapes that are prevalent in many areas of the world (such as the Mediterranean Basin). In the future, research and models of landscape succession-disturbance dynamics will need to focus as much attention on human drivers of change as on environmental drivers.

Turner concluded her plenary by emphasising that ecologists must increase their efforts to understand and anticipate the effects of changing disturbance regimes. This is important not only in the context of climate as driver of change, but also because of the influence of a growing human population.

Challenges and Opportunities in CHANS Research

The discussion forum is now up and running at CHANS-Net. I have just posted some questions regarding challenges and opportunities in CHANS research that arose from the CHANS workshop at US-IALE 2009. The topics include:

  • Abstract vs. Applied Research
  • Communication in CHANS Research
  • Conceptualizing Human-Environment Relationships
  • Pattern and Process in CHANS Research
  • Spider Diagrams
  • Future Directions for CHANS Research

Register in the CHANS-Net Forum, read the questions and post your replies there.

Accuracy 2010


I’ve mentioned uncertainty several times on this blog in the past (examples one and two), so it seems appropriate to highlight the next International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences. The ninth occurrence of this biennial meeting will be hosted by the University of Leicester, UK, 20th – 23rd July 2010. Oral presentations, posters and discussion will address topics including:

  • Semantic uncertainty and vagueness
  • Modelling uncertainty using geostatistics
  • Propagation of uncertainty in GIS
  • Visualizing spatial uncertainty
  • Uncertainty in Remote Sensing
  • Spatiotemporal uncertainty
  • Accuracy and uncertainty of DEMs
  • Modelling scale in environmental systems
  • Positional uncertainty

The deadline for abstract submission is 28th September 2009.

ESA 2009 Agenda

I’ve just arrived in Albuquerque, New Mexico, for the Ecological Society of America meeting. Before heading out to explore town I’ve been putting the final touches to my presentation (Monday, 4.40pm, Sendero Ballroom III) and working out what I’m going to do this week. Here’s what I think I’ll be doing:

i) Importantly, on Monday at 2.30 I’ll be going to support Megan Matonis as she talks about the work she’s been doing on our UP project: ‘Gap-, stand-, and landscape-scale factors affecting tree regeneration in harvest gaps’.

ii) Monday morning I think I’ll attend the special session ‘What is Sustainability Science and Can It Make Us Sustainable?’ [“What is sustainability science and can it make us sustainable? If sustainability science requires interdisciplinarity, how do these diverse disciplines integrate the insights that each brings? How do we reconcile differing basic assumptions to solve an urgent and global problem? How do we ensure that research outputs of ecology and other disciplines lead toward sustainability?”]

iii) Tuesday, amongst other things, I’ll check out the symposium entitled ‘Global Sustainability in the Face of Uncertainty: How to More Effectively Translate Ecological Knowledge to Policy Makers, Managers, and the Public’. [“The basic nature of science, as well as life, is that there will always be uncertainty. We define uncertainty as a situation in which a decision-maker (scientist, manager, or policy maker) has neither certainty nor reasonable probability estimates available to make a decision. In ecological science we have the added burden of dealing with the inherent complexity of ecological systems. In addition, ecological systems are greatly affected by chance events, further muddying our ability to make predictions based on empirical data. Therefore, one of the most difficult aspects of translating ecological and environmental science into policy is the uncertainty that bounds the interpretation of scientific results.”]

iv) Wednesday I plan on attending the symposium ‘What Should Ecology Education Look Like in the Year 2020?’ [“How should ecology education be structured to meet the needs of the next generation, and to ensure that Americans prioritize sustainability and sound ecological stewardship in their actions? What balance between virtual and hands-on ecology should be taught in a cutting-edge ecological curriculum? How can we tackle the creation versus evolution controversy that is gaining momentum?”]

v) Being a geographer (amongst other things), on Thursday I’d like to participate in the discussion regarding place: ‘The Ecology of Place: Charting a Course for Understanding the Planet’ [“The diversity, complexity, and contingency of ecological systems both bless and challenge ecologists. They bless us with beauty and endless fascination; our subject is never boring. But they also challenge us with a difficult task: to develop general and useful understanding even though the outcomes of our studies typically depend on a host of factors unique to the focal system as well as the particular location and time of the study. Ecologists address this central methodological dilemma in various ways. … Given the pressing environmental challenges facing the planet, it is critical that ecologists develop an arsenal of effective strategies for generating knowledge useful for solving real-world problems. This symposium inaugurates discussion of one such strategy – The Ecology of Place.”]

vi) Also on Thursday I think I’ll see what’s going on in the session ‘Transcending Tradition to Understand and Model Complex Interactions in Ecology’. [“Ecology intersects with the study of complex systems, and our toolboxes must grow to meet interdisciplinary needs.”]

vii) Not sure about Friday yet…

What is the point… of social simulation modelling?

Previously, I mentioned a thread on SIMSOC initiated by Scott Moss. He asked, ‘Does anyone know of a correct, real-time, [agent] model-based, policy-impact forecast?’. Following on from the responses to that question, earlier this week he started a new thread entitled ‘What’s the Point?’:

“We already know that economic recessions and recoveries have probably never been forecast correctly — at least no counter-examples have been offered. Similarly, no financial market crashes or recoveries or significant shifts in market shares have ever, as far as we know, been forecast correctly in real time.

I believe that social simulation modelling is useful for reasons I have been exploring in publications for a number of years. But I also recognise that my beliefs are not widely held.

So I would be interested to know why other modellers think that modelling is useful or, if not useful, why they do it.”

After reading others’ responses I decided to reply with my own view:

“For me prediction of the future is only one facet of modelling (whether agent-based or any other kind) and not necessarily the primary use, especially with regard to policy modelling. This view stems partly from the philosophical difficulties outlined by Oreskes et al. (1994), amongst others. I agree with Mike that the field is still in the early stages of development, but I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world’. As Pablo suggested, if we are to predict the future the inherent uncertainties will be best highlighted and accounted for by ensuring predictions are tied to a probability.”

I also highlighted the reasons offered by Epstein and outlined a couple of other reasons I think ABMs are useful.

There was a brief response to mine, and then another, more assertive, response that (I think) highlights a common confusion between the different uses of prediction in modelling:

“If models of economic policy are fundamentally unable to at some point predict the effects of policy — that is, to in some measure predict the future — then, to be blunt, what good are they? If they are unable to be predictive then they have no empirical, practical, or theoretical value. What’s left? I ask that in all seriousness.

Referring to Epstein’s article, if a model is not sufficiently grounded to show predictive power (a necessary condition of scientific results), then how can it be said to have any explanatory power? Without prediction as a stringent filter, any amount of explanation from a model becomes equivalent to a “just so” story, at worst giving old suppositions the unearned weight of observation, and at best hitting unknowably close to the mark by accident. To put that differently, if I have a model that provides a neat and tidy explanation of some social phenomena, and yet that model does not successfully replicate (and thus predict) real-world results to any degree, then we have no way of knowing if it is more accurate as an explanation than “the stars made it happen” or any other pseudo-scientific explanation. Explanations abound; we have never been short of them. Those that can be cross-checked in a predictive fashion against hard reality are those that have enduring value.

But the difficulty of creating even probabalistically predictive models, and the relative infancy of our knowledge of models and how they correspond to real-world phenomena, should not lead us into denying the need for prediction, nor into self-justification in the face of these difficulties. Rather than a scholarly “the dog ate my homework,” let’s acknowledge where we are, and maintain our standards of what modeling needs to do to be effective and valuable in any practical or theoretical way. Lowering the bar (we can “train practitioners” and “discipline policy dialogue” even if we have no way of showing that any one model is better than another) does not help the cause of agent-based modeling in the long run.”

I felt this required a response – it seemed to me that the difference between logical prediction and temporal prediction was being missed:

“In my earlier post I wrote: “I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world'”. I was careful about how I worded this [more careful than ensuring correct formatting of the post it seems – my original post is below in a more human-readable format] and maybe some clarification in the light of Mike’s comments would be useful. Here goes…

Precisely predicting the future state of an ‘open’ system at a particular instant in time does not imply we have explained or understand it (due to the philosophical issues of affirming the consequent, equifinality, underdetermination, etc.). To be really useful for explanation and to have enduring value, model predictions of any system need to be cross-checked against hard reality *many times*, and in the case of societies probably also in many places (and should ideally be produced by models that are consistent with other theories). Producing multiple accurate predictions will be particularly tricky for things like the global economy, for which we only have one example (but of course it will be easier where experimental replication is more logistically feasible).

My point is two-fold:
1) a single, precise prediction of a future does not really mean much with regard to our understanding of an open system,
2) multiple precise predictions are more useful but will be more difficult to come by.

This doesn’t necessarily mean that we will never be able to consistently predict the future of open systems (in Scott’s sense of correctly forecasting the timing and direction of change of specified indicators). I just think it’s a ways off yet, that there will always be uncertainty, and that we need to deal with this uncertainty explicitly via probabilistic output from model ensembles and other methods. Rather than lowering standards, a heuristic use of models demands we think more closely about *how* we model and what information we provide to policy makers (isn’t that the point of modelling policy outcomes in the end?).

Let’s be clear: the heuristic use of models does not allow us to ignore the real world – it still requires us to compare our model output with empirical data. And as Mike rightly pointed out, many of Epstein’s reasons to model – other than to predict – require such comparisons. However, the scientific modelling process of iteratively comparing model output with empirical data and then updating our models is a heuristic one – it does not require that precise prediction at a specific point in the future is the goal before all others.

Lowering any level of standards will not help modelling – but I would argue that understanding and acknowledging the limits of using modelling in different situations in the short term will actually help to improve standards in the long run. To develop this understanding we need to push models and modelling to their limits to find out what works, what we can do and what we can’t – that includes iteratively testing the temporal predictions of models. Iteratively testing models, understanding the philosophical issues of attempting to model social systems, exploring the use of models and modelling qualitatively (as a discussant, a communication tool, etc.) should help modellers improve the information, the recommendations, and the working relationships they have with policy-makers.

In the long run I’d argue that both modellers and policy-makers will benefit from a pragmatic and pluralistic approach to modelling – one that acknowledges there are multiple approaches and uses of models and modelling to address societal (and environmental) questions and problems, and that [possibly self evidently] in different situations different approaches will be warranted. Predicting the future should not be the only goal of modelling social (or environmental) systems and hopefully this thread will continue to throw up alternative ideas for how we can use models and the process of modelling.”
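
As an aside on the ‘probabilistic output from model ensembles’ point in that post, the following minimal sketch shows the kind of thing I mean: a deliberately trivial stochastic model is run many times and the result is reported as a probability of exceeding a threshold rather than as a single point prediction. Everything in it (the model, the parameters, the threshold) is hypothetical.

    # A deliberately trivial stochastic 'model' run as an ensemble; the output is
    # a probability of exceeding a threshold rather than a single point forecast.
    # Everything here (model, parameters, threshold) is hypothetical.
    import random

    def toy_model(policy_strength):
        """Stand-in for one stochastic simulation run; returns an indicator value."""
        return policy_strength * random.gauss(1.0, 0.4)

    def ensemble_probability(policy_strength, threshold, n_runs=1000):
        """Fraction of ensemble runs in which the indicator exceeds the threshold."""
        hits = sum(toy_model(policy_strength) > threshold for _ in range(n_runs))
        return hits / n_runs

    # Report 'P(indicator > threshold)' rather than 'the indicator will be X':
    print(ensemble_probability(policy_strength=1.0, threshold=1.2))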

Note that I didn’t explicitly point out the difference between the two different uses of prediction (that Oreskes and others have previously highlighted). It took Dan Olner, a couple of posts later, to explicitly describe the difference:

“We need some better words to describe model purpose. I would distinguish two –

a. Forecasting (not prediction) – As Mike Sellers notes, future prediction is usually “inherently probabalistic” – we need to know whether our models can do any better than chance, and how that success tails off as time passes. Often when we talk about “prediction” this is what we mean – prediction of a more-or-less uncertain future. I can’t think of a better word than forecasting.

b. Ontological prediction (OK, that’s two words!) – a term from Gregor Betz, Prediction Or Prophecy (2006). He gives the example of the prediction of Neptune’s existence from Newton’s laws – Uranus’ orbit implied that another body must exist. Betz’s point is that an ontological prediction is “timeless” – the phenomenon was always there. Einstein’s predictions about light bending near the sun is another: something that always happened, we just didn’t think to look for it. (And doubtless Eddington wouldn’t have considered *how* to look, without the theory.)

In this sense forecasting (my temporal prediction) is distinctly temporal (or spatial) and demands some statement about when (or where) an event or phenomenon will occur. In contrast, ontological prediction (my logical prediction) is independent of time and/or space and is often used in closed-system experiments searching for ‘universal’ laws. I wrote more about this a while back in a series of blog posts on the validation of models of open systems.

This discussion is ongoing on SIMSOC, and Scott Moss has recently posted again, suggesting a summary of the early responses:

“I think a perhaps extreme summary of the common element in the responses to my initial question (what is the point?, 9/6/09) is this:

**The point of modelling is to achieve precision as distinct from accuracy.**

That is, a model is a more or less complicated formal function relating a set of inputs clearly to a set of outputs. The formal inputs and outputs should relate unambiguously to the semantics of policy discussions or descriptions of observed social states and/or processes.

This precision has a number of virtues including the reasons for modelling listed by Josh Epstein. The reasons offered by Epstein and expressed separately by Lynne Hamill in her response to my question include the bounding and informing of policy discussions.

I find it interesting that most of my respondents do not consider accuracy to be an issue (though several believe that some empirically justified frequency or even probability distributions can be produced by models). And Epstein explicitly avoids using the term validation in the sense of confirmation that a model in some sense accurately describes its target phenomena.

So the upshot of all this is that models provide a kind of socially relevant precision. I think it is implicit in all of the responses (and the Epstein note) that, because of the precision, other people should care about the implications of our respective models. This leads to my follow-on questions:

Is precision a good enough reason for anyone to take seriously anyone else’s model? If it is not a good enough reason, then what is?”

And so arises the debate about the importance of accuracy over precision (but the original ‘What is the point?’ thread also continues). In hindsight, I think it may have been more appropriate for me to use the word ‘accurate’ rather than ‘precise’ in my postings. All this debate may seem like just semantics and navel-gazing to many people, but as I argued in my second post, understanding the underlying philosophical basis of modelling and representing reality (however we might measure or perceive it) gives us a better chance of improving models and modelling in the long run…

US-IALE 2009: GLP Agent-Based Modelling Symposium

The second symposium I spent time in at US-IALE 2009, other than the CHANS workshop, was the Global Land Project Symposium on Agent-Based Modeling of Land Use Effects on Ecosystem Processes and Services. My notes for this symposium aren’t quite as extensive as for the CHANS workshop (and I had to leave the discussion part-way through to give another presentation), but below I outline the main questions and issues raised and addressed by the symposium (drawing largely on Gary Polhill’s summary presentation).

The presentations highlighted the broad applicability of agent-based models (ABMs) across many places, landscapes and cultures, using a diverse range of methodologies and populations. Locations and subjects of projects ranged from potential impacts of land use planning on the carbon balance in Michigan and rangeland management in the Greater Yellowstone Ecosystem, through impacts of land use change on wildfire regimes in Spain and water quality management in Australia, to conflicts between livestock and reforestation efforts in Thailand and the resilience of pastoral communities to drought in Kenya. It was suggested that this diversity is a testament to the flexibility and power of the agent-based modelling approach. Methodologies used and explored by the projects in the symposium included:

  • model coupling
  • laboratory experiments (with humans and computers)
  • approaches to decision-making representation
  • scenario analysis
  • visualisation of model output and function
  • approaches to validation
  • companion modelling

Applied questions that were raised by these projects included:

  • how do we get from interviews to agent-behaviours?
  • how well do our models work? (and how do we assess that?)
  • how sensitive is land use change to planning policies?
  • how (why) do we engage with stakeholders?

In the discussion following the presentations it was interesting to have some social scientists join a conversation otherwise dominated by computer scientists and modellers. Most interesting was the viewpoint of a social scientist (a political scientist, I believe) who suggested that one reason social scientists may be skeptical of the power of ABMs is that social science inherently understands that ‘some agents are more important than others’, something that is not often well reflected (or at least analysed) in recent agent-based modelling.

Possibly the most important question raised in discussion was ‘what are we [as agent-based modellers] taking back to science more generally?’ There were plenty of examples in the projects of issues that have wider scientific applicability: scale issues, the intricacies of (very) large-scale simulation with millions of agents, the integration of social and ecological complexity, forest transition theory, edge effects in models, and the presence of provenance (path dependencies) in model dynamics. Agent-based modellers clearly deal with many interesting problems encountered and investigated in other areas of science, but whether we are doing a good job of communicating our experiences of these issues to the wider scientific community is certainly open to debate (and it was debated in the symposium).

A related question, recently raised on the SIMSOC listserv (but not in the GLP symposium), is ‘what are ABMs taking back to policy-making and policy-makers?’ Specifically, Scott Moss asked the question: ‘Does anyone know of a correct, real-time, [agent] model-based, policy-impact forecast?’ His reasoning behind this question is as follows:

“In relation to policy, it is common for social scientists (including but not exclusively economists) to use some a priori reasoning (frequently driven by a theory) to propose specific policies or to evaluate the benefits of alternative policies. In either case, the presumption must be that the benefits or relative benefits of the specified policies can be forecast. I am not aware of any successful tests of this presumption and none of my colleagues at the meeting of UK agent-based modelling experts could point me to a successful test in the sense of a well documented correct forecast of any policy benefit.

The importance of the question: If there is no history or, more weakly, no systematic history of successful forecasts of policy impacts, then is the standard approach to theory-driven policy advice defensible? If so, on what grounds? If not, then is an alternative approach to policy analysis and an alternative role for policy modelling indicated?”

The two most interesting replies were from Alan Penn and Mike Batty. Penn suggested [my links added]:

“… the best description I have heard of ‘policy’ in the sense you are using was by Peter Allen who described it “at best policy is a perturbation on the fitness landscape“. Making predictions of the outcome of any policy intervention therefore requires a detailed understanding of the shape of the mophogenetic landscape. Most often a perturbation will just nudge the system up a wall of the valley it is in, only for it to return back into the same valley and no significant lasting effect will be seen. On occasion a perturbation will nudge the trajectory over a pass into a neighbouring valley and some kind of change will result, but unless you have a proper understanding of the shape of this landscape you wont necessarily be able to say in advance what the new trajectory will be.

What this way of thinking about things implies is that what we need to understand is the shape of the fitness landscape. With that understanding we would be able to say how much of a nudge is needed (say the size of a tax incentive) to get over a pass. We would also know what direction the neighbouring ‘valleys’ might take the system, and this would allow predictions of the kind you want.”

Batty:

“I was at the meeting where Scott raised this issue. Alan Wilson said that his company GMAP was built on developing spatial interaction models for predicting short term shifts in retailing activity which routinely produced predictions that were close to the mark. There are no better examples than the large retail units that routinely – every week – run their models to make predictions in retail markets and reportedly they produce good predictions. These are outfits like Tesco, Asda, M[orrisons] and S[ainsbury’s] and so on. I cant give you chapter and verse of where these predictions have been verified and documented because I am an academic and dont have access to this sort of material. The kinds of models that I am referring to are essentially land use transport models which began in the 1960s and are still widely used today. Those people reading this post who arent familiar with these models because they are not agent based models can get a quick view by looking at my early book which is downloadable

I think that the problem with this debate is that it is focussed on academia and academics don’t traditionally revisit their models to see if longer term predictions work out. In fact for the reasons Alan [Penn] says one would probably not expect them to work out as we cant know the future. However there is loads of evidence about how well some models such as the ones I have referred to can fit existing data – ie in terms of their calibration. My book and lots of other work with these models shows that can predict the baseline rather well. In fact too well and the problem has been that although they predict the baseline well, they can often be quite deficient at predicting short term change well and often this arises from their cross sectional static nature and a million other problems that have been raised over the last 30 or more years.”

In response to Batty, Moss wrote:

“It is by no means unusual for model-based forecasts to be sufficiently accurate that the error is less than the value of the variable and perhaps much less. What systematically does not happen (and I know of no counterexample at all) is correct forecasting of volatile episodes such as big shifts in market shares in retail sales, macroeconomic recessions or recoveries, the onset of bear or bull phases in financial markets.

Policy initiatives are usually intended to change something from what has gone on before. Democratic governments — executive and legislative branches — typically investigate the reasons for choosing one policy rather than another or, at least, justify a proposed policy before implementation. Sometimes these justifications are based on forecasts of impacts derived from models. Certainly this is happening now in relation to the current recession. So the question is not whether there are ever correct forecasts. Certainly on the minimal criteria I suggested, there are many. The question is strictly about forecasts of policy impacts which, I conjecture, are rather like other major shifts in social trend and stability.

I believe this particular question is important because I don’t understand the point of policy modelling if we cannot usefully inform policy formation. If the usefulness we claim is that we can evaluate policy impacts and, in point of fact, we systematically (or always) produce incorrect forecasts of the direction and/or timing of intended changes, then it seems hard to argue that this is a useful exercise.”

But is focussing on the accuracy of forecasts of the future the only, or indeed the best, way of using models to inform policy? In recent times some policy-makers (e.g. Tony Blair and New Labour) have come to see science (and its tools of modelling and prediction) as some kind of ‘policy saviour’, leading to what is known as evidence-based policy-making. In this framework, science sits upstream of policy-making, providing evidence about the real state of the world that then trickles down to steer policy discourse. This may be fine when the science is solving puzzles, but there are many instances (climate change, for example) where science has not solved the problem but has merely demonstrated more clearly our ignorance and uncertainty about the material state of the world.

Thus, when (scientific) models are developed to represent ‘open’ systems, as most real world systems are (e.g. the atmosphere, the global economy), I would argue that model forecasts or predictions are not the best way to inform policy formation. I have discussed such a perspective previously. Models and modelling are useful for understanding the world and making decisions, but they do not provide this utility by making accurate predictions about it. I argue that modelling is useful because it forces us to make explicit our implicitly held ‘mental models’ providing others with the opportunity to scrutinise the logic and coherence of that model and discuss its implications. Modelling helps us to think about potential alternative futures, what factors are likely to be most important in determining future events, how these factors and events are (inter)related, and what the current state of the world implies for the likelihood of different future states.

Science, generally, is about finding out how the material world is. Policy, generally, is about deciding how the world ought to be and making it so. In many instances science can only provide an incomplete picture of how the world is, and even when it is confident about the material state of the world, there is only so much it can contribute to an argument about how we think the world should be (which is what policy-making is all about). Emphasising the use of scientific models and modelling as a discussant, not a predictor, may be the best way to inform policy formulation.

In a paper submitted with one of my PhD advisors we discuss this sort of thing with reference to ‘participatory science’. The GLP ABM symposium is planning to publish a special issue of Land Use Science containing papers from the meeting – in the manuscript I submit I plan to follow up in more detail on some of these participatory and ‘model as discussant’ issues with reference to my own agent-based modelling.