Recursion in society and simulation

This week I visited one of my former PhD advisors, Prof John Wainwright, at Durham University. We’ve been working on a manuscript together for a while now and, as it’s stalled recently, we thought it time we met up to re-inject some energy into it. The manuscript is a discussion piece about how agent-based modelling (ABM) can contribute to understanding and explanation in geography. We started talking about the idea in Pittsburgh in 2011 at a conference on the Epistemology of Modeling and Simulation. I searched through this blog to see where I’d mentioned the conference and manuscript previously but, to my surprise, it turns out I hadn’t until this post.

In our discussion of what we can learn through using ABM, John highlighted the work of Kurt Gödel and his incompleteness theorems. Not knowing all that much about that stuff, I’ve been ploughing my way through Douglas Hofstadter’s tome ‘Gödel, Escher, Bach: An Eternal Golden Braid’ – heavy going in places but very interesting. In particular, his discussion of the concept of recursion has caught my attention, as it’s something I’ve been identifying elsewhere.

The general concept of recursion involves nesting, like Russian dolls, stories within stories (as in Don Quixote) and images within images:


Computer programmers take advantage of recursion in their code, calling a given procedure from within that same procedure (hence their love of recursive acronyms like PHP [PHP: Hypertext Preprocessor]). An example of how this works is Saura and Martinez-Millan’s modified random clusters method for generating land cover patterns with given properties. I used this method in the simulation model I developed during my PhD and have re-coded the original algorithm for use in NetLogo [available online here]. In that code the grow-cover_cluster procedure is called from within itself, allowing clusters of pixels to ‘grow themselves’.
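The NetLogo code itself isn’t reproduced here, but to give a flavour of how a recursive cluster-growing procedure works, here is a minimal Python sketch of the same general idea (my own illustration with made-up parameters – not Saura and Martinez-Millan’s algorithm or my NetLogo implementation):

```python
import random

def grow_cluster(grid, row, col, p_spread=0.5):
    """Recursively grow a cluster of 'cover' cells from a seed cell.
    A sketch of the general idea only: mark the current cell, then with
    probability p_spread call this same procedure on each unmarked
    4-neighbour (the recursive step)."""
    grid[row][col] = 1  # mark this cell as part of the cluster
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
            if random.random() < p_spread:
                grow_cluster(grid, r, c, p_spread)  # the procedure calls itself

# grow a single cluster from the centre of a 20 x 20 grid
grid = [[0] * 20 for _ in range(20)]
grow_cluster(grid, 10, 10)
print(sum(map(sum, grid)), "cells in the cluster")
```

The point is simply the self-call: the procedure keeps invoking itself on neighbouring cells until no neighbour takes up the call.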


However, rather than get into the details of the use of recursion in programming, I want to highlight two other ways in which recursion is important in social activity and its simulation.

The first is how society (and social phenomena) has a recursive relationship with the people (and their activities) composing it. For example, Anthony Giddens’ theory of structuration argues that the social structures (i.e., rules and resources) that constrain or prompt individuals’ actions are also ultimately the result of those actions. Hence, there is a duality of structure which is:

“the essential recursiveness of social life, as constituted in social practices: structure is both medium and outcome of reproduction of practices. Structure enters simultaneously into the constitution of the agent and social practices, and ‘exists’ in the generating moments of this constitution”. (p.5 Giddens 1979)

Another example comes from Andrew Sayer in his latest book ‘Why Things Matter to People’, which I’m also working through at the moment. One of Sayer’s arguments is that we humans are “evaluative beings: we don’t just think and interact but evaluate things”. For Sayer, these day-to-day evaluations have a recursive relationship with the broader values that individuals hold, values being ‘sedimented’ valuations, “based on repeated particular experiences and valuations of actions, but [which also tend], recursively, to shape subsequent particular valuations of people and their actions”. (p.26 Sayer 2011)

However, while recursion is often used in computer programming and has been suggested as playing a role in different social processes (like those above), its examination in social simulation and ABM has not been so prominent to date. This was a point made by Paul Thagard at the Pittsburgh epistemology conference. Here, it seems, is an opportunity for those seeking to use simulation methods to better understand social patterns and phenomena. For example, in an ABM how do the interactions between individual agents combine to produce structures which in turn influence future interactions between agents?
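As a purely hypothetical sketch of that question (all names and numbers below are mine, not drawn from any published model), the loop might look something like this: agents act, their aggregated actions update a ‘structure’ variable, and that structure constrains the next round of actions:

```python
import random

def step(dispositions, norm, influence=0.2, noise=0.05):
    """One tick of a toy structuration loop (hypothetical example).
    Each agent's action is pulled towards the prevailing norm (structure
    constrains action); the norm is then recomputed from the actions
    actually taken (structure as the outcome of those actions)."""
    actions = [(1 - influence) * d + influence * norm + random.gauss(0, noise)
               for d in dispositions]
    new_norm = sum(actions) / len(actions)
    return actions, new_norm

dispositions = [random.random() for _ in range(100)]  # heterogeneous starting behaviours
norm = 0.5                                            # initial 'structure'
for t in range(50):
    dispositions, norm = step(dispositions, norm)
print("norm after 50 ticks:", round(norm, 3))
```

However crude, even this loop has the Giddensian shape: the norm is both the medium and the outcome of the agents’ actions.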

Second, it seems to me that there are potentially recursive processes surrounding any single simulation model. For if those we simulate should encounter the model in which they are represented (e.g., through participatory evaluation of the model), and if that encounter influences their future actions, do we not then need to account for such interactions between model and modelee (i.e., the person being modelled) in the model itself? This is a point I raised in the chapter I helped John Wainwright and Dr Mark Mulligan re-write for the second edition of their edited book “Environmental Modelling: Finding Simplicity in Complexity”:

“At the outset of this chapter we highlighted the inherent unpredictability of human behaviour and several of the examples we have presented may have done little to persuade you that current models of decision-making can make accurate forecasts about the future. A major reason for this unpredictability is because socio-economic systems are ‘open’ and have a propensity to structural changes in the very relationships that we hope to model. By open, we mean that the systems have flows of mass, energy, information and values into and out of them that may cause changes in political, economic, social and cultural meanings, processes and states. As a result, the behaviour and relationships of components are open to modification by events and phenomena from outside the system of study. This modification can even apply to us as modellers because of what economist George Soros has termed the ‘human uncertainty principle’ (Soros 2003). Soros draws parallels between his principle and the Heisenberg uncertainty principle in quantum mechanics. However, a more appropriate way to think about this problem might be by considering the distinction Ian Hacking makes between the classification of ‘indifferent’ and ‘interactive’ kinds (Hacking, 1999; also see Hoggart et al., 2002). Indifferent kinds – such as trees, rocks, or fish – are not aware that they are being classified by an observer. In contrast humans are ‘interactive kinds’ because they are aware and can respond to how they are being classified (including how modellers classify different kinds of agent behaviour in their models). Whereas indifferent kinds do not modify their behaviour because of their classification, an interactive kind might. This situation has the potential to invalidate a model of interactive kinds before it has even been used. For example, even if a modeller has correctly classified risk-takers vs. risk avoiders initially, a person in the system being modelled may modify their behaviour (e.g., their evaluation of certain risks) on seeing the results of that behaviour in the model. Although the initial structure of the model was appropriate, the model may potentially later lead to its own invalidity!” (p. 304, Millington et al. 2013)

The new edition was just published this week and will continue to be a great resource for teaching at upper levels (I used the first edition in the Systems Modeling and Simulation course I taught at MSU, for example).

More recently, I discussed these ideas about how models interact with their subjects with Peter McBurney, Professor in Informatics here at KCL. Peter has written a great article entitled ‘What are Models For?’, although it’s somewhat hidden away in the proceedings of a conference. In a similar manner to Epstein, Peter lists the various possible uses for simulation models (other than prediction, which is only one of many) and also discusses two uses in more detail – mensatic and epideictic. The former function relates to how models can bring people around a metaphorical table for discussion (e.g., for identifying and potentially deciding about policy trade-offs). The other, epideictic, relates to how ideas and arguments are presented and leads Peter to argue that representing real-world systems in a simulation model can force people to “engage in structured and rigorous thinking about [their problem] domain”.

John and I will be touching on these ideas about the mensatic and epideictic functions of models in our manuscript. However, beyond this discussion, and of relevance here, Peter discusses meta-models. That is, models of models. The purpose here, and continuing from the passage from my book chapter above, is to produce a meta-model (M) of another model (A) to better understand the relationships between Model A and the real intelligent entities inside the domain (X) that Model A represents:

“As with any model, constructing the meta-model M will allow us to explore “What if?” questions, such as alternative policies regarding the release of information arising from model A to the intelligent entities inside domain X. Indeed, we could even explore the consequences of allowing the entities inside X to have access to our meta-model M.” (p.185, McBurney 2012)

Thus, the models are nested with a hope of better understanding the recursive relationship between models and their subjects. Constructing such meta-models will likely not be trivial, but we’re thinking about it. Hopefully the manuscript John and I are working on will help further these ideas, as does writing blog posts like this.
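To make that nesting concrete, here is a deliberately crude Python sketch (entirely hypothetical names and numbers, not Peter’s formulation) of a meta-model M wrapping a model A: the switch controls whether A’s output is released to the ‘real’ agents in domain X, who may then change the very behaviour A classifies:

```python
import random

def model_a(risk_taker_fraction):
    """Model A: a toy forecast of some aggregate outcome, driven by the
    modeller's classification of agents as risk-takers."""
    return 0.8 * risk_taker_fraction + random.gauss(0, 0.02)

def meta_model(release_output, ticks=20):
    """Meta-model M: the agents in domain X may see Model A's forecast and
    react to it, so the classification Model A relies on can drift."""
    risk_takers = 0.5                      # an initially correct classification
    for _ in range(ticks):
        forecast = model_a(risk_takers)
        if release_output:
            # 'interactive kinds': seeing the forecast nudges some agents to
            # change their behaviour, undermining the original classification
            risk_takers = min(1.0, max(0.0, risk_takers - 0.3 * (forecast - 0.2)))
    return risk_takers

print("without information release:", round(meta_model(False), 3))
print("with information release:   ", round(meta_model(True), 3))
```

Nothing about the numbers matters; the design point is that the release of Model A’s output is itself a variable that the meta-model lets us experiment with.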

Selected References
McBurney (2012): What are models for? Pages 175-188, in: M. Cossentino, K. Tuyls and G. Weiss (Editors): Post-Proceedings of the Ninth European Workshop on Multi-Agent Systems (EUMAS 2011). Lecture Notes in Computer Science, volume 7541. Berlin, Germany: Springer.

Millington et al. (2013) Representing human activity in environmental modelling. In: Wainwright, J. and Mulligan, M. (Eds.) Environmental Modelling: Finding Simplicity in Complexity (2nd Edition). Wiley, pp. 291-307 [Online] [Wiley]

Agent-based models – because they’re worth it?

So term is drawing to an end. There’s been lots going on since I last posted here and I’ll write a full update of that over the Christmas break. For now I’ll just quickly highlight that the agent-based modelling book I contributed to has now been published.

Agent-Based Models of Geographical Systems is edited by Alison Heppenstall, Andrew Crooks, Linda See and Mike Batty and presents a comprehensive collection of papers on the background, theory, technical issues and applications of agent-based modelling (ABM) in geographical systems. David O’Sullivan, George Perry, John Wainwright and I put together a paper entitled ‘Agent-based models – because they’re worth it?’ that falls into the ‘Principles and Concepts of Agent-Based Modelling’ section of the book. To give an idea of what the paper is about, here’s the opening paragraph:

“In this chapter we critically examine the usefulness of agent-based models (ABMs) in geography. Such an examination is important because although ABMs offer some advantages when considered purely as faithful representations of their subject matter, agent-based approaches place much greater demands on computational resources, and on the model-builder in their requirements for explicit and well-grounded theories of the drivers of social, economic and cultural activity. Rather than assume that these features ensure that ABMs are self-evidently a good thing – an obviously superior representation in all cases – we take the contrary view, and attempt to identify the circumstances in which the additional effort that taking an agent-based approach requires can be justified. This justification is important as such models are also typically demanding of detailed data both for input parameters and evaluation and so raise other questions about their position within a broader research agenda.”

In the paper we ask:

  • Are modellers agent-based because they should be or because they can be?
  • What are agents? And what do they do?
  • So when do agents make a difference?

To summarise our response to this last question, we argue:

“Where agents’ preferences and (spatial) situations differ widely, and where agents’ decisions substantially alter the decision-making contexts for other agents, there is likely to be a good case for exploring the usefulness of an agent-based approach. This argument focuses attention on three model features: heterogeneity of the decision-making context of agents, the importance of interaction effects, and the overall size and organization of the system.”
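To illustrate in code what heterogeneity and interaction effects can do (a toy example of my own devising, not from the chapter), consider a Granovetter-style threshold model in which each agent adopts a behaviour once the fraction of adopters reaches its personal threshold:

```python
def adoption_cascade(thresholds):
    """Toy threshold model: each agent adopts once the fraction of adopters
    reaches its personal threshold, so every decision changes the context
    for everyone else (interaction effects)."""
    adopted = [t <= 0.0 for t in thresholds]  # unconditional adopters seed the process
    changed = True
    while changed:
        changed = False
        frac = sum(adopted) / len(adopted)
        for i, t in enumerate(thresholds):
            if not adopted[i] and frac >= t:
                adopted[i] = True
                changed = True
    return sum(adopted) / len(adopted)

# identical agents (all thresholds 0.2): nothing seeds the process, nothing happens
print(adoption_cascade([0.2] * 100))
# heterogeneous agents (thresholds 0.00-0.99): a few low thresholds trigger a full cascade
print(adoption_cascade([i / 100 for i in range(100)]))
```

With identical, effectively non-interacting agents the aggregate outcome could be written down without simulating anyone; once preferences differ and decisions feed back on each other, the agent-based version starts to earn its keep.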

Hopefully people will find this, and the rest of the book, useful! You can check out the full table of contents here.

Citation
O’Sullivan, D., J.D.A. Millington, G.L.W. Perry, J. Wainwright (2012) Agent-based models – because they’re worth it? pp. 109-123. In: Heppenstall, A.J., A.T. Crooks, L.M. See, M. Batty (Eds.) Agent-Based Models of Geographical Systems, Springer. DOI: 10.1007/978-90-481-8927-4_6

Philosophy of Modelling and RGS 2011

I just updated the Philosophy of Modelling page on my website. It’s not anything too detailed but I was prompted to add something by my activities over the last few weeks. I’ve been trying to make progress on both my ‘modelling narratives’ project and a paper with John Wainwright exploring the epistemological roles agent-based simulation might play beyond mathematical and statistical modelling (expected to appear in the new-ish journal Dialogues in Human Geography).

It’s only a few weeks now until this year’s Royal Geographical Society annual meeting (31 Aug – 2 Sept). I’m making two presentations, unfortunately both in the same session! It seems my work sits squarely within ‘Environmental modelling and decision making’, as both abstracts I submitted were allocated to that session on the Friday afternoon (Skempton Building, Room 060b; last session of the week so people might be flagging!). The first presentation will deal with the ‘generative’ properties of agent-based modelling [.pdf] and what that implies for how we might study and use that modelling approach, and the second will summarise the Michigan forest modelling work we’ve completed so far. Both abstracts are below.

This also seems a good point to highlight that King’s Geography Department are hosting a drinks reception on the Thursday evening from 18:45 at Eastside Bar, Princes Garden, SW7 1AZ. Free drinks for the first 50 guests, so get there sharpish!

Millington RGS 2011 Abstracts

Model Histories: The generative properties of agent-based modelling
Fri 2 Sept, Session 4, Skempton Building, Room 060b
James Millington (King’s College London)
David O’Sullivan (University of Auckland, New Zealand)
George Perry (University of Auckland, New Zealand)

Novels, Kundera has suggested, are a means to explore unrealised possibilities and potential futures, to ask questions and investigate scenarios, starting from the present state of the world as we observe it – the “trap the world has become”. In this paper, we argue that agent-based simulation models (ABMs) are much like Kundera’s view of novels, having generative properties that provide a means to explore alternative possible futures (or pasts) by allowing the user to investigate the likely results of causal mechanisms given pre-existing structures and in different conditions. Despite the great uptake in the application of ABMs, many have not taken full advantage of the representational and explanatory opportunities inherent in ABMs. Many applications have relied too much on ‘statistical portraits’ of aggregated system properties at the expense of more detailed stories about individual agent context and particular pathways from initial to final conditions (via heterogeneous agent interactions). We suggest that this generative modelling approach allows the production of narratives that can be used to i) demonstrate and illustrate the significance of the mechanisms underlying emergent patterns, ii) inspire users to reflect more deeply on modelled system properties and potential futures, and iii) provide a means to reveal the model building process and the routes to discovery that lie therein. We discuss these issues in the context of, and using examples from, the increasing number of studies using ABMs to investigate human-environment interactions in geography and the environmental sciences.

Trees, Birds and Timber: Coordinating Long-term Forest Management
Fri 2 Sept, Session 4, Skempton Building, Room 060b
James Millington (King’s College London)
Megan Matonis (Colorado State University, United States)
Michael Walters (Michigan State University, United States)
Kimberly Hall (The Nature Conservancy, United States)
Edward Laurent (American Bird Conservancy, United States)
Jianguo Liu (Michigan State University, United States)

Forest structure is an important determinant of habitat use by songbirds, including species of conservation concern. In this paper, we investigate the combined long-term impacts of variable tree regeneration and timber management on stand structure, bird occupancy probabilities, and timber production in the northern hardwood forests of Michigan’s Upper Peninsula. We develop species-specific relationships between bird occupancy and forest stand structure from field data. We integrate these bird-forest structure relationships with a forest model that couples a forest-gap tree regeneration submodel developed from our field data with the US Forest Service Forest Vegetation Simulator (Ontario variant). When simulated over a century, we find that higher tree regeneration densities ensure conditions that allow larger harvests of merchantable timber while reducing the impacts of timber harvest on bird forest-stand occupancy probability. When regeneration is poor (e.g., 25% or less of trees succeed in regenerating), timber harvest prescriptions have a greater relative influence on bird species occupancy probabilities than on the volume of merchantable timber harvested. Our results imply that forest and wildlife managers need to work together to ensure tree regeneration and prevent detrimental impacts on timber output and habitat for avian species over the long term. Where tree regeneration is currently poor (e.g., due to deer herbivory), forest and wildlife managers should pay particularly close attention to the long-term impacts of timber harvest prescriptions on bird species.

The Politics of Expectations

Next year’s Annual Meeting of the Association of American Geographers will be in Seattle. I was considering attending but I think it might be best to let the dust settle after moving back to the UK in January. Many others will be there, however, including James Porter, a colleague and friend from PhD times at King’s College London. On his behalf, here’s the call for papers for a session he’s organising at the meeting. Deadline is 1st October; more details at the bottom.

Call for Papers
The Politics of Expectations: Nature, Culture, and the Production of Space

Association of American Geographers, Annual Meeting, 12-16th April 2011, Seattle.

Session Organisers:
James Porter (King’s College London) and Samuel Randalls (University College London)

Expectations are incredibly powerful things. Whether materialized via climatic models, economic forecasts, or based on the promise of personalised medicines, expectations (and those who engineer them) play a deeply political yet often unsung role in bringing into being a particular kind of future as well as shaping a particular kind of present. Savvy actors seeking to engineer change may decide to write editorials, give press briefings, or try to normalise trust between the communities involved so as to enrol support and resources for an emerging marketplace (and consumer) they have envisioned. Such discursive as well as performative practices pre-emptively shape the social and economic context for developing technologies so that the actors involved not only develop their physical objects but also influence other people’s thinking. Rather than dismiss such efforts as exaggerated or self-serving claims, the “sociology of expectations” (cf. Brown, 2003; Hedgecoe, 2004; Law, 1994) points to the constructive, performative, and even destructive role such expectations have in today’s world where competition for funding, research impact and innovation are so intense. As many geographers researching the ‘commercialization of nature’ have noted (cf. Castree, 2003; Johnson, 2010; Lave et al., 2010; Prudham, 2005), expectations of future natures inhabit contemporary environmental management in a series of subtle and not so subtle ways for all actors.

But how are expectations created, configured, and stabilized? What, and whose, interests shape them, and in turn, whose interests do they shape? And why do some persist whilst others don’t? Such questions speak directly to the ways in which nature (and knowledge of it) is being increasingly commercialized and commodified through its interactions with science and technology. This session builds on controversies such as the climate change emails at UEA, medical trials, carbon forestry and much more to showcase how the “future” is mobilized to govern or proliferate uncertainty and justify particular mechanisms for managing environmental problems. Geographers are uniquely placed to comment on this providing theoretical depth and empirical evidence that sheds light on the commodification of nature whilst also contributing to the socio-technical analyses employed by science and technology studies scholars. We therefore invite papers addressing (though not limited to) the following questions:

  • Who constructs expectations and why? How / where do they get enacted (i.e. technological, sociocultural, artefacts, etc.)? And how do they get accepted, institutionalized, or perhaps resisted?
  • How are expectations of nature commercialized? To what extent are expectations central to processes of commercialization and does this vary depending on the specific environmental arena? Are there unnatural expectations?
  • Do expectations have agency? Can they be negotiated or adapted? If so, what role have geographers played in shaping past perceptions and might hope to play in the future?
  • What happens if a set of expectations is not successful? Why didn’t they succeed? And what lessons can we learn?

Abstracts should be sent to both James Porter (james.porter at kcl.ac.uk) and Samuel Randalls (s.randalls at ucl.ac.uk) by Friday 1st October 2010.

For conference information, see: www.aag.org/cs/annualmeeting

The Omnivores’ Trifecta: A feast of ideas

This week I went to a seminar presented by Dr Richard Bawden of the Systemic Development Institute, Australia. This was the first event in MSU’s “conversation about our food future”. It turned out to be much more interesting than I had hoped; Bawden is an engaging and charismatic speaker who presented a thoughtful perspective on what he termed ‘The Omnivores’ Trifecta’: Agriculture, Food and Health and the Systemic Relationships between them. He covered a hearty spread of ideas, so I’ll recap his most interesting points in bite-sized pieces:

i) Bawden suggested that Agriculture, Food and Health (A-F-H) when considered separately are not a system. But by understanding each as a discourse (i.e. as a subject for “formal discussion or debate”) they can be viewed from a systemic perspective.

ii) At the intersection of these three subjects are four very important (sub-)discourses which Bawden termed the “engagement discourse subsystem”. These are: business, lay citizens, governance, and experts.

iii) Bawden proposed that it is the profound differences in episteme (worldview) between these discourse ‘subsystems’ that are at the heart of the majority of the conflicts across the A-F-H system and the environment in which it is situated.

iv) These epistemic differences are so profound as to be polemic. Bawden bemoaned this fact and highlighted that “Dialectic yields to Polemic”. He emphasised that dialectics are the only way forward to forge a world in common and that polemics prevent deliberation and debate and kill democracy.

v) To illustrate these points Bawden used the case of Australian agriculture since the mid-20th century. He described this case as being characteristic of many messy, wicked problems and argued that reductionist science alone was insufficient to bring resolution (which is why he founded the Systemic Development Institute). During this argument he quoted Beck but questioned whether we have reached second modernity. Bawden argued that the “culture of technical control” that still prevails within current modernist society has an episteme that privileges fact over value, analysis over synthesis, individualism over communalism, teaching over learning and productionism over sustainablism.

vi) On these last two dichotomies, Bawden suggested that the question of what is to be sustained (and therefore what sustainability is) is a moral question not a technical one.

vii) He proposed that higher education is about learning differently, not learning more; the ability to look at the world and make sense of it for oneself (and then take action in response) is what characterises a good education. Awareness of the presence of different worldviews is key to this ability. Furthermore, Bawden argued that the complete learner will be prepared to enter a form of learning that the academy is currently unable to provide because it is too reductionist. This learning would require critical reflection on one’s own worldview, as Jack Mezirow has proposed.

viii) Bawden then presented the diagram that synthesises his message (see below). This diagram describes the “integrated process of the critical learning system” and shows how perceiving, understanding, planning and acting are connected within our rational experience of the world and how they are linked to the intuitive facets of learning.


Quite the feast of ideas eh? I’m still digesting them and might be for a while. But the key message I take away from this is a post-normal one; in learning about human-environment interactions and to solve current wicked problems, inter-epistemic as well as inter-disciplinary work will be needed. Although different scientific disciplines such as ecology, biology, and chemistry have different terminology and conventions, they share a worldview – the one that favours facts over values and aims to subsume empirical observations into universal laws and theories. Other worldviews are available. Inter-epistemic human-environment study would seek to cross the boundaries between worldviews, recognize that reductionist science is only one way to understand the world and is unlikely to provide complete answers to wicked problems, and emphasise dialectics over polemics.

Interdisciplinarity, Sustainability and Critical Realism

I have a new paper to add to my collection of favourites. Hidden in the somewhat obscure Journal of Critical Realism, it touches on several issues that I often find myself thinking about and studying: Interdisciplinarity, Ecology and Scientific Theory.

Karl Høyer and Petter Naess also have plenty to say about sustainability, planning and decision-making and, although they use the case of sustainable urban development, much of what they discuss is relevant to broader issues in the study of coupled human and natural systems. Their perspective resonates with my own.

For example, they outline some of the differences between studying open and closed systems (interestingly with reference to some Nordic writers I have not previously encountered):

… The principle of repetitiveness is crucial in these kinds of [reductionist] science [e.g. atomic physics, chemistry] and their related technologies. But such repetitiveness only takes place in closed systems manipulated by humans, as in laboratories. We will never find it in nature, as strongly emphasised by both Kvaløy and Hägerstrand within the Nordic school. In nature there are always open, complex systems, continuously changing with time. This understanding is in line with key tenets of critical realism. Many of our most serious ecological problems can be explained this way: technologies, their products and substances, developed and tested in closed systems under artificial conditions that generate the illusion of generalised repetitiveness, are released in the real nature of open systems and non-existing repetitiveness. We are always taken by surprise when we experience new, unexpected ecological effects. But this ought not to be surprising at all; under these conditions such effects will necessarily turn up all the time.

At the same time, developing strategies for a sustainable future relies heavily on the possibility of predicting the consequences of alternative solutions with at least some degree of precision. Arguably, a number of socio-technical systems, such as the spatial structures of cities and their relationships with social life and human activities, make up ‘pseudo-closed’ systems where the scope for prediction of outcomes of a proposed intervention is clearly lower than in the closed systems of the experiments of the natural sciences, but nevertheless higher than in entirely open systems. Anticipation of consequences, which is indispensable in planning, is therefore possible and recommendable, although fallible.

The main point of their paper, however, is the important role critical realism [see also] might play as a platform for interdisciplinary research. Although Høyer and Naess do highlight some of the more political reasons for scientific and academic disciplinarity, their main points are philosophical:

…the barriers to interdisciplinary integration may also result from metatheoretical positions explicitly excluding certain types of knowledge and methods necessary for a multidimensional analysis of sustainability policies, or even rejecting the existence of some types of impacts and/or the entities causing these impacts.

These philosophical (metatheoretical) barriers include staunchly positivist and strong social constructionist perspectives:

According to a positivist view, social science research should emulate research within the natural sciences as much as possible. Knowledge based on research where the observations do not lend themselves to mathematical measurement and analysis will then typically be considered less valid and perhaps be dismissed as merely subjective opinions. Needless to say, such a view hardly encourages natural scientists to integrate knowledge based on qualitative social research or from the humanities. Researchers adhering to an empiricist/naive realist metatheory will also tend to dismiss claims of causality in cases where the causal powers do not manifest themselves in strong and regular patterns of events – although such strong regularities are rare in social life.

On the other hand, a strong social constructionist position implies a collapsing of the existence of social objects to the participating agents’ conception or understanding of these objects. …strong social constructionism would typically limit the scope to the cultural processes through which certain phenomena come to be perceived as environmental problems, and neglecting the underlying structural mechanisms creating these phenomena as well as their impacts on the physical environment. At best, strong social constructionism is ambivalent as to whether we can know anything at all about reality beyond the discourses. Such ‘empty realism’, typical of dominant strands of postmodern thought, implies that truth is being completely relativised to discourses on the surface of reality, with the result that one must a priori give up saying anything about what exists outside these discourses. At worst, strong social constructionism may pave the way for the purely idealist view that there is no such reality.

At opposite ends of the positivist-relativist spectrum, neither of these perspectives seems the most useful for interdisciplinary research. Something that sits between these two extremes – critical realism – might be more useful [I can’t do this next section justice in an abridged version – and this is the main point of the article – so here it is in its entirety]:

The above-mentioned examples of shortcomings of reductionist metatheories do not imply that research based on these paradigms is necessarily without value. However, reductionist paradigms tend to function as straitjackets preventing researchers from taking into consideration phenomena and factors of influence not compatible with or ignored in their metatheory. In practice, researchers have often deviated from the limitations prescribed by their espoused metatheoretical positions. Usually, such deviations have tended to improve research rather than the opposite.

However, for interdisciplinary research, there is an obvious need for a more inclusive metatheoretical platform. According to Bhaskar and Danermark, critical realism provides such a platform, as it is ontologically characterised doubly by inclusiveness greater than competing metatheories: it is maximally inclusive in terms of allowing causal powers at different levels of reality to be empirically investigated; and it is maximally inclusive in terms of accommodating insights of other meta-theoretical positions while avoiding their drawbacks.

Arguably, many of the ecologists and ecophilosophers referred to earlier in this paper have implicitly based their work on the same basic assumptions as critical realism. Some critical realist thinkers have also addressed ecological and environmental problems explicitly. Notably, Ted Benton and Peter Dickens have demonstrated the need for an epistemology that recognises social mediation of knowledge but also the social and material dimensions of environmental problems, and how the absence of an interdisciplinary perspective hinders essential understanding of nature/society relationships.

According to critical realism, concrete things or events in open systems must normally be explained ‘in terms of a multiplicity of mechanisms, potentially of radically different kinds (and potentially demarcating the site of distinct disciplines) corresponding to different levels or aspects of reality’. As can be seen from the above, the objects involved in explanations of the (un)sustainability of urban development belong partially to the natural sciences, partially to the social sciences, and are partially of a normative or ethical character. They also belong to different geographical or organisational scales. Thus, similar to (and arguably to an even higher extent than) what Bhaskar and Danermark state about disability research, events and processes influencing the sustainability of urban development must be understood in terms of physical, biological, socioeconomic, cultural and normative kinds of mechanisms, types of contexts and characteristic effects.

According to Bhaskar, social life must be seen in the depiction of human nature as ‘four-planar social being’, which implies that every social event must be understood in terms of four dialectically interdependent planes: (a) material transactions with nature, (b) social interaction between agents, (c) social structure proper, and (d) the stratification of embodied personalities of agents. All these categories of impacts should be addressed in research on sustainable urban development. Impacts along the first dimension, category (a), typically include consequences of urban development for the physical environment. Consequences in terms of changing location of activities and changing travelling patterns are examples of impacts within category (b). But this category also includes the social interaction between agents leading to changes in, among others, the spatial and social structures of cities. Relevant mechanisms at the level of social structure proper (category [c]) might include, for example, impacts of housing market conditions on residential development projects and consequences of residential development projects for the overall urban structure. The stratified personalities of agents (category [d]) include both influences of agents on society and the physical environment and influences of society and the physical environment on the agents. The latter sub-category includes physical impacts of urban development, such as unwholesome noise and air pollution, but also impacts of the way urban planning and decision-making processes are organised, for example, in terms of effects on people’s self esteem, values, opportunities for personal growth and their motivation for participating in democratic processes. The influence of discourses on the population’s beliefs about the changes necessary to bring about sustainable development and the conditions for implementing such changes also belongs to this sub-category. The sub-category of influences of agents on society and the physical environment includes the exercise of power by individual and corporate agents, their participation in political debates, their contribution to knowledge, and their practices in terms of, for example, type and location of residence, mobility, lifestyles more generally, and so on.

Regarding issues of urban sustainability, the categories (a)–(d) are highly interrelated. If this is the case, we are facing what Bhaskar and Danermark characterise as a ‘laminated’ system, in which case explanations involving mechanisms at several or all of these levels could be termed ‘laminated explanations’. In such situations, monodisciplinary empirical studies taking into consideration only those factors of influence ‘belonging’ to the researcher’s own discipline run a serious risk of misinterpreting these influences. Examples of such misinterpretations are analyses where increasing car travel in cities is explained purely in terms of prevailing attitudes and lifestyles, addressing neither political-economic structures contributing to consumerism and car-oriented attitudes, nor spatial-structural patterns creating increased needs for individual motorised travel.

Moreover, the different strata of reality and their related mechanisms (that is, physical, biological, socio-economic, cultural and normative kinds of mechanisms) involved in urban development cannot be understood only in terms of categories (a)–(d) above. They are also situated in macroscopic (or overlying) and less macroscopic (or underlying) kinds of structures or mechanisms. For research into sustainable urban development issues, such scale-awareness is crucial. Much of the disagreement between proponents of the ‘green’ and the ‘compact’ models of environmentally sustainable urban development can probably be attributed to their focus on problems and challenges at different geographical scales: whereas the ‘compact city’ model has focused in particular on the impacts of urban development on the surrounding environment (ranging from the nearest countryside to the global level), proponents of the ‘green city’ model have mainly been concerned about the environment within the city itself. A truly environmentally sustainable urban development would require an integration of elements both from the former ‘city within the ecology’ and the latter ‘ecology within the city’ approaches. Similarly, analyses of social aspects of sustainable development need to include both local and global effects, and combine an understanding of practices within particular groups with an analysis of how different measures and traits of development affect the distribution of benefits and burdens across groups.

Acknowledging that reality consists of different strata, that multiple causes are usually influencing events and situations in open systems, and that a pluralism of research methods is recommended as long as they take the ontological status of the research object into due consideration, critical realism appears to be particularly well suited as a metatheoretical platform for interdisciplinary research. This applies not least to research into urban sustainability issues where, as has been illustrated above, other metatheoretical positions tend to limit the scope of analysis in such a way that sub-optimal policies within a particular aspect of sustainability are encouraged at the cost of policies addressing the challenges of sustainable urban development in a comprehensive way.

In conclusion: critical realism can play a very important role as an underlabourer of interdisciplinarity, with its maximal inclusiveness both in terms of allowing causal powers at different levels of reality to be empirically investigated and in terms of accommodating insights of other meta-theoretical positions while avoiding their drawbacks.

I’m going to have to spend some time thinking about this but there seems to be plenty to get one’s teeth into here with regard to the study of coupled human and natural systems and the use of agent-based modelling approaches. For example, agent-based modelling seems to offer a means to represent Bhaskar’s four planes but there are plenty of questions about how to do this appropriately. I also need to think more carefully about how these four planes are manifested in the systems I study. Generally, however, it seems that critical realism offers a useful foundation from which to build interdisciplinary studies of the interaction of humans and their environment for the exploration of potential pathways to ensure sustainable landscapes.
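Purely as a thought experiment (a hypothetical structure of my own, not anything proposed by Høyer and Naess or Bhaskar), one way to start is to ask what state an individual agent in an ABM would need to carry for each of the four planes to be explicitly represented:

```python
from dataclasses import dataclass, field

@dataclass
class FourPlaneAgent:
    """Hypothetical sketch of agent state touching Bhaskar's four planes."""
    resource_use: float = 0.0                        # (a) material transactions with nature
    neighbours: list = field(default_factory=list)   # (b) social interaction between agents
    roles: dict = field(default_factory=dict)        # (c) position within social structure proper
    values: dict = field(default_factory=dict)       # (d) embodied personality: values, motivations

# e.g. a farmer whose water use (a) is negotiated with neighbours (b),
# constrained by tenure rules (c) and filtered through personal values (d)
farmer = FourPlaneAgent(resource_use=1.2,
                        roles={"tenure": "leaseholder"},
                        values={"stewardship": 0.8})
print(farmer)
```

The hard questions are, of course, in the processes linking these slots rather than in the slots themselves.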

Reference
Høyer, K.G. and Naess, P. (2008) Interdisciplinarity, ecology and scientific theory: The case of sustainable urban development. Journal of Critical Realism 7(2), 179-207. doi: 10.1558/jocr.v7i2.179

What is the point… of social simulation modelling?

Previously, I mentioned a thread on SIMSOC initiated by Scott Moss. He asked ‘Does anyone know of a correct, real-time, [agent] model-based, policy-impact forecast?’. Following on from the responses to that question, earlier this week he started a new thread entitled ‘What’s the Point?’:

“We already know that economic recessions and recoveries have probably never been forecast correctly — at least no counter-examples have been offered. Similarly, no financial market crashes or recoveries or significant shifts in market shares have ever, as far as we know, been forecast correctly in real time.

I believe that social simulation modelling is useful for reasons I have been exploring in publications for a number of years. But I also recognise that my beliefs are not widely held.

So I would be interested to know why other modellers think that modelling is useful or, if not useful, why they do it.”

After reading others’ responses I decided to reply with my own view:

“For me prediction of the future is only one facet of modelling (whether agent-based or any other kind) and not necessarily the primary use, especially with regards policy modelling. This view stems partly from the philosophical difficulties outlined by Oreskes et al. (1994), amongst others. I agree with Mike that the field is still in the early stages of development, but I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world’. As Pablo suggested, if we are to predict the future the inherent uncertainties will be best highlighted and accounted for by ensuring predictions are tied to a probability.”

I also highlighted the reasons offered by Epstein and outlined a couple of other reasons I think ABM are useful.

There was a brief response to mine, and then another, more assertive, response that (I think) highlights a common confusion between the different uses of prediction in modelling:

“If models of economic policy are fundamentally unable to at some point predict the effects of policy — that is, to in some measure predict the future — then, to be blunt, what good are they? If they are unable to be predictive then they have no empirical, practical, or theoretical value. What’s left? I ask that in all seriousness.

Referring to Epstein’s article, if a model is not sufficiently grounded to show predictive power (a necessary condition of scientific results), then how can it be said to have any explanatory power? Without prediction as a stringent filter, any amount of explanation from a model becomes equivalent to a “just so” story, at worst giving old suppositions the unearned weight of observation, and at best hitting unknowably close to the mark by accident. To put that differently, if I have a model that provides a neat and tidy explanation of some social phenomena, and yet that model does not successfully replicate (and thus predict) real-world results to any degree, then we have no way of knowing if it is more accurate as an explanation than “the stars made it happen” or any other pseudo-scientific explanation. Explanations abound; we have never been short of them. Those that can be cross-checked in a predictive fashion against hard reality are those that have enduring value.

But the difficulty of creating even probabalistically predictive models, and the relative infancy of our knowledge of models and how they correspond to real-world phenomena, should not lead us into denying the need for prediction, nor into self-justification in the face of these difficulties. Rather than a scholarly “the dog ate my homework,” let’s acknowledge where we are, and maintain our standards of what modeling needs to do to be effective and valuable in any practical or theoretical way. Lowering the bar (we can “train practitioners” and “discipline policy dialogue” even if we have no way of showing that any one model is better than another) does not help the cause of agent-based modeling in the long run.”

I felt this required a response – it seemed to me that the difference between logical prediction and temporal prediction was being missed:

“In my earlier post I wrote: “I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world'”. I was careful about how I worded this [more careful than ensuring correct formatting of the post it seems – my original post is below in a more human-readable format] and maybe some clarification in the light of Mike’s comments would be useful. Here goes…

Precisely predicting the future state of an ‘open’ system at a particular instant in time does not imply we have explained or understand it (due to the philosophical issues of affirming the consequent, equifinality, underdetermination, etc.). To be really useful for explanation and to have enduring value model predictions of any system need to be cross-checked against hard reality *many times*, and in the case of societies probably also in many places (and should ideally be produced by models that are consistent with other theories). Producing multiple accurate predictions will be particularly tricky for things like the global economy for which we only have one example (but of course it will be easier where experimental replication is more logistically feasible).

My point is two-fold:
1) a single, precise prediction of a future does not really mean much with regard to our understanding of an open system,
2) multiple precise predictions are more useful but will be more difficult to come by.

This doesn’t necessarily mean that we will never be able to consistently predict the future of open systems (in Scott’s sense of correctly forecasting the timing and direction of change of specified indicators). I just think it’s a ways off yet, that there will always be uncertainty, and that we need to deal with this uncertainty explicitly via probabilistic output from model ensembles and other methods. Rather than lowering standards, a heuristic use of models demands we think more closely about *how* we model and what information we provide to policy makers (isn’t that the point of modelling policy outcomes in the end?).

Let’s be clear, the heuristic use of models does not allow us to ignore the real world – it still requires us to compare our model output with empirical data. And as Mike rightly pointed out, many of Epstein’s reasons to model – other than to predict – require such comparisons. However, the scientific modelling process of iteratively comparing model output with empirical data and then updating our models is a heuristic one – it does not require that precise prediction at a specific point in the future is the goal before all others.

Lowering any level of standards will not help modelling – but I would argue that understanding and acknowledging the limits of using modelling in different situations in the short term will actually help to improve standards in the long run. To develop this understanding we need to push models and modelling to their limits to find out what works, what we can do and what we can’t – that includes iteratively testing the temporal predictions of models. Iteratively testing models, understanding the philosophical issues of attempting to model social systems, exploring the use of models and modelling qualitatively (as a discussant, and a communication tool, etc.) should help modellers improve the information, the recommendations, and the working relationships they have with policy-makers.

In the long run I’d argue that both modellers and policy-makers will benefit from a pragmatic and pluralistic approach to modelling – one that acknowledges there are multiple approaches and uses of models and modelling to address societal (and environmental) questions and problems, and that [possibly self evidently] in different situations different approaches will be warranted. Predicting the future should not be the only goal of modelling social (or environmental) systems and hopefully this thread will continue to throw up alternative ideas for how we can use models and the process of modelling.”
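As an aside on the ‘probabilistic output from model ensembles’ I mentioned above, the basic idea can be sketched in a few lines (a toy model with made-up numbers, purely for illustration): run the same stochastic model many times and report a distribution rather than a single trajectory.

```python
import random
import statistics

def toy_model(growth=0.02, shock_prob=0.05, ticks=100):
    """A toy stochastic model: an indicator grows each tick, with occasional shocks."""
    x = 1.0
    for _ in range(ticks):
        x *= 1 + growth + random.gauss(0, 0.01)
        if random.random() < shock_prob:
            x *= 0.8  # a rare disruptive event
    return x

# an ensemble of 1000 runs summarised probabilistically rather than as one forecast
ensemble = sorted(toy_model() for _ in range(1000))
print("median outcome:", round(statistics.median(ensemble), 2))
print("90% interval:", round(ensemble[50], 2), "to", round(ensemble[949], 2))
```

The interval, not the single number, is the output worth giving to a policy-maker.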

Note that I didn’t explicitly point out the difference between the two different uses of prediction (that Oreskes and other have previously highlighted). It took Dan Olner a couple of posts later to explicitly describe the difference:

“We need some better words to describe model purpose. I would distinguish two –

a. Forecasting (not prediction) – As Mike Sellers notes, future prediction is usually “inherently probabalistic” – we need to know whether our models can do any better than chance, and how that success tails off as time passes. Often when we talk about “prediction” this is what we mean – prediction of a more-or-less uncertain future. I can’t think of a better word than forecasting.

b. Ontological prediction (OK, that’s two words!) – a term from Gregor Betz, Prediction Or Prophecy (2006). He gives the example of the prediction of Neptune’s existence from Newton’s laws – Uranus’ orbit implied that another body must exist. Betz’s point is that an ontological prediction is “timeless” – the phenomenon was always there. Einstein’s predictions about light bending near the sun is another: something that always happened, we just didn’t think to look for it. (And doubtless Eddington wouldn’t have considered *how* to look, without the theory.)

In this sense forecasting (my temporal prediction) is distinctly temporal (or spatial) and demands some statement about when (or where) an event or phenomenon will occur. In contrast, ontological prediction (my logical prediction) is independent of time and/or space and is often used in closed system experiments searching for ‘universal’ laws. I wrote more about this in a series of blog posts a while back on the validation of models of open systems.

This discussion is ongoing on SIMSOC and Scott Moss has recently posted again suggesting a summary of the early responses:

“I think a perhaps extreme summary of the common element in the responses to my initial question (what is the point?, 9/6/09) is this:

**The point of modelling is to achieve precision as distinct from accuracy.**

That is, a model is a more or less complicated formal function relating a set of inputs clearly to a set of outputs. The formal inputs and outputs should relate unambiguously to the semantics of policy discussions or descriptions of observed social states and/or processes.

This precision has a number of virtues including the reasons for modelling listed by Josh Epstein. The reasons offered by Epstein and expressed separately by Lynne Hamill in her response to my question include the bounding and informing of policy discussions.

I find it interesting that most of my respondents do not consider accuracy to be an issue (though several believe that some empirically justified frequency or even probability distributions can be produced by models). And Epstein explicitly avoids using the term validation in the sense of confirmation that a model in some sense accurately describes its target phenomena.

So the upshot of all this is that models provide a kind of socially relevant precision. I think it is implicit in all of the responses (and the Epstein note) that, because of the precision, other people should care about the implications of our respective models. This leads to my follow-on questions:

Is precision a good enough reason for anyone to take seriously anyone else’s model? If it is not a good enough reason, then what is?

And so arises the debate about the importance of accuracy over precision (but the original ‘What is the point’ thread continues also). In hindsight, I think it may have been more appropriate for me to use the word accurate than precise in my postings. All this debate may seem to be just semantics and navel-gazing to many people, but as I argued in my second post, understanding the underlying philosophical basis of modelling and representing reality (however we might measure or perceive it) gives us a better chance of improving models and modelling in the long run…

Predicting 2009

Over the holiday period the media offer us plenty of fodder to discuss the past year’s events and what the future may hold. Whether it’s current affairs, music, sport, economics or any other aspect of human activity, most media outlets have something to say about what people did that was good, what they did that was bad, and what they’ll do next, in the hope that they can keep their sales up over the holiday period.

Every year The Economist publishes a collection of forecasts and predictions for the year ahead. The views and opinions of journalists, politicians and business people accompany interactive maps and graphs that provide numerical analysis. But how good are these forecasts and predictions? And what use are they? This year The Economist stopped to look back on how well it performed:

“Who would have thought, at the start of 2008, that the year would see crisis engulf once-sturdy names from Freddie Mac and Fannie Mae to AIG, Merrill Lynch, HBOS, Wachovia and Washington Mutual (WaMu)?

Not us. The World in 2008 failed to predict any of this. We also failed to foresee Russia’s invasion of Georgia (though our Moscow correspondent swears it was in his first draft). We said the OPEC cartel would aim to keep oil prices in the lofty range of $60-80 a barrel (the price peaked at $147 in July)…”

And on the list goes. Not that any of us are particularly surprised, are we? So why should we bother to read their predictions for the next year? In its defence, The Economist offers a couple of points. First, the usual tactic (for anyone defending their predictions) of pointing out what they actually did get right (slumping house prices, interest-rate cuts, etc). But then they highlight a perspective which I think is almost essential when thinking about predictions of future social or economic activity:

“The second reason to carry on reading is that, oddly enough, getting predictions right or wrong is not all that matters. The point is also to capture a broad range of issues and events that will shape the coming year, to give a sense of the global agenda.”

Such a view is inherently realist. Given the multitudes of interacting elements and potential influences affecting economic systems, given that it is an ‘open’ historical system, producing a precise prediction about future system states is nigh-on impossible. Naomi Oreskes has highlighted the difference between ‘logical prediction’ (if A and B then C) and ‘temporal prediction’ (event C will happen at time t + 10), and this certainly applies here [I’m surprised I haven’t written about this distinction on this blog before – I’ll try to remedy that soon]. Rather than simply developing models or predictions with the hope of accurately matching the timing and magnitude of future empirical events, I argue that we will be better placed (in many circumstances related to human social and economic activity) to use models and predictions as discussants to lead to better decision-making and as means to develop an understanding of the relevant causal structures and mechanisms at play.

In a short section of his recent book and TV series, The Ascent of Money, Niall Ferguson talks about the importance of considering history in economic markets and decision-making. He presents the example of Long Term Capital Management (LTCM) and their attempt to use mathematical models of the global economic system to guide their trading decision-making. In Ferguson’s words, their model was based on the following set of assumptions about how the system worked:

“Imagine another planet – a planet without all the complicating frictions caused by subjective, sometimes irrational human beings. One where the inhabitants were omniscient and perfectly rational; where they instantly absorbed all new information and used it to maximise profits; where they never stopped trading; where markets were continuous, frictionless and completely liquid. Financial markets on this planet would follow a ‘random walk’, meaning that each day’s prices would be quite unrelated to the previous day’s but would reflect all the relevant information available.” p.320
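The random-walk assumption is easy to make concrete. The minimal Python sketch below is not LTCM’s model (the function name, volatility and numbers are invented purely for illustration); it simply shows a world in which each day’s return is drawn independently of every previous day’s:

```python
import random

def random_walk_prices(start_price, n_days, daily_volatility=0.01, seed=None):
    """Simulate a 'Planet Finance' price series: each day's return is an
    independent random draw, so today's change carries no information about
    tomorrow's."""
    rng = random.Random(seed)
    prices = [start_price]
    for _ in range(n_days):
        daily_return = rng.gauss(0, daily_volatility)  # independent of all previous days
        prices.append(prices[-1] * (1 + daily_return))
    return prices

if __name__ == "__main__":
    series = random_walk_prices(100.0, 250, seed=42)
    print(round(series[0], 2), "...", round(series[-1], 2))
```

In such a world history carries no extra information: however long a price record you feed it, tomorrow’s change has the same distribution.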

Using these assumptions about how the world works, the Nobel prize-winning economists Myron Scholes and Robert C. Merton derived a mathematical model. Initially the model performed wonderfully, allowing returns of 40% on investments for the first couple of years. However, crises in the Asian and Russian financial systems in 1997 and 1998 – not accounted for in the assumptions of the mathematical model – resulted in LTCM losing $1.85 billion through the middle of 1998. The model’s assumptions could not account for these events, and consequently its predictions were inaccurate. As Ferguson puts it:

“…the Nobel prize winners had known plenty of mathematics, but not enough history. They had understood the beautiful theory of Planet Finance, but overlooked the messy past of Planet Earth.” p.329

When Ferguson says ‘not enough history’, his implication is that the mathematical model was based on insufficient empirical data. Had Scholes and Merton used data covering the variability of the global economic system over a longer period, it might have included a stock market downturn similar to that caused by the Asian and Russian economic crises. But a data set for a longer period would likely have been characterised by greater overall variability, requiring more parameters and variables to account for it. Whether such a model would have performed as well as the model they actually produced is questionable, as is the potential of any model to predict the exact timing and magnitude of a ‘significant’ event (e.g. a market crash).

Further, Ferguson points out that the problem with the LTCM model wasn’t just that it was built on too little data: the underlying assumptions (i.e. their understanding of Planet Finance) simply weren’t realistic enough to accurately predict Planet Earth over ‘long’ periods of time. Traders and economic actors are not perfectly rational and do not have access to all the data all the time. This realisation has led (more realistic) economists to develop ideas like bounded rationality.

Assuming that financial traders try to be rational is probably reasonable. But it has been pointed out that “[r]ationality is not tantamount to optimality”, and in situations where information, memory or computing resources are incomplete (as is usually the case in the real world) the principle of bounded rationality is a more worthwhile approach. For example, Herbert Simon recognised that actors in the real world rarely optimise their behaviour; rather, they merely try to do ‘well enough’ to satisfy their goal(s). Simon termed this non-optimal behaviour ‘satisficing’, and it has formed the basis for much of bounded rationality theory since. Satisficing is essentially a cost-benefit trade-off: the search for options stops once the utility of an option exceeds an aspiration level.
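A minimal Python sketch, with entirely hypothetical options, utility function and aspiration level, of the difference between optimising and Simon-style satisficing:

```python
def optimise(options, utility):
    """Classical rationality: evaluate every option and return the best one."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Simon-style satisficing: stop at the first option whose utility meets
    the aspiration level, rather than searching for the optimum."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # no option was 'good enough'

if __name__ == "__main__":
    # Hypothetical job offers: salary (per year) and commute (minutes)
    offers = [
        {"salary": 42000, "commute": 60},
        {"salary": 45000, "commute": 40},
        {"salary": 50000, "commute": 90},
    ]
    utility = lambda offer: offer["salary"] - 100 * offer["commute"]
    print(optimise(offers, utility))          # inspects every offer, picks the best
    print(satisfice(offers, utility, 35000))  # accepts the first 'good enough' offer
```

The optimiser inspects every offer before choosing; the satisficer accepts the first offer that clears its aspiration level, which here is a different (and ‘worse’) choice, but one reached with far less search.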

Thinking along the same lines George Soros has developed his own ‘Human Uncertainty Principle’. This principle “holds that people’s understanding of the world in which they live cannot correspond to the facts and be complete and coherent at the same time. Insofar as people’s thinking is confined to the facts, it is not sufficient to reach decisions; and insofar as it serves as the basis of decisions, it cannot be confined to the facts. The human uncertainty principle applies to both thinking and reality. It ensures that our understanding is often incoherent and always incomplete and introduces an element of genuine uncertainty – as distinct from randomness – into the course of events.

The human uncertainty principle bears a strong resemblance to Heisenberg’s uncertainty principle, which holds that the position and momentum of quantum particles cannot be measured at the same time. But there is an important difference. Heisenberg’s uncertainty principle does not influence the behavior of quantum particles one iota; they would behave the same way if the principle had never been discovered. The same is not true of the human uncertainty principle. Theories about human behavior can and do influence human behavior. Marxism had a tremendous impact on history, and market fundamentalism is having a similar influence today.” Soros (2003) Preface

This final point has been explored in more detail by Ian Hacking in his discussion of the difference between interactive and indifferent kinds. Both of these views (satisficing and the human uncertainty principle) implicitly recognise that the context in which an actor acts is important. In the perfect world of Planet Finance and its associated mathematical models, context is non-existent.

In response to the problems encountered by LTCM, “Merrill Lynch observed in its annual reports that mathematical risk models ‘may provide a greater sense of security than warranted; therefore, reliance on these models should be limited’”. I think it is clear that humans need to make decisions (whether social, economic, political, or about any resource) based on human understanding derived from empirical observation. Quantitative models will help with this but cannot be used alone, partly because (as numerous examples have shown) it is very difficult to make accurate predictions about future human activity. There are likely general behaviours that we can expect and use in models (e.g. the aim of traders to make a profit). But how those behaviours play out in the different contexts provided by the vagaries of day-to-day events and changes in global economic, political and physical conditions will require multiple scenarios of the future to be examined.

My personal view is that one of the primary benefits of developing quantitative models of human social and economic activity is that they allow us to make explicit our implicitly held models. Developing quantitative models forces us to be structured about our worldview; writing it down (often in computer code) allows others to scrutinise that model, something that is not possible if the model remains implicit. In some situations, such as private financial strategy-making, publishing your model in this way may not be welcome (because it is not beneficial for a competitor to know your model of the world). But in other decision-making situations, for example about environmental resources, this approach will be useful for fostering greater understanding of how the ‘experts’ think the world works.

By writing down their expectations for the forthcoming year, the experts at The Economist are making explicit their understanding of the world. It’s not terribly important that they don’t get everything right – there’s very little possibility of that happening. What is important is that it helps us to think about potential alternative futures: what factors are likely to be most important in determining future events, how these factors and events are (inter)related, and what the current state of the world implies for the likelihood of different future states. This information might then be used to shape the future as we would like it to be, based on informed expectations. Quantitative models of human social and economic activity offer the same kind of opportunity.

Modelling Pharmaceuticals in the Environment

On Friday I spoke at a workshop at MSU on a subject I’m not particularly well acquainted with. Participants in Pharmaceuticals in the Environment: Current Trends and Research Priorities convened to consider the natural, physical, social, and behavioral dimensions of the fate and impact of pharmaceutical products in the natural environment. The primary environmental focus of this issue is the presence of toxins in our water supply as a result of the disposal of human and veterinary medicines. I was particularly interested in what Dr. Shane Snyder had to say about the water issues facing Las Vegas, Nevada.

So what did I have to do with all this? Well, the organisers wanted someone from our research group at the Center for Systems Integration and Sustainability to present some thoughts on how modelling of coupled human and natural systems might contribute to the study of this issue. The audience contained experts from a variety of disciplines (including toxicologists, chemists, sociologists and political scientists), and given my limited knowledge of the subject matter I decided to keep my presentation rather broad in message and content. I drew on several topics I have discussed previously on this blog: the nature of coupled human-natural systems, reasons we might model, and potential risks we face when modelling CHANS.

In particular, I suggested that if prediction of a future system state is our goal, we will be best served by focusing our modelling efforts on the natural system and then using that model with scenarios of future human behaviour to examine the plausible range of states the natural system might take. Alternatively, if we view modelling as an exclusively heuristic tool, we might better envisage the modelling process as a means to facilitate communication between disparate groups of experts or publics, and to explore what different conceptualisations allow and prevent from happening with regard to our stewardship or management of the system. Importantly, in both cases the primary value of modelling CHANS is the act of making our implicitly held models of how the world works explicit by laying down a formal model structure.
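As a toy illustration of the first suggestion (and nothing like the models discussed at the workshop), the Python sketch below drives an invented single-pollutant ‘natural system’ model with three hypothetical scenarios of human disposal behaviour, mapping out a plausible range of outcomes rather than a single prediction:

```python
def pollutant_concentration(initial, years, annual_input, decay_rate=0.2):
    """Toy 'natural system' model: each year the concentration decays by a
    fixed fraction and then receives that year's human input."""
    concentration = initial
    trajectory = [concentration]
    for year in range(years):
        concentration = concentration * (1 - decay_rate) + annual_input(year)
        trajectory.append(concentration)
    return trajectory

# Hypothetical scenarios of human disposal behaviour (annual pharmaceutical input)
scenarios = {
    "business as usual": lambda year: 10.0,
    "take-back scheme":  lambda year: 10.0 * (0.95 ** year),  # inputs gradually fall
    "ageing population": lambda year: 10.0 * (1.03 ** year),  # inputs gradually rise
}

for name, annual_input in scenarios.items():
    trajectory = pollutant_concentration(initial=0.0, years=20, annual_input=annual_input)
    print(f"{name:18s} concentration after 20 years: {trajectory[-1]:.1f}")
```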

There was brief talk towards the end of the meeting about setting up a workshop website that might even contain audio/video recordings of the presentations and discussions that took place. If such a website appears I’ll link to it here. In the meantime, the next meeting I’ll be attending on campus is likely to be the Coupled Human-Natural Systems overview discussion in the Networking for Environmental Researchers program.

Science Fictions

What’s happened to this blog recently? I used to write things like this and this. All I seem to have posted recently are rather vacuous posts about website updates and TV shows I haven’t watched (yet).

Well, one thing that has prevented me from posting recently has been that I’ve spent some of my spare time (i.e., when I’m not at work teaching or having fun with data manipulation and analysis for the UP modelling project) working on a long-overdue manuscript.

Whilst I was visiting the University of Auckland back in 2005, David O’Sullivan, George Perry and I started talking about the benefits of simulation modelling over less-dynamic forms of modelling (such as statistical modelling). Later that summer I presented a paper at the Royal Geographical Society Annual Conference that arose from these discussions. We saw this as our first step toward writing a manuscript for publication in a peer-reviewed journal. Unfortunately, this paper wasn’t at the top of our priorities, and whilst I have tried on occasion since to sit down and write something coherent, it has only been this month [three years later!] that I have managed to finish a first draft.

Our discussions about the ‘added value’ of simulation modelling have focused on the narrative properties of this scientific tool. The need for narratives in scientific fields that deal with ‘historical systems’ has been recognised by several authors previously (e.g. Frodeman in geology), and in his 2004 paper on Complexity Science and Human Geography, David suggested that there was room, if not a need, for greater reference to the narrative properties of simulation modelling.

What inspired me to actually sit down and write recently was some thinking and reading related to the course I’m teaching on Systems Modelling and Simulation. In particular, I was re-acquainting myself with Epstein’s idea of ‘Generative Social Science‘, which seeks to explain the emergence of macroscopic societal regularities (such as norms or price equilibria) from the local interaction of heterogeneous, autonomous agents. The key tool for the generative social scientist is agent-based simulation, in which those agents act in a spatially-explicit environment and possess bounded (i.e. imperfect) information and computing power. The aim is to ‘grow’ (i.e. generate) the observed macroscopic regularity from the ‘bottom up’. In fact, for Epstein this is the key to explanation – the demonstration of a micro-specification (properties or rules of agent interaction and change) able to generate the macroscopic regularity of interest is a necessary condition for explanation. Describing the final aggregate characteristics and effects of these processes without accounting for how they arose from the interactions of the agents is insufficient in the generativist approach.
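As a very stripped-down illustration of the generativist logic (not one of Epstein’s actual models), the Python sketch below ‘grows’ a macroscopic regularity, a shared convention, from repeated local pairwise imitation among agents who each see only the one agent they meet; all names and parameter values are invented:

```python
import random

def grow_convention(n_agents=100, n_steps=50000, seed=1):
    """Toy generative sketch: agents start with heterogeneous conventions and
    interact only in random pairwise encounters (bounded, local information),
    yet a population-wide convention typically emerges from the bottom up."""
    rng = random.Random(seed)
    conventions = [rng.choice([0, 1]) for _ in range(n_agents)]  # heterogeneous start
    for _ in range(n_steps):
        a, b = rng.sample(range(n_agents), 2)  # a local, random encounter
        conventions[a] = conventions[b]        # agent a imitates the agent it met
    return sum(conventions) / n_agents         # share holding convention 1

if __name__ == "__main__":
    # Different random seeds 'grow' different (but typically near-unanimous) outcomes
    for seed in range(3):
        print(f"seed {seed}: share holding convention 1 = {grow_convention(seed=seed):.2f}")
```

Nothing here explains any real social regularity; it only shows the bare form of the generativist demonstration: specify micro-rules, run them, and check whether the macro-pattern appears.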

As I was reading I was reminded of the recent suggestion of the potential for a Generative Landscape Science. Furthermore, the generative approach really seemed to ring true with the critical realist perspective on investigating the world – the understanding that regularity does not imply causation, and that explanation is achieved by identifying causal mechanisms, how they work, and under what conditions they are activated.

Thus, in the paper (or at least the first draft I’ve written – no doubt it will take on several different forms before we submit it for publication!), after discussing the characteristics of the ‘open, middle-numbered’ systems that we study in the ‘historical sciences’, reviewing Epstein’s generative social science and presenting examples of the application of generative simulation modelling (i.e., discrete element or agent-based) to land use/cover change, I go on to discuss how a narrative approach might complement quantitative analysis of these models. Specifically, I look at how narratives could (and do) aid model explanation and interpretation and the communication of these findings to others, and how the development of narratives will help to ‘open up’ the process of model construction for increased scrutiny.

In one part of this discussion I touch upon the keynote speech given by William Cronon at the RGS annual meeting in 2006 about the need for ‘sustainable narratives‘ of the current environmental issues we are facing as a global society. I also briefly look at how narratives might act as mediators between models and society (related to calls for ‘extended peer communities‘ and the like), and highlight where some of the potential problems for this narrative approach lie.

Now, as I’ve only just [!] finished this very rough initial draft, I’m going to leave the story of this manuscript here. David and George are going to chew over what I’ve written for a while and then it will be back to me to try to draw it all together again. As we progress on this iterative writing process, and the story becomes clearer, I’ll add another chapter here on the blog.