Traditional Fire Knowledge in Spain

When you haven’t done something for a while it’s often best not to rush straight back in at the intensity you were at before. So here’s a nice easy blog to get me going again (not that I was blogging intensely before!).

I didn’t blog about it at the time (unsurprisingly), but back in late June 2013 I went to visit a colleague of mine in Madrid, Dr Francisco Seijo. Francisco and I met back at something I did blog about, the 2009 US-IALE conference in Snowbird. Since then we’ve been discussing how we can use the idea of coupled-human and natural systems to investigate Mediterranean landscapes.

Example of Traditional Fire Knowledge. The ‘pile-burning’ technique involves raking, piling and igniting leaves. This contrasts with ‘a manta’ broadcast burning in which leaves and ground litter are burned across larger areas. Photos by the authors of the paper.

After a brief field visit by me, an interview campaign by Francisco, collection of secondary data from other sources (aerial photography and official fire statistics) and some desk analysis, we recently published our first paper on the work. Entitled Forgetting fire: Traditional fire knowledge in two chestnut forest ecosystems of the Iberian Peninsula and its implications for European fire management policy, and published in the journal Land Use Policy, the article presents the results of our mixed-methods and interdisciplinary approach. Building on Francisco’s previous examination of ‘pre-industrial anthropogenic fire regimes’, we set out to investigate differences between the fire regimes and management approaches of chestnut forest ecosystems in two municipalities in central Spain. In the paper we also introduce the ideas of Traditional Ecological Knowledge (TEK) and the related Traditional Fire Knowledge (TFK), and discuss them in light of contemporary fire management approaches in Europe.

The full abstract is below with links to the paper. I’ll stop here now as this rate of blogging is making me quite dizzy (but hopefully I’ll be back for more soon).

----

Seijo, Francisco, James DA Millington, Robert Gray, Verónica Sanz, Jorge Lozano, Francisco García-Serrano, Gabriel Sangüesa-Barreda, and Jesús Julio Camarero (2015) Forgetting fire: Traditional fire knowledge in two chestnut forest ecosystems of the Iberian Peninsula and its implications for European fire management policy. Land Use Policy 47 130-144. doi: 10.1016/j.landusepol.2015.03.006
[Online] [Pre-print]

Abstract

Human beings have used fire as an ecosystem management tool for thousands of years. In the context of the scientific and policy debate surrounding potential climate change adaptation and mitigation strategies, the importance of the impact of relatively recent state fire exclusion policies on fire regimes has been debated. To provide empirical evidence to this ongoing debate we examine the impacts of state fire exclusion policies in the chestnut forest ecosystems of two geographically neighbouring municipalities in central Spain, Casillas and Rozas de Puerto Real. Extending the concept of ‘Traditional Ecological Knowledge’ to include the use of fire as a management tool as ‘Traditional Fire Knowledge’ (TFK), we take a mixed-methods and interdisciplinary approach to argue that currently observed differences between the municipalities are useful for considering the characteristics of “pre-industrial anthropogenic fire regimes” and their impact on chestnut forest ecosystems. We do this by examining how responses from interviews and questionnaire surveys of local inhabitants about TFK in the past and present correspond to the current biophysical landscape state and recent fire activity (based on data from dendrochronological analysis, aerial photography and official fire statistics). We then discuss the broader implications of TFK decline for future fire management policies across Europe particularly in light of the published results of the EU sponsored FIRE PARADOX research project. In locations where TFK-based “pre-industrial anthropogenic fire regimes” still exist, ecosystem management strategies for adaptation and mitigation to climate change could be conceivably implemented at a minimal economic and political cost to the state by local communities that have both the TFK and the adequate social, economic and cultural incentives to use it.

Key words

Fire exclusion policies; traditional ecological knowledge; traditional fire knowledge; Chestnut forest ecosystems; FIRE PARADOX

 

Aspiration, Attainment and Success accepted

Back in February last year I wrote a blog post describing my initial work using agent-based modelling to examine spatial patterns of school choice in some of London’s education authorities. Right at the start of this month I presented a summary of the development of that work at the IGU 2013 Conference on Applied GIS and Spatial Modelling (see the slideshare presentation below). And then this week I had a full paper with all the detailed analysis accepted by JASSS – the Journal of Artificial Societies and Social Simulation. Good news!


One of the interesting things we show with the model, which was not readily apparent at the outset of our investigation, is that parent agents with above average but not very high spatial mobility fail to get their child into their preferred school more frequently than other parents – including those with lower mobility. This is partly due to the differing aspirations of parents to move house to ensure they live in appropriate neighbourhoods, given the use of distance (from home to school) to ration places at popular schools. In future, when better informed by individual-level data and used in combination with scenarios of different education policies, our modelling approach will allow us to more rigorously investigate the consequences of education policy for inequalities in access to education.

I’ve pasted the abstract below and because JASSS is freely available online you’ll be able to read the entire paper in a few months when it’s officially published. Any questions before then, just zap me an email.

Millington, J.D.A., Butler, T. and Hamnett, C. (forthcoming) Aspiration, Attainment and Success: An agent-based model of distance-based school allocation Journal of Artificial Societies and Social Simulation

Abstract
In recent years, UK governments have implemented policies that emphasise the ability of parents to choose which school they wish their child to attend. Inherently spatial school-place allocation rules in many areas have produced a geography of inequality between parents that succeed and fail to get their child into preferred schools based upon where they live. We present an agent-based simulation model developed to investigate the implications of distance-based school-place allocation policies. We show how a simple, abstract model can generate patterns of school popularity, performance and spatial distribution of pupils which are similar to those observed in local education authorities in London, UK. The model represents ‘school’ and ‘parent’ agents. Parental ‘aspiration’ to send their child to the best performing school (as opposed to other criteria) is a primary parent agent attribute in the model. This aspiration attribute is used as a means to constrain the location and movement of parent agents within the modelled environment. Results indicate that these location and movement constraints are needed to generate empirical patterns, and that patterns are generated most closely and consistently when school agents differ in their ability to increase pupil attainment. Analysis of model output for simulations using these mechanisms shows how parent agents with above-average – but not very high – aspiration fail to get their child a place at their preferred school more frequently than other parent agents. We highlight the kinds of alternative school-place allocation rules and education system policies the model can be used to investigate.

Recursion in society and simulation

This week I visited one of my former PhD advisors, Prof John Wainwright, at Durham University. We’ve been working on a manuscript together for a while now and as it’s stalled recently we thought it time we met up to re-inject some energy into it. The manuscript is a discussion piece about how agent-based modelling (ABM) can contribute to understanding and explanation in geography. We started talking about the idea in Pittsburgh in 2011 at a conference on the Epistemology of Modeling and Simulation. I searched through this blog to see where I’d mentioned the conference and manuscript before, but to my surprise, before this post I hadn’t.

In our discussion of what we can learn through using ABM, John highlighted the work of Kurt Gödel and his incompleteness theorems. Not knowing all that much about that stuff I’ve been ploughing my way through Douglas Hofstadter’s tome ‘Gödel, Escher, Bach: An Eternal Golden Braid’ – heavy going in places but very interesting. In particular, his discussion of the concept of recursion has caught my attention, as it’s something I’ve been identifying elsewhere.

The general concept of recursion involves nesting: Russian dolls, stories within stories (as in Don Quixote) and images within images:


Computer programmers take advantage of recursion in their code, calling a given procedure from within that same procedure (hence their love of recursive acronyms like PHP [‘PHP: Hypertext Preprocessor’]). An example of how this works is in Saura and Martinez-Millan’s modified random clusters method for generating land cover patterns with given properties. I used this method in the simulation model I developed during my PhD and have re-coded the original algorithm for use in NetLogo [available online here]. In the code (below) the grow-cover_cluster procedure is called from within itself, allowing clusters of pixels to ‘grow themselves’.
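The NetLogo code itself is embedded in the original post, but the recursive pattern can be sketched in a few lines of Python (my own illustration, loosely in the spirit of the modified random clusters idea rather than a faithful port of the published algorithm):

```python
import random

def grow_cluster(grid, row, col, p, cluster_id):
    """Mark this cell as part of the cluster, then try to spread to each
    unmarked neighbour with probability p, by calling this same procedure."""
    grid[row][col] = cluster_id
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
            if grid[r][c] == 0 and random.random() < p:
                grow_cluster(grid, r, c, p, cluster_id)  # the recursive call

random.seed(1)
grid = [[0] * 20 for _ in range(20)]
grow_cluster(grid, 10, 10, p=0.4, cluster_id=1)
print(sum(row.count(1) for row in grid))  # cells the cluster ‘grew itself’ over
```

Just as with the grow-cover_cluster procedure, the cluster ‘grows itself’: each newly marked cell is responsible for recruiting its own neighbours.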


However, rather than get into the details of the use of recursion in programming, I want to highlight two other ways in which recursion is important in social activity and its simulation.

The first is in how society (and social phenomena) has a recursive relationship with the people (and their activities) composing it. For example, Anthony Giddens’ theory of structuration argues that the social structures (i.e., rules and resources) that constrain or prompt individuals’ actions are also ultimately the result of those actions. Hence, there is a duality of structure, which is:

“the essential recursiveness of social life, as constituted in social practices: structure is both medium and outcome of reproduction of practices. Structure enters simultaneously into the constitution of the agent and social practices, and ‘exists’ in the generating moments of this constitution”. (p.5 Giddens 1979)

Another example comes from Andrew Sayer in his latest book ‘Why Things Matter to People’, which I’m also currently working my way through. One of Sayer’s arguments is that we humans are “evaluative beings: we don’t just think and interact but evaluate things”. For Sayer, these day-to-day evaluations have a recursive relationship with the broader values that individuals hold, values being ‘sedimented’ valuations, “based on repeated particular experiences and valuations of actions, but [which also tend], recursively, to shape subsequent particular valuations of people and their actions” (p.26 Sayer 2011).

However, while recursion is often used in computer programming and has been suggested as playing a role in different social processes (like those above), its examination in social simulation and ABM has not been so prominent to date. This was a point made by Paul Thagard at the Pittsburgh epistemology conference. Here, it seems, is an opportunity for those seeking to use simulation methods to better understand social patterns and phenomena. For example, in an ABM how do the interactions between individual agents combine to produce structures which in turn influence future interactions between agents?
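As a toy illustration of that closing question (entirely my own sketch, not taken from any of the work discussed above), here agents’ individual choices produce an aggregate ‘norm’, a simple stand-in for structure, which then feeds back to shape each agent’s next choice:

```python
import random

random.seed(42)
agents = [random.choice([0, 1]) for _ in range(100)]  # two possible practices

for step in range(50):
    norm = sum(agents) / len(agents)  # ‘structure’ emerging from agents’ actions
    # ...which in turn influences each agent's next action (conformity pressure)
    agents = [1 if random.random() < norm else 0 for _ in agents]

print(sum(agents))  # the population tends to lock in to one practice or the other
```

Even in something this crude, the duality is visible: the norm is nothing but the agents’ past actions, yet it constrains their future ones.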

Second, it seems to me that there are potentially recursive processes surrounding any single simulation model. For if those we simulate should encounter the model in which they are represented (e.g., through participatory evaluation of the model), and if that encounter influences their future actions, do we not then need to account for such interactions between model and modelee (i.e., the person being modelled) in the model itself? This is a point I raised in the chapter I helped John Wainwright and Dr Mark Mulligan re-write for the second edition of their edited book “Environmental Modelling: Finding Simplicity in Complexity”:

“At the outset of this chapter we highlighted the inherent unpredictability of human behaviour and several of the examples we have presented may have done little to persuade you that current models of decision-making can make accurate forecasts about the future. A major reason for this unpredictability is because socio-economic systems are ‘open’ and have a propensity to structural changes in the very relationships that we hope to model. By open, we mean that the systems have flows of mass, energy, information and values into and out of them that may cause changes in political, economic, social and cultural meanings, processes and states. As a result, the behaviour and relationships of components are open to modification by events and phenomena from outside the system of study. This modification can even apply to us as modellers because of what economist George Soros has termed the ‘human uncertainty principle’ (Soros 2003). Soros draws parallels between his principle and the Heisenberg uncertainty principle in quantum mechanics. However, a more appropriate way to think about this problem might be by considering the distinction Ian Hacking makes between the classification of ‘indifferent’ and ‘interactive’ kinds (Hacking, 1999; also see Hoggart et al., 2002). Indifferent kinds – such as trees, rocks, or fish – are not aware that they are being classified by an observer. In contrast humans are ‘interactive kinds’ because they are aware and can respond to how they are being classified (including how modellers classify different kinds of agent behaviour in their models). Whereas indifferent kinds do not modify their behaviour because of their classification, an interactive kind might. This situation has the potential to invalidate a model of interactive kinds before it has even been used. For example, even if a modeller has correctly classified risk-takers vs. risk avoiders initially, a person in the system being modelled may modify their behaviour (e.g., their evaluation of certain risks) on seeing the results of that behaviour in the model. Although the initial structure of the model was appropriate, the model may potentially later lead to its own invalidity!” (p. 304, Millington et al. 2013)

The new edition was just published this week and will continue to be a great resource for teaching at upper levels (I used the first edition in the Systems Modeling and Simulation course I taught at MSU, for example).

More recently, I discussed these ideas about how models interact with their subjects with Peter McBurney, Professor in Informatics here at KCL. Peter has written a great article entitled ‘What are Models For?’, although it’s somewhat hidden away in the proceedings of a conference. In a similar manner to Epstein, Peter lists the various possible uses for simulation models (other than prediction, which is only one of many) and also discusses two uses in more detail – mensatic and epideictic. The former function relates to how models can bring people around a metaphorical table for discussion (e.g., for identifying and potentially deciding about policy trade-offs). The other, epideictic, relates to how ideas and arguments are presented and leads Peter to argue that representing real world systems in a simulation model can force people to “engage in structured and rigorous thinking about [their problem] domain”.

John and I will be touching on these ideas about the mensatic and epideictic functions of models in our manuscript. However, beyond this discussion, and of relevance here, Peter discusses meta-models. That is, models of models. The purpose here, and continuing from the passage from my book chapter above, is to produce a model (B) of another model (A) to better understand the relationships between Model A and the real intelligent entities inside the domain that Model A represents:

“As with any model, constructing the meta-model M will allow us to explore “What if?” questions, such as alternative policies regarding the release of information arising from model A to the intelligent entities inside domain X. Indeed, we could even explore the consequences of allowing the entities inside X to have access to our meta-model M.” (p.185, McBurney 2012)

Thus, the models are nested with a hope of better understanding the recursive relationship between models and their subjects. Constructing such meta-models will likely not be trivial, but we’re thinking about it. Hopefully the manuscript John and I are working on will help further these ideas, as does writing blog posts like this.

Selected Reference
McBurney (2012): What are models for? Pages 175-188, in: M. Cossentino, K. Tuyls and G. Weiss (Editors): Post-Proceedings of the Ninth European Workshop on Multi-Agent Systems (EUMAS 2011). Lecture Notes in Computer Science, volume 7541. Berlin, Germany: Springer.

Millington et al. (2013) Representing human activity in environmental modelling In: Wainwright, J. and Mulligan, M. (Eds.) Environmental Modelling: Finding Simplicity in Complexity. (2nd Edition) Wiley, pp. 291-307 [Online] [Wiley]

Social simulation: what criticism do we get?

This week on the SIMSOC listserv was a request from Annie Waldherr & Nanda Wijermans for modellers of social systems to complete a short questionnaire on the sort of criticism they receive. The questionnaire is only two short questions, one asking what field you are in and the other asking you to ‘Describe the criticism you receive. For instance, recall the questions or objections you got during a talk you gave. Feel free to address several points.’

Here was my quick response to the second question:

1) Too many ‘parameters’ in agent-based models (ABM) make them difficult to analyse rigorously and to fully appreciate the uncertainty of (although I think this kind of statement highlights the misunderstanding some have of how ABM can be structured – often models of this type rely more on rules of interactions between agents than on individual parameters).

2) The results of models are seen as being driven more by the assumptions of the modeller than by the state of the real world. That is, modellers may learn a lot about their models but not much about the real world (see the similar point made by Grimm [1999] in Ecological Modelling 115).

I think it would have been nice to have a third question offering an opportunity to suggest how we can, or should, respond to these criticisms. Here’s what I would have written if that third question were there:

To address point 1) above we need to make sure that we:

i) document our models comprehensively (e.g., via ODD) so that others understand model structure and can identify likely important parameters/rules and assumptions;

ii) show that the model parameter space has been widely explored (e.g., via use of techniques like Latin hypercube sampling).

To address 2) we need to make sure that:

iii) when documenting our models (see i) we fully justify the rationale of our models, hopefully with reference to real world data;

iv) we acknowledge and emphasise that the current state of ABM means that usually they can be no more than metaphors or sophisticated analogies for the real world but that they are useful for providing alternative means to think about social phenomena (i.e., they have heuristic properties).
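On point ii), a Latin hypercube divides each parameter’s range into as many equal strata as there are samples and draws one value per stratum, guaranteeing every parameter’s full range gets covered. A minimal sketch of the idea (my own, using only the standard library; dedicated packages exist for serious use):

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Return n_samples points in the unit hypercube such that, for every
    parameter, exactly one sample falls in each of the n_samples strata."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        # one draw per stratum [i/n, (i+1)/n), then shuffle to pair strata
        # randomly across parameters
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))  # one tuple of parameter values per sample

samples = latin_hypercube(n_samples=10, n_params=3)
print(len(samples))  # → 10
```

Each sample would then be rescaled to the real parameter ranges before running the model, giving far better coverage per model run than naive random sampling.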

If you’re working in this area go and share your thoughts by completing the short questionnaire, or leaving comments below.

Modelling Spatial Patterns of School Choice

A couple of weeks ago I visited King’s Department of Education to give a seminar entitled Agent-based simulation for distance-based school allocation policy analysis. The aim was to introduce agent-based modelling to those unfamiliar with it and hopefully open a debate on how it might be used in future education research. This all came about as I’ve been working on modelling the drivers and consequences of school choice with Profs Chris Hamnett and Tim Butler here in King’s Geography Department.

Hackney School Admissions Brochure

In their recent research, Chris and Tim looked at the role geography plays in educational inequalities in East London. Many UK local education authorities (LEAs) use spatial distance as a key criterion in their policy for allocating school places: people that live closer to a school get allocated to it before those that live farther away. This is necessary because it’s often the case that more people want to send their children to a school than there are places available at it. For example, you can read about the criteria the Hackney LEA uses in their brochure for 2012.

Using data from several LEAs, Chris and Tim showed empirically how this distance criterion is related to school popularity. School popularity is indicated for example by the ratio of school applicants to the number of places available at the school (A:P) – some schools have very high ratios (e.g. up to 8 applications per place) and others very low (e.g. down to around one application per place). Furthermore, this spatial allocation criterion is an important influence on parents’ strategies for school applications, dependent on the location of their home relative to schools and their ability to move home.

These allocation rules, combined with parents’ strategies, produce patterns and relationships between schools’ GCSE achievement levels, A:P ratio and the maximum distance that allocated pupils live from the school. In Barking, for example, we see in the figure below that more popular schools have higher percentages of pupils achieving five GCSEs at grades A*-C, and that these same popular schools also have the smallest maximum distances (i.e. pupils generally live very close to the school).

Empirical Patterns in Barking Schools

This spatial pattern can also be seen when we look at maps of the locations of successful and unsuccessful applicants to popular and less popular schools in Hackney. For example, looking at the figure below (found in Hamnett and Butler 2011) we can see how successful applicants to The Bridge Academy (a popular school) are more tightly clustered around it than those for Clapton Girls’ Technology College (not such a popular school).

Map of successful and unsuccessful applicants to two schools in Hackney

The geography of this school allocation policy, combined with differences in parents’ circumstances, suggests this issue is a prime candidate for study using agent-based modelling. Agent-based simulation modelling might be useful here because it provides a means to represent interactions between individual actors with different attributes (in this case schools and parents) across space and time. Once the simulation model structure (e.g. rules of interactions between agents) has been established, it can then be used to examine the potential effects of things like opening or closing schools (i.e. changes in external conditions) or changes in school allocation policy rules or parents’ application strategies (i.e. internal model relationships and rules).
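To make the allocation rule itself concrete, here is a hypothetical sketch (applicant names and coordinates invented for illustration, not the actual model code): the school sorts applicants by home-to-school distance and fills its places from the nearest outwards.

```python
import math

def allocate(school_xy, places, applicants):
    """applicants is a list of (name, (x, y)) home locations; returns the
    names of the successful applicants under the distance criterion."""
    by_distance = sorted(applicants, key=lambda a: math.dist(a[1], school_xy))
    return [name for name, _ in by_distance[:places]]

applicants = [("A", (0, 1)), ("B", (5, 5)), ("C", (1, 0)), ("D", (2, 2))]
print(allocate(school_xy=(0, 0), places=2, applicants=applicants))  # → ['A', 'C']
```

Oversubscription falls out immediately: four applicants, two places, and the two living closest get them, which is exactly the incentive to move house that the model explores.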

I developed an initial ‘model’ as a proof of concept, which you can try out yourself. Things have progressed from that proof of concept model, and the model now represents changes in cohorts of school applicants and pupils through time, including the potential for parents to move house to be more likely to get their child into a desired school.

In the seminar with the Department of Education guys I presented some output from the recent modelling. I showed how the abstract model, with relatively few and simple assumptions, can start from random conditions to reproduce empirical spatial patterns in school applications and attainment outcomes like those described above (see the figure below).

School model screenshot

I also presented early results from using the simulation model to explore implications of potential policy alternatives (such as closing failing schools). These ideas were generally welcomed in the seminar but there were some interesting questions about what the model assumptions might entail for maintaining existing policy assumptions and intentions (what we might term the rhetoric of modelling).

I’m exploring some of these questions now, including for example issues of how we define a ‘good’ school and how parents’ school application strategies might change as allocation rules change. These will feed into a research manuscript that I’ll continue to work on with Chris and Tim.

ABM, Prezi and the New Term

I’ve not been in the office much over the last month or so, but that’s all about to change now that the new academic term has arrived!

Since I last posted, I attended and presented work at the Royal Geographical Society Annual Conference, one presentation on our managed forest landscape modelling in Michigan and one on the narrative properties of simulation modelling. Both presentations were in the environmental modelling and decision making session, but despite being the graveyard session (last of the conference!) we had some interesting questions and discussion. I tried out Prezi for my narratives presentation (brought to my attention by Tom Smith). It certainly requires a different approach than the linear style PowerPoint enforces. Whether Prezi is a more useful tool probably depends on the message you’re trying to communicate – if your story isn’t particularly linear then Prezi might be useful.

These last few days I’ve been up in Edinburgh visiting folks at the Forestry Commission’s Northern Research Station to discuss the socio-ecological modelling of potential woodland creation I’ve been working on recently. I also got to talk with Derek Robinson at the University of Edinburgh about some of these issues. Everyone seemed interested in what I’ve been doing, particularly with the ideas I’ve been bouncing around relating to the work Burton and Wilson have been doing on post-productivist farmer self-identities, how these self-identities might change, how they might influence adoption of woodland planting and how we might model that. For example, I think an agent-based simulation approach might be particularly useful for exploring what Burton and Wilson term the ‘‘temporal discordance’ in the transition towards a post-productivist agricultural regime”. And I also think there’s potential to tie it in with work like my former CSIS colleague Xiaodong Chen has been doing using agent-based approaches to model the effects of social norms on enrollment in payments for ecosystem services (such as woodland creation).

I was away on holiday for a couple of weeks after the RGS. On returning, I’ve been preparing for King’s Geography tutorials with the incoming first year undergraduates. The small groups we’ll be working in will allow us to discuss and explore critical thinking and techniques around issues and questions in physical geography. Looking forward to a busy autumn term!

Leverhulme Early Career Fellowship

Around the time I wrote this blog about the National Assessment of UK Forestry and Climate Change Steering Group report I was thinking about writing a proposal to the Leverhulme Trust for an Early Career Fellowship. I found out recently that my proposal was successful and so from January 2011 I will be back at King’s College, London!

The Leverhulme Trust makes awards in support of research and education with special emphasis on original and significant research that aims to remove barriers between traditional disciplines. Their Early Career Fellowships are awarded across all disciplines and in 2010 approximately 70 were expected to be awarded to individuals to hold at universities in the UK. Given the emphasis on original, significant and cross-disciplinary research made by the Trust I looked for something that matched my research skills in coupled human and natural systems modelling but that pushed work in that area in a new direction. I thought back to the ideas about model narratives I have previously explored with David O’Sullivan and George Perry (but have not worked on since then) and Bill Cronon’s plenary address at the Royal Geographical Society in 2006 on the need for ‘sustainable narratives’. With that in mind, and given the UK Forestry and Climate change report I had been reading, I decided to make a pitch for a project that would explore how narratives from the use of models could help individuals identify how local actions transcend scales to mitigate global climate change in the context of the anticipated woodland planting that will be ongoing in the UK in future years. It proved to be a successful pitch!

I’m sure I will blog plenty more about the project in the future, so for now I will just leave you with the proposal rationale (below). I’m looking forward to getting to work on this when I get back to London, but before that there’s plenty more things to get done on the Michigan forest landscape ecological-economic modelling.

Model narratives for climate change mitigation
The abstract, vast, and systemic narratives that dominate the issue of global climate change do little to illustrate to individuals and groups how their actions might contribute to mitigate the effects of what is often framed as a global problem (Cronon 2006). Ways to improve the ability of individuals and groups to identify how their local actions transcend scales to mitigate global climate change are needed. In this research I will explore how narratives produced from computer simulation models that represent individuals’ actions can provide people with insights into how their behaviour affects system properties at a larger scale. Although the narrative properties of simulation models have been highlighted (O’Sullivan 2004), the use of models to develop localised narratives of climate change which emphasise individual agency has yet to be explored. Confronting individuals with these narratives will also help researchers reveal important underlying, and possibly implicitly held, assumptions that influence choices and behaviour.

This research will address the following general questions:

  • How can computer simulation models be better used to reveal to individuals how their local actions can contribute to global environmental issues such as Climate Change Mitigation (CCM)?
  • What are the narrative properties of simulation models and how can they be exploited to help individuals find meaning about their actions as they relate to global climate change?
  • By using simulation tools to spur reflection, what can we learn about the factors influencing individuals’ choices and behaviour with regard to CCM options?

Answering these questions will require a uniquely interdisciplinary research approach that spans the physical sciences, social sciences and humanities. Such ground-breaking, boundary-crossing work is necessary if we are to re-connect the physical sciences with the publics they intend to benefit and find solutions to large-scale and pressing environmental problems. For example, one of the key findings from a recent report by the National Assessment of UK Forestry and Climate Change Steering Group (Read et al. 2009) was that “[t]he extent to which the potential for additional [greenhouse gas] emissions abatement through tree planting is realized … will be determined in large part by economic forces and society’s attitudes rather than by scientific and technical issues alone” (p.xvii). The report also argued the need “to better understand and consider the role of different influences affecting choices and behaviour. Without the appropriate emotional, cultural or psychological disposition, information will make no difference.” (p.210). Narratives based on scientific understanding which portray how individuals can make a difference to large-scale, diffuse environmental issues will be important for fostering such a disposition. Simulation models – quantitative representations of reality which provide a means to logically examine how high-level and large-scale patterns are generated by lower-level and smaller-scale processes and events – have the potential to contribute to the construction of these narratives.

Social Network Analysis

As I mentioned in a tweet earlier this week, Prof. Ken Frank was ‘visiting’ CSIS this week. Ken studies organizational change and innovation using, amongst other methods, Social Network Analysis (SNA). SNA examines how the structure of ties between people affects individuals’ behaviour, how social network structure and composition influence the social norms of a group, and how resources (of information, for example) flow through a social network. This week Ken organised a couple of seminars on the use of SNA to investigate natural resource decision-making (for example, in small-scale fisheries) and I joined a workshop he ran on how we actually go about doing SNA, learning about software like p2 and KliqueFinder. Ken showed us the two main models: the selection model and the influence model. The former addresses network formation, examining individuals’ networks and how they choose them. The latter examines how individuals are influenced by the people in their network and the consequences for their behaviour. As an example of how SNA might be used, take a look at this executive summary [pdf] of the thesis of a recent graduate student from MSU Fisheries and Wildlife.
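To make the influence model a little more concrete, here’s a minimal sketch of the general idea (my own toy example with invented names and data, not the p2 or KliqueFinder implementations): compute each individual’s ‘network exposure’ – the mean behaviour of their network neighbours – and treat next-period behaviour as a mix of their own prior behaviour and that exposure.

```python
# Toy friendship network among five individuals (hypothetical adjacency list)
network = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C", "E"],
    "E": ["D"],
}

# Hypothetical behaviour scores at time t (e.g. 1 = 'commercial', 0 = 'traditional')
behaviour = {"A": 1.0, "B": 1.0, "C": 0.0, "D": 0.0, "E": 1.0}

def network_exposure(node):
    """Mean behaviour of a node's network neighbours -- the core
    covariate in a simple influence model."""
    neighbours = network[node]
    return sum(behaviour[n] for n in neighbours) / len(neighbours)

def next_behaviour(node, weight=0.5):
    """Behaviour next period as a weighted mix of the individual's own
    prior behaviour and their neighbours' (the influence model idea)."""
    return (1 - weight) * behaviour[node] + weight * network_exposure(node)
```

Here `next_behaviour("A")` gives 0.75: individual A (score 1.0) is pulled towards their neighbours’ mean of 0.5. Real influence models estimate the weight statistically rather than fixing it.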

On Friday, after having been introduced through the week to what SNA is, I got to chat with Ken about how it might relate to the agricultural decision-making modelling I did during my PhD. In my agent-based model I used a spatial neighbourhood rule to represent the influence of social norms (i.e. whether a farmer is ‘traditional’ or ‘commercial’ in my categories). However, the social network of farmers is not solely determined by spatial relationships – farmers have kinship ties and might meet other individuals at the market or in the local cerveceria. We discussed how I might be able to use SNA to better represent the influences of other farmers on an individual’s decision-making in my model. I don’t have the network data needed to do this right now but it’s something to think about for the future.
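A rough sketch of how that might look in a model: keep the spatial neighbourhood rule, but take the set of influencing farmers as the union of spatial neighbours and non-spatial ties. All names, positions and ties below are invented purely for illustration, not taken from my model’s data.

```python
# Hypothetical farmer locations on a grid and non-spatial (e.g. kinship) ties
positions = {"A": (0, 0), "B": (0, 1), "C": (5, 5), "D": (1, 0)}
kinship_ties = {"A": {"C"}, "B": set(), "C": {"A"}, "D": set()}

def spatial_neighbours(farmer, radius=1.5):
    """Farmers within a Euclidean distance 'radius' of the focal farmer
    (the kind of spatial neighbourhood rule used in the original ABM)."""
    fx, fy = positions[farmer]
    return {other for other, (x, y) in positions.items()
            if other != farmer
            and ((x - fx) ** 2 + (y - fy) ** 2) ** 0.5 <= radius}

def influence_set(farmer):
    """Farmers whose norms influence the focal farmer: spatial
    neighbours plus kin and acquaintances met off-farm."""
    return spatial_neighbours(farmer) | kinship_ties[farmer]
```

So farmer A, spatially adjacent only to B and D, is also influenced by distant kin C – an effect the purely spatial rule would miss.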

If I’d been more aware of SNA previously I may have incorporated some discussion of it into the book chapter I re-wrote recently for Environmental Modelling. In that chapter I focused on the increasing importance of behavioural economics for investigating and modelling the relationships between human activity and the environment. SNA is certainly something to add to the toolbox and seems to be on the rise in natural resources research. Something else I missed whilst working on re-writing that chapter was the importance of behavioural economics to David Cameron‘s ‘Big Society’ idea. He seems to be aware of the lessons we’ve started learning from things like social network analysis and behavioural economics – now he’s in charge maybe we’ll start seeing some direct application of those lessons to UK public policy.

The Omnivores’ Trifecta: A feast of ideas

This week I went to a seminar presented by Dr Richard Bawden of the Systemic Development Institute, Australia. This was the first event in MSU’s “conversation about our food future”. It turned out to be much more interesting than I had hoped; Bawden is an engaging and charismatic speaker who presented a thoughtful perspective on what he termed ‘The Omnivores’ Trifecta’: Agriculture, Food and Health and the Systemic Relationships between them. He covered a hearty spread of ideas, so I’ll recap his most interesting points in bite-sized pieces:

i) Bawden suggested that Agriculture, Food and Health (A-F-H) when considered separately are not a system. But by understanding each as a discourse (i.e. as a subject for “formal discussion or debate”) they can be viewed from a systemic perspective.

ii) At the intersection of these three subjects are four very important (sub-)discourses which Bawden termed the “engagement discourse subsystem”. These are: business, lay citizens, governance, and experts.

iii) Bawden proposed that it is the profound differences in episteme (worldview) between these discourse ‘subsystems’ that are at the heart of the majority of the conflicts across the A-F-H system and the environment in which it is situated.

iv) These epistemic differences are so profound as to be polemic. Bawden bemoaned this fact and highlighted that “Dialectic yields to Polemic“. He emphasised that dialectics are the only way forward to forge a world in common, and that polemics prevent deliberation and debate and kill democracy.

v) To illustrate these points Bawden used the case of Australian agriculture since the mid-20th century. He described this case as being characteristic of many messy, wicked problems and argued that reductionist science alone was insufficient to bring resolution (and hence is why he founded the Systemic Development Institute). During this argument he quoted Beck but questioned whether we have reached second modernity. Bawden argued that the “culture of technical control” still prevailing within current modernist society has an episteme that privileges fact over value, analysis over synthesis, individualism over communalism, teaching over learning and productionism over sustainablism.

vi) On these last two dichotomies, Bawden suggested that the question of what is to be sustained (and therefore what sustainability is) is a moral question not a technical one.

vii) He proposed that higher education is about learning differently not learning more; the ability to look at the world and make sense of it for oneself (and then take action in response) is what characterises a good education. Awareness of the presence of different worldviews is key to this ability. Furthermore, Bawden argued that the complete learner will be prepared to enter a form of learning that the academy is currently unable to provide because it is too reductionist. This learning would require critical reflection of one’s own worldview, as Jack Mezirow has proposed.

viii) Bawden then presented the diagram that synthesises his message (see below). This diagram describes the “integrated process of the critical learning system” and shows how perceiving, understanding, planning and acting are connected within our rational experience of the world and how they are linked to the intuitive facets of learning.


Quite the feast of ideas eh? I’m still digesting them and might be for a while. But the key message I take away from this is a post-normal one; in learning about human-environment interactions and to solve current wicked problems, inter-epistemic as well as inter-disciplinary work will be needed. Although different scientific disciplines such as ecology, biology, and chemistry have different terminology and conventions, they share a worldview – the one that favours facts over values and aims to subsume empirical observations into universal laws and theories. Other worldviews are available. Inter-epistemic human-environment study would seek to cross the boundaries between worldviews, recognize that reductionist science is only one way to understand the world and is unlikely to provide complete answers to wicked problems, and emphasise dialectics over polemics.

Putting decision-making in context

A while back I wrote about how it takes all sorts to make a world and why we need to account for those different sorts in our models of it. One of the things that I highlighted in that post was the need for mainstream economics to acknowledge and use more of the findings from behavioural economists.

One of the examples I used in the draft of the book chapter I have been writing for the second edition of Wainwright and Mulligan’s Environmental Modelling was the paper by Tversky and Kahneman, The Framing of Decisions and the Psychology of Choice. They showed how the way in which a problem is framed can influence human decision-making and causes problems for rational choice theory. In one experiment Tversky and Kahneman asked people if they would buy a $10 ticket on arriving at the theatre when finding themselves in two different situations:

i) they find they have lost $10 on the way to the theatre,
ii) they find they have lost their pre-paid $10 ticket.

In both situations the person has lost the value of the ticket ($10) and under neoclassical economic assumptions should behave the same when deciding whether to buy a ticket when arriving at the theatre. However, Tversky and Kahneman found that people were more likely to buy a ticket in the first situation (88%) than buying a (replacement) ticket in the second (46%). They suggest this behaviour is due to human ‘psychological accounting’, in which we mentally allocate resources to different purposes. In this case people are less willing to spend money again on something they have already allocated to their ‘entertainment account’ than if they have lost money which they allocate to their ‘general expenses account’.
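Purely as an illustration, this ‘psychological accounting’ idea can be caricatured in a toy decision rule (entirely my own hypothetical formulation, not Tversky and Kahneman’s model): the agent tracks separate mental budgets and only buys the ticket if the ‘entertainment account’ still covers it after the loss is debited from whichever account it was allocated to.

```python
# Hypothetical mental budgets allocated by the theatre-goer
accounts = {"entertainment": 10.0, "general": 50.0}

def buys_ticket(price, lost_from):
    """Debit the $10 loss from the account it was mentally allocated to,
    then buy only if the entertainment budget still covers the price."""
    budget = dict(accounts)       # copy so repeated calls are independent
    budget[lost_from] -= price    # the loss on the way to the theatre
    return budget["entertainment"] >= price

buys_ticket(10.0, "general")        # lost cash: entertainment budget intact -> buys
buys_ticket(10.0, "entertainment")  # lost ticket: budget exhausted -> doesn't buy
```

The toy rule reproduces only the qualitative asymmetry between the two situations, of course, not the observed 88% versus 46% response rates, which reflect a population of decision-makers rather than a deterministic rule.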

More recently, Galinsky and colleagues examined how someone else’s irrational thought processes can influence our own decision-making. In their study they asked college students to take over decision-making for a fictitious person they had never met (the students were unaware the person was fictitious).

In one experiment, the volunteers watched the following scenario play out via text on a computer screen: the fictitious decision-maker tried to outbid another person for a prize of 356 points, which equaled $4.45 in real money. The decision-maker started out with 360 points, and every time the other bidder upped the ante by 40 points, the decision-maker followed suit. Volunteers were told that once the decision-maker bid over 356 points, he or she would begin to lose some of the $12 payment for participating in the study.

When the fictitious decision-maker neared this threshold, the volunteers were asked to take over bidding. Objectively, the volunteers should have realized that – like the person who makes a bad investment in a ‘fixer-upper’ – the decision-maker would keep throwing good money after bad. But the volunteers who felt an identification with the fictitious player (i.e., those told by the researchers that they shared the same month of birth or year in school) made almost 60% more bids and were more likely to lose money than those who didn’t feel a connection.

Are we really surprised that neoclassical economic models often fall down? Accounting for seemingly irrational human behaviour may make the representation of human decision-making more difficult, but increasingly it seems irrational not to do so.