Global Change Blog

This week I discovered a new blog that looks worth following for anyone interested in human-environment interactions, sustainability, or CHANS. The Global Change blog intends to explore big questions about society and environmental change, such as:

  • How do personal choices and values play a role in this conversation?
  • What do the natural sciences have to say about the way our world is changing?
  • What do the social sciences and humanities have to say about the ways that the social and the cultural intersect with questions surrounding environment?
  • How can we address environmental and social challenges at the same time?
  • How is environmentalism changing in response to these pressures?
  • What’s the role of higher education in facilitating sustainability and environmental literacy?

So far the blog has posted a mix of thoughtful original writing (for example on reasons why people don’t engage with climate change) and brief highlights of other work. Hope they keep it coming!

It takes all sorts

Neoclassical economics, both its assumptions and its ability to forecast future economic activity, has been taking a bit of a panning recently. Back near the start of this most recent economic downturn, Jean-Philippe Bouchaud argued that neoclassical economists need to develop more pragmatic and realistic representations of what actually happens in ‘wild’ and messy free markets. And at the start of this year I highlighted how Niall Ferguson has stressed the importance of considering history in economic markets and decision-making. In both cases the criticism is that some economists have been blinded by the beauty of their elegant models and have failed to see where their assumptions and idealizations fail to match what’s happening in the real world. Most recently, Paul Krugman argued that ‘flaws-and-frictions economics’ (emphasizing imperfect decision-making and rejecting ideas of a perfectly free ‘friction-less’ market) must become more important. Krugman (‘friend’ of Niall Ferguson) suggests that mainstream economics needs to become more ‘behavioural’, and follow the lead of the behavioural economists who incorporate social, cognitive and emotional factors into their analyses of human decision-making.

The view from the Nature editors on all this is that in the future agent-based modelling will be an important tool to inform economic policy. In many ways agent-based modelling is very well suited to building more ‘behaviour’ into economics. For example, agent-based modelling provides the ability to represent several types of agent, each with their own rules for decision-making, potentially based on their own life-histories and circumstances (in contrast to the single perfectly rational ‘representative agent’ of neoclassical economics). Farmer and Foley, in their opinion piece in the same issue of Nature, are keen:

“Agent-based models potentially present a way to model the financial economy as a complex system, as Keynes attempted to do, while taking human adaptation and learning into account, as Lucas advocated. Such models allow for the creation of a kind of virtual universe, in which many players can act in complex — and realistic — ways. … To make agent-based modelling useful we must proceed systematically, avoiding arbitrary assumptions, carefully grounding and testing each piece of the model against reality and introducing additional complexity only when it is needed. Done right, the agent-based method can provide an unprecedented understanding of the emergent properties of interacting parts in complex circumstances where intuition fails.”

At the very least, our agent-based models need to improve upon the homogenizing assumptions of neoclassical economics. It takes all sorts to make a world — we need to do a better job of accounting for those different sorts in our models of it.
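The heterogeneity point can be made concrete with a toy sketch. The two agent types, their decision rules and the price-impact mechanism below are all illustrative assumptions of mine, not anyone’s published model; the point is only that agents with different rules (‘all sorts’) can be represented directly, which a single representative agent cannot do.

```python
import random

class Fundamentalist:
    """Buys when the price is below its private view of fundamental value, sells above it."""
    def __init__(self, value):
        self.value = value
    def demand(self, price):
        return 1 if price < self.value else -1

class NoiseTrader:
    """Ignores fundamentals and trades on a random 'signal'."""
    def demand(self, price):
        return random.choice([1, -1])

def simulate(agents, price=100.0, steps=50, impact=0.2):
    """Each step, the net demand across all agents nudges the price up or down."""
    prices = [price]
    for _ in range(steps):
        net = sum(agent.demand(price) for agent in agents)
        price += impact * net
        prices.append(price)
    return prices

random.seed(42)
# A mixed population -- each fundamentalist holds its own valuation
agents = [Fundamentalist(random.gauss(100, 5)) for _ in range(20)] \
       + [NoiseTrader() for _ in range(20)]
prices = simulate(agents)
print(f"Price after {len(prices) - 1} steps: {prices[-1]:.2f}")
```

Because each agent carries its own state and rule, swapping in new ‘sorts’ of agent (imitators, budget-constrained households, learners) is just a matter of adding another class.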

What is the point… of social simulation modelling?

Previously, I mentioned a thread on SIMSOC initiated by Scott Moss. He asked ‘Does anyone know of a correct, real-time, [agent] model-based, policy-impact forecast?’. Following on from the responses to that question, earlier this week he started a new thread entitled ‘What’s the Point?’:

“We already know that economic recessions and recoveries have probably never been forecast correctly — at least no counter-examples have been offered. Similarly, no financial market crashes or recoveries or significant shifts in market shares have ever, as far as we know, been forecast correctly in real time.

I believe that social simulation modelling is useful for reasons I have been exploring in publications for a number of years. But I also recognise that my beliefs are not widely held.

So I would be interested to know why other modellers think that modelling is useful or, if not useful, why they do it.”

After reading others’ responses I decided to reply with my own view:

“For me prediction of the future is only one facet of modelling (whether agent-based or any other kind) and not necessarily the primary use, especially with regard to policy modelling. This view stems partly from the philosophical difficulties outlined by Oreskes et al. (1994), amongst others. I agree with Mike that the field is still in the early stages of development, but I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world’. As Pablo suggested, if we are to predict the future the inherent uncertainties will be best highlighted and accounted for by ensuring predictions are tied to a probability.”

I also highlighted the reasons offered by Epstein and outlined a couple of other reasons I think ABM are useful.

There was a brief response to mine and then another, more assertive, response that (I think) highlights a common confusion between the different uses of prediction in modelling:

“If models of economic policy are fundamentally unable to at some point predict the effects of policy — that is, to in some measure predict the future — then, to be blunt, what good are they? If they are unable to be predictive then they have no empirical, practical, or theoretical value. What’s left? I ask that in all seriousness.

Referring to Epstein’s article, if a model is not sufficiently grounded to show predictive power (a necessary condition of scientific results), then how can it be said to have any explanatory power? Without prediction as a stringent filter, any amount of explanation from a model becomes equivalent to a “just so” story, at worst giving old suppositions the unearned weight of observation, and at best hitting unknowably close to the mark by accident. To put that differently, if I have a model that provides a neat and tidy explanation of some social phenomena, and yet that model does not successfully replicate (and thus predict) real-world results to any degree, then we have no way of knowing if it is more accurate as an explanation than “the stars made it happen” or any other pseudo-scientific explanation. Explanations abound; we have never been short of them. Those that can be cross-checked in a predictive fashion against hard reality are those that have enduring value.

But the difficulty of creating even probabilistically predictive models, and the relative infancy of our knowledge of models and how they correspond to real-world phenomena, should not lead us into denying the need for prediction, nor into self-justification in the face of these difficulties. Rather than a scholarly “the dog ate my homework,” let’s acknowledge where we are, and maintain our standards of what modeling needs to do to be effective and valuable in any practical or theoretical way. Lowering the bar (we can “train practitioners” and “discipline policy dialogue” even if we have no way of showing that any one model is better than another) does not help the cause of agent-based modeling in the long run.”

I felt this required a response – it seemed to me that the difference between logical prediction and temporal prediction was being missed:

“In my earlier post I wrote: “I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world'”. I was careful about how I worded this [more careful than ensuring correct formatting of the post it seems – my original post is below in a more human-readable format] and maybe some clarification in the light of Mike’s comments would be useful. Here goes…

Precisely predicting the future state of an ‘open’ system at a particular instance in time does not imply we have explained or understood it (due to the philosophical issues of affirming the consequent, equifinality, underdetermination, etc.). To be really useful for explanation and to have enduring value, model predictions of any system need to be cross-checked against hard reality *many times*, and in the case of societies probably also in many places (and should ideally be produced by models that are consistent with other theories). Producing multiple accurate predictions will be particularly tricky for things like the global economy, for which we only have one example (but of course it will be easier where experimental replication is more logistically feasible).

My point is two-fold:
1) a single, precise prediction of a future state does not really mean much with regard to our understanding of an open system,
2) multiple precise predictions are more useful but will be more difficult to come by.

This doesn’t necessarily mean that we will never be able to consistently predict the future of open systems (in Scott’s sense of correctly forecasting the timing and direction of change of specified indicators). I just think it’s a ways off yet, that there will always be uncertainty, and that we need to deal with this uncertainty explicitly via probabilistic output from model ensembles and other methods. Rather than lowering standards, a heuristic use of models demands we think more closely about *how* we model and what information we provide to policy makers (isn’t that the point of modelling policy outcomes in the end?).

Let’s be clear, the heuristic use of models does not allow us to ignore the real world – it still requires us to compare our model output with empirical data. And as Mike rightly pointed out, many of Epstein’s reasons to model – other than to predict – require such comparisons. However, the scientific modelling process of iteratively comparing model output with empirical data and then updating our models is a heuristic one – it does not require that precise prediction at specific point in the future is the goal before all others.

Lowering any level of standards will not help modelling – but I would argue that understanding and acknowledging the limits of using modelling in different situations in the short-term will actually help to improve standards in the long run. To develop this understanding we need to push models and modelling to their limits to find out what works, what we can do and what we can’t – that includes iteratively testing the temporal predictions of models. Iteratively testing models, understanding the philosophical issues of attempting to model social systems, exploring the use of models and modelling qualitatively (as a discussant, and a communication tool, etc.) should help modellers improve the information, the recommendations, and the working relationships they have with policy-makers.

In the long run I’d argue that both modellers and policy-makers will benefit from a pragmatic and pluralistic approach to modelling – one that acknowledges there are multiple approaches and uses of models and modelling to address societal (and environmental) questions and problems, and that [possibly self evidently] in different situations different approaches will be warranted. Predicting the future should not be the only goal of modelling social (or environmental) systems and hopefully this thread will continue to throw up alternative ideas for how we can use models and the process of modelling.”
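The ensemble idea mentioned in my post can be sketched very simply. The ‘model’ here is a deliberately trivial stand-in (its growth rate, shock probability and shock size are all invented figures): the point is only that many runs of a stochastic model yield a probability for an outcome, rather than a single point prediction.

```python
import random

def toy_model(growth=0.02, shock_prob=0.1, steps=20, seed=None):
    """A toy indicator that grows each step but suffers occasional random shocks.
    Stands in for any stochastic simulation model."""
    rng = random.Random(seed)
    x = 1.0
    for _ in range(steps):
        x *= (1 + growth)
        if rng.random() < shock_prob:
            x *= 0.9  # a shock knocks 10% off the indicator
    return x

# Ensemble: many runs with different random seeds (and, in practice,
# perturbed parameters too) yield a distribution rather than a number.
runs = [toy_model(seed=s) for s in range(1000)]
p_decline = sum(1 for x in runs if x < 1.0) / len(runs)
print(f"P(indicator below its starting value after 20 steps): {p_decline:.2f}")
```

A policy-maker is then handed a probability with explicit uncertainty attached, not a single trajectory that is almost guaranteed to be wrong in its particulars.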

Note that I didn’t explicitly point out the difference between the two different uses of prediction (that Oreskes and others have previously highlighted). It took Dan Olner, a couple of posts later, to explicitly describe the difference:

“We need some better words to describe model purpose. I would distinguish two –

a. Forecasting (not prediction) – As Mike Sellers notes, future prediction is usually “inherently probabilistic” – we need to know whether our models can do any better than chance, and how that success tails off as time passes. Often when we talk about “prediction” this is what we mean – prediction of a more-or-less uncertain future. I can’t think of a better word than forecasting.

b. Ontological prediction (OK, that’s two words!) – a term from Gregor Betz, Prediction Or Prophecy (2006). He gives the example of the prediction of Neptune’s existence from Newton’s laws – Uranus’ orbit implied that another body must exist. Betz’s point is that an ontological prediction is “timeless” – the phenomenon was always there. Einstein’s prediction about light bending near the sun is another: something that always happened, we just didn’t think to look for it. (And doubtless Eddington wouldn’t have considered *how* to look, without the theory.)

In this sense forecasting (my temporal prediction) is distinctly temporal (or spatial) and demands some statement about when (or where) an event or phenomenon will occur. In contrast, ontological prediction (my logical prediction) is independent of time and/or space and is often used in closed system experiments searching for ‘universal’ laws. I wrote more about this in a series of blog posts a while back on the validation of models of open systems.

This discussion is ongoing on SIMSOC and Scott Moss has recently posted again suggesting a summary of the early responses:

“I think a perhaps extreme summary of the common element in the responses to my initial question (what is the point?, 9/6/09) is this:

**The point of modelling is to achieve precision as distinct from accuracy.**

That is, a model is a more or less complicated formal function relating a set of inputs clearly to a set of outputs. The formal inputs and outputs should relate unambiguously to the semantics of policy discussions or descriptions of observed social states and/or processes.

This precision has a number of virtues including the reasons for modelling listed by Josh Epstein. The reasons offered by Epstein and expressed separately by Lynne Hamill in her response to my question include the bounding and informing of policy discussions.

I find it interesting that most of my respondents do not consider accuracy to be an issue (though several believe that some empirically justified frequency or even probability distributions can be produced by models). And Epstein explicitly avoids using the term validation in the sense of confirmation that a model in some sense accurately describes its target phenomena.

So the upshot of all this is that models provide a kind of socially relevant precision. I think it is implicit in all of the responses (and the Epstein note) that, because of the precision, other people should care about the implications of our respective models. This leads to my follow-on questions:

Is precision a good enough reason for anyone to take seriously anyone else’s model? If it is not a good enough reason, then what is?”

And so arises the debate about the importance of accuracy over precision (but the original ‘What is the point’ thread continues also). In hindsight, I think it may have been more appropriate for me to use the word accurate than precise in my postings. All this debate may seem to be just semantics and navel-gazing to many people, but as I argued in my second post, understanding the underlying philosophical basis of modelling and representing reality (however we might measure or perceive it) gives us a better chance of improving models and modelling in the long run…

Abandon Hope

Last Friday I was aiming to go to a seminar by Dr Michael Nelson entitled An Unprecedented Challenge: Environmental Ethics and Global Climate Change. Unfortunately time flies when you’re coding [our ecological-economic forest simulation model] and I missed it.

But I found a few bits and pieces on the MSU website that I assume are related. Like his recent article Abandon Hope in The Ecologist (written with John Vucetich), and this associated MSU interview in which he outlines his argument.

Even if they aren’t quite what was discussed on Friday, it’s still interesting stuff. Nelson’s argument is that if the only reason we have to live sustainably is the hope that environmental disaster will be averted, it’s unlikely that we’ll actually avert those disasters. Why? Because hope is a pretty weak argument when confronted by a continual news stream about how unsustainable western societies are and because many messages suggest disaster is inevitable.

It seems much of this argument stems from Nelson’s dissatisfaction with books like Jared Diamond’s Collapse, which spends the vast majority of its 500 pages discussing the demise of previous societies and what could go wrong now, before finishing with a five-page section entitled Reasons for Hope.

Nelson’s dissatisfaction reminds me of William Cronon’s argument against the Grand Narratives of global environmental problems that I wrote about previously.

Cronon argued that global, ‘prophetic’ narratives are politically and socially inadequate because they don’t offer the possibility of individual or group action to address global problems. Such ‘big’ issues are hard for individuals to feel like they can do anything about.

Part of Cronon’s solution was the identification of ‘smaller’ (more focused) stories that individuals will be better able to empathise with. However, Cronon also played the hope card – suggesting that these more focused narratives offer individuals more hope than the global narratives.

Focusing on smaller issues closer to home may help – doesn’t hope become a stronger argument when the problems faced are less complex and the solutions are seemingly closer at hand? But Nelson seems to be suggesting that (as any ardent sports fan will tell you) it’s the hope that kills you.

“Instead of hope we need to provide young people with reasons to live sustainably that are rational and effective. We need to equate sustainable living, not so much with hope for a better future, but with basic virtues such as sharing and caring, which we already recognize as good in and of themselves, and not because of their measured consequences.”

Nelson’s is an ethical argument – that living sustainably should be portrayed as ‘the right thing to do’, and that we should do it regardless of the consequences.

But then the question arises: how do we live sustainably? How do I know what the right thing to do is? Given a choice (on what printer paper to buy, for example) what decision do I make if I want to be sustainable? In order to make this choice we immediately need to start measuring the future consequences of our decisions. The future is an inherent part of the sustainability concept – it is about maintaining system processes or function into the future. So when we make our lifestyle decisions now, guided as they might be by the virtue of ‘doing the right thing’, we still need to have some idea about how we want the future to be, and which actions are more likely to get us there.

Nelson may be right – blind hope in a better future may prove counterproductive given the current stream of global, prophetic, doomsaying narratives. But just saying ‘do the right thing’ may be equally confusing for many people. Nelson isn’t arguing that this is all we should do, of course – he also suggests there is a “desperate need for environmental educators, writers, journalists and other leaders to work these [virtuous] ideas into their efforts”. It would be a good thing if living sustainably was more widely understood as ‘doing the right thing’. But this virtue will remain largely irrelevant if we don’t also work out how individuals and societies can live sustainably.

So what’s the result of all this thinking? It seems we should be focusing less on doomsaying prophetic narratives (boiling seas bleaching coral reefs on continents thousands of miles away, stories of global warming when there’s a foot of snow outside, and so on) and more on what the individual person or group can do now, themselves, practically. In conjunction with the argument of acting virtuously with respect to sustainability, this focus may provide people with ‘rational and effective’ reasons, leaving them feeling more optimistic about the future and empowered to lead sustainable lives.

Update – 6th March
Okay, how about a couple of quick examples to go with that rhetoric? The cover story of this month’s National Geographic Magazine is a good one – Peter Miller looks at how we can start making energy savings (reducing CO2 emissions) around our own homes. And of course, I should have already pointed out the BBC’s Ethical Man as he works out how to keep his environmental impact to a minimum. Currently he’s attempting to traverse the USA without flying or driving. The ethics of Ethical Man are more implied than stated explicitly, but it’s another example of the sort of reporting discussed above – showing how individuals can act now rather than merely hoping for a better future.

PEST or Panacea?

Although some may say blogging is dead, the editors at Nature think it’s good to blog. The Nature editors discuss the place of blogging in scientific discourse, focusing on the reporting of results from papers in press (i.e. accepted by a journal for publication but not actually in print yet). They suggest that if the results of an article in press are reported at a conference then they are fair game for discussion and blogging. And they argue that “[m]ore researchers should engage with the blogosphere, including authors of papers in press”.

I wish I had more papers in the in press pile. Unfortunately I’ve got more in the under review pile (see my previous post), but at least I’m adding to it. Earlier this week David Demeritt, Sarah Dyer and I submitted a manuscript to Transactions of the Institute of British Geographers. The paper discusses public engagement in science and technology and examines some of the practical challenges such a collaboration entails. One of the examples we use is the work I did during my PhD examining the communication of my model results with local stakeholders. It’s only just been submitted, so I’ll post the abstract for now. As we get further along the review process toward the in press stage (with this and other papers) I’ll return to see if we can spark some debate.

David Demeritt, Sarah Dyer and James Millington
PEST or Panacea? Science, Democracy, and the Promise of Public Participation
Submitted Abstract
This paper explores what is entailed by the emerging UK consensus on the need for increased public engagement in science and technology, or PEST as we call it. Common to otherwise incompatible instrumental and de-ontological arguments for PEST is an associated claim that increased public engagement will also somehow make for ‘better’ science and science-based policy. We distinguish two different ways in which PEST might make such a substantive contribution, which we term ‘normative steering’ and ‘epistemic checking’. Achieving those different aims involves engaging with different publics in different ways to different ends. Accordingly, we review a number of recent experiments in PEST to assess the practical challenges in delivering on its various substantive promises. The paper concludes with some wider reflections on whether public engagement in science is actually the best way of resolving the democratic dilemmas to which PEST is addressed.

Predicting 2009

Over the holiday period the media offer us plenty of fodder to discuss the past year’s events and what the future may hold. Whether it’s current affairs, music, sport, economics or any other aspect of human activity, most media outlets have something to say about what people did that was good, what they did that was bad, and what they’ll do next, in the hope that they can keep their sales up over the holiday period.

Every year The Economist publishes a collection of forecasts and predictions for the year ahead. The views and opinions of journalists, politicians and business people accompany interactive maps and graphs that provide numerical analysis. But how good are these forecasts and predictions? And what use are they? This year The Economist stopped to look back on how well it performed:

“Who would have thought, at the start of 2008, that the year would see crisis engulf once-sturdy names from Freddie Mac and Fannie Mae to AIG, Merrill Lynch, HBOS, Wachovia and Washington Mutual (WaMu)?

Not us. The World in 2008 failed to predict any of this. We also failed to foresee Russia’s invasion of Georgia (though our Moscow correspondent swears it was in his first draft). We said the OPEC cartel would aim to keep oil prices in the lofty range of $60-80 a barrel (the price peaked at $147 in July)…”

And on the list goes. Not that any of us are particularly surprised, are we? So why should we bother to read their predictions for the next year? In its defence, The Economist offers a couple of points. First, the usual tactic (for anyone defending their predictions) of pointing out what they actually did get right (slumping house prices, interest-rate cuts, etc). But then they highlight a perspective which I think is almost essential when thinking about predictions of future social or economic activity:

“The second reason to carry on reading is that, oddly enough, getting predictions right or wrong is not all that matters. The point is also to capture a broad range of issues and events that will shape the coming year, to give a sense of the global agenda.”

Such a view is inherently realist. Given the multitudes of interacting elements and potential influences affecting economic systems, given that it is an ‘open’ historical system, producing a precise prediction about future system states is nigh-on impossible. Naomi Oreskes has highlighted the difference between ‘logical prediction’ (if A and B then C) and ‘temporal prediction’ (event C will happen at time t + 10), and this certainly applies here [I’m surprised I haven’t written about this distinction on this blog before – I’ll try to remedy that soon]. Rather than simply developing models or predictions with the hope of accurately matching the timing and magnitude of future empirical events, I argue that we will be better placed (in many circumstances related to human social and economic activity) to use models and predictions as discussants to lead to better decision-making and as means to develop an understanding of the relevant causal structures and mechanisms at play.

In a short section of his recent book and TV series, The Ascent of Money, Niall Ferguson talks about the importance of considering history in economic markets and decision-making. He presents the example of Long Term Capital Management (LTCM) and their attempt to use mathematical models of the global economic system to guide their trading decision-making. In Ferguson’s words, their model was based on the following set of assumptions about how the system worked:

“Imagine another planet – a planet without all the complicating frictions caused by subjective, sometimes irrational human beings. One where the inhabitants were omniscient and perfectly rational; where they instantly absorbed all new information and used it to maximise profits; where they never stopped trading; where markets were continuous, frictionless and completely liquid. Financial markets on this planet would follow a ‘random walk’, meaning that each day’s prices would be quite unrelated to the previous day’s but would reflect all the relevant information available.” p.320

Using these assumptions about how the world works, the Nobel prize-winning mathematicians Myron Scholes and Robert C. Merton derived a mathematical model. Initially the model performed wonderfully, allowing returns of 40% on investments for the first couple of years. However, crises in the Asian and Russian financial systems in 1997 and 1998 – not accounted for in the assumptions of the mathematical model – resulted in LTCM losing $1.85 billion through the middle of 1998. The model assumptions were unable to account for these events, and subsequently its predictions were inaccurate. As Ferguson puts it:

“…the Nobel prize winners had known plenty of mathematics, but not enough history. They had understood the beautiful theory of Planet finance, but overlooked the messy past of Planet Earth.” p.329

When Ferguson says ‘not enough history’, his implication is that the mathematical model was based on insufficient empirical data. Had the mathematicians used data that covered the variability of the global economic system over a longer period of time it may have included a stock market downturn similar to that caused by Asian and Russian economic crises. But a data set for a longer time period would likely have been characterised by greater overall variability, requiring a greater number of parameters and variables to account for that variability. Whether such a model would have performed as well as the model they did produce is questionable, as is the potential to predict the exact timing and magnitude of any ‘significant’ event (e.g. a market crash).
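Ferguson’s ‘not enough history’ point can be illustrated with a quick sketch. Here a normal random-walk model of daily returns, of the kind LTCM’s assumptions imply, is fitted to simulated ‘calm’ data (all figures are invented for illustration); the fitted model then assigns an essentially zero probability to a crisis-sized one-day fall, which is exactly the blindness described above.

```python
import math
import random

random.seed(1)
# Daily returns from a 'calm' period: small, independent, normally
# distributed moves -- the world the model's assumptions describe.
calm = [random.gauss(0.0005, 0.01) for _ in range(500)]

mean = sum(calm) / len(calm)
sigma = math.sqrt(sum((r - mean) ** 2 for r in calm) / len(calm))

def normal_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

# How likely does the fitted model think a crisis-sized one-day drop is?
crash = -0.10  # a 10% fall in a single day, roughly crisis scale
z = (mean - crash) / sigma
print(f"Fitted daily sigma: {sigma:.3%}; "
      f"model's P(one-day drop of 10% or more): {normal_tail(z):.2e}")
```

The fitted model puts such a fall at tens of standard deviations out, i.e. ‘never’, yet such falls happen. A longer history would raise the fitted variability, but, as argued above, at the cost of a more complicated model and still no guarantee of catching the next crisis.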

But further, Ferguson also points out that the problem with the LTCM model wasn’t just that they hadn’t used enough data to develop their model, but that their assumptions (i.e. their understanding of Planet Finance) just aren’t realistic enough to accurately predict Planet Earth over ‘long’ periods of time. Traders and economic actors are not perfectly rational and do not have access to all the data all the time. Such a situation has led (more realistic) economists to develop ideas like bounded rationality.

Assuming that financial traders try to be rational is likely not a bad assumption. But it has been pointed out that “[r]ationality is not tantamount to optimality”, and that in situations where information, memory or computing resources are not complete (as is usually the case in the real world) the principle of bounded rationality is a more worthwhile approach. For example, Herbert Simon recognised that rarely do actors in the real world optimise their behaviour, but rather they merely try to do ‘well enough’ to satisfy their goal(s). Simon termed this non-optimal behaviour ‘satisficing’, the basis for much of bounded rationality theory since. Thus, satisficing is essentially a cost-benefit tradeoff, establishing when the utility of an option exceeds an aspiration level.
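Simon’s satisficing rule is easy to sketch. The options, utility function and aspiration level below are hypothetical (echoing the printer-paper example from the previous post): the agent takes the first option that clears its aspiration level, rather than examining every option in search of the optimum.

```python
def satisfice(options, utility, aspiration):
    """Return the first option whose utility meets the aspiration level,
    rather than searching all options for the optimum (Simon's satisficing)."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # nothing satisfices; a real agent might then lower its aspiration

# Hypothetical choice of printer paper, scored by a simple utility.
papers = [
    {"name": "budget",   "recycled": 0.0, "price": 3.0},
    {"name": "standard", "recycled": 0.5, "price": 4.0},
    {"name": "premium",  "recycled": 1.0, "price": 6.0},
]
score = lambda p: p["recycled"] * 10 - p["price"]  # greener and cheaper = better
choice = satisfice(papers, score, aspiration=0.5)
print(choice["name"])  # 'standard' clears the aspiration; 'premium' is never examined
```

Note the cost-benefit tradeoff: search stops as soon as an option exceeds the aspiration level, so a ‘good enough’ option early in the list wins even when a better one exists further on.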

Thinking along the same lines George Soros has developed his own ‘Human Uncertainty Principle’. This principle “holds that people’s understanding of the world in which they live cannot correspond to the facts and be complete and coherent at the same time. Insofar as people’s thinking is confined to the facts, it is not sufficient to reach decisions; and insofar as it serves as the basis of decisions, it cannot be confined to the facts. The human uncertainty principle applies to both thinking and reality. It ensures that our understanding is often incoherent and always incomplete and introduces an element of genuine uncertainty – as distinct from randomness – into the course of events.

The human uncertainty principle bears a strong resemblance to Heisenberg’s uncertainty principle, which holds that the position and momentum of quantum particles cannot be measured at the same time. But there is an important difference. Heisenberg’s uncertainty principle does not influence the behavior of quantum particles one iota; they would behave the same way if the principle had never been discovered. The same is not true of the human uncertainty principle. Theories about human behavior can and do influence human behavior. Marxism had a tremendous impact on history, and market fundamentalism is having a similar influence today.” Soros (2003) Preface

This final point has been explored in more detail by Ian Hacking in his discussion of the differences between interactive and indifferent kinds. Both of these views (satisficing and the uncertainty principle) implicitly understand that the context in which an actor acts is important. In the perfect world of Planet Finance and its associated mathematical models, context is non-existent.

In response to the problems encountered by LTCM, “Merrill Lynch observed in its annual reports that mathematical risk models, ‘may provide a greater sense of security than warranted; therefore, reliance on these models should be limited’“. I think it is clear that humans need to make decisions (whether they be social, economic, political, or about any resource) based on human understanding derived from empirical observation. Quantitative models will help with this but cannot be used alone, partly because (as numerous examples have shown) it is very difficult to make (accurate) predictions about future human activity. There are likely general behaviours that we can expect and use in models (e.g., the aim of traders to make a profit). But how those behaviours play out in the different contexts provided by the vagaries of day-to-day events and changes in global economic, political and physical conditions will require multiple scenarios of the future to be examined.

My personal view is that one of the primary benefits of developing quantitative models of human social and economic activity is that they allow us to make explicit our implicitly held models. Developing quantitative models forces us to be structured about our worldview – writing it down (often in computer code) allows others to scrutinise that model, something that is not possible if the model remains implicit. In some situations, such as private financial strategy-making, the openness of this approach may not be welcome (because it is not beneficial for a competitor to know your model of the world). But in other decision-making situations, for example those concerning environmental resources, this approach will be useful to foster greater understanding about how the ‘experts’ think the world works.

By writing down their expectations for the forthcoming year the experts at The Economist are making explicit their understanding of the world. It’s not terribly important that they don’t get everything right – there’s very little possibility that will happen. What is important is that it helps us to think about potential alternative futures, what factors are likely to be most important in determining future events, how these factors and events are (inter)related, and what the current state of the world implies for the likelihood of different future states. This information might then be used to shape the future as we would like it to be, based on informed expectations. Quantitative models of human social and economic activity also offer this type of opportunity.

Geographical Perspectives: Externalities, Inputs and Participation

One of the most enjoyable things about studying as a post-graduate in a UK Geography department was the diversity of conversation topics I could get myself into in the corridors, over lunch, and after work in the pub. Investigating social, economic, cultural, atmospheric, geomorphological, and ecological patterns and processes (to name just a few), geography departments contain scholars with interests and skills that span the globe’s physical and social environments. This variety of backgrounds and worldviews can lead to widely differing perspectives on the current affairs of any particular day.

In many ways my PhD studies, funded by an interdisciplinary research studentship from the ESRC and NERC, allowed (demanded?) me to search out these differing perspectives and engage in these conversations. However, this diversity of perspectives isn’t appealing for faculty members focused narrowly on their own particular research specialism and the current paper they are writing about it. Maybe they just don’t have time. Or maybe there’s something deeper.

The distinction between the social sciences (human geography) and natural sciences (physical geography) has led to something of a divide between these two ‘sides’ of Geography. As my former tutor and advisor Prof. David Demeritt highlights in the latest volume of the Transactions of the Institute of British Geographers, ‘human’ and ‘physical’ geographers have become so estranged that dedicated forums to initiate ‘conversations across the divide‘ of Geography now occur regularly at annual conferences. Demeritt’s article discusses how ‘Environmental Geography’ is often touted as having the integrative research potential to bridge the human-physical divide.

Environmental Geography (EG) explicitly sets out to examine human-environment interactions and is generally understood to be the intersection of Human and Physical in the Geography Venn diagram. Essentially, EG is the Geographical version of the Coupled Human and Natural Systems (CHANS) research program that has become prominent recently largely thanks to NSF funding. Whereas CHANS emphasises systemic concepts (thresholds, feedbacks, resilience etc.), EG emphasises concepts more at home in the geographical lexicon – scale, space and (seemingly most often absent from CHANS research) place. This is not to say that these concepts are exclusively used by either one or the other – whether you do ‘CHANS research’ or ‘Environmental Geography’ is also likely to be determined by where your research funding comes from, what department you work in, and the type of training you received in graduate school.

One of the main points Demeritt makes in his commentary is that this flat distinction between Human and Physical Geography is not as straightforward as it is often made out to be. Friedman’s world may be flat, but the Geography world isn’t. Demeritt attempts to illustrate this with a new diagrammatic 3D representation of the overlap between the many sub-disciplines of Geography (most of which are also academic disciplines in their own right):

Demeritt's 2008 three dimensional interpretation of the relationship between sub-disciplines in Geography
Thus, “Rather than thinking about geography just in terms of a horizontal divide between human and physical geography, we need to recognise the heterogeneity within those very broad divisions. …within those two broad divisions geography is stretched out along a vertical dimension. … Like the fabled double helix, these vertical strands twist round each other and the horizontal connections across the human-physical divide to open up new opportunities for productive engagement.” [p.5]

This potential doesn’t come without its challenges, however. Demeritt uses EG to demonstrate such challenges, highlighting how research in this field is often ‘framed’. ‘Framing’ here refers to the perspective researchers take about how their subject (in this case interactions between humans and the natural environment) will be (should be) studied. Demeritt highlights three particular perspectives:

1. The Externality Perspective. This perspective might be best associated with the reductionist mode of scientific investigation, where a specific component of a human-environment system is considered in isolation from any other components. Research disregards other work in sub-disciplines, whether horizontally across the human-physical divide or vertically on either side, and concentrates on understanding a specific phenomenon or process.

2. The Integrated Perspective. We might think of this perspective as being loosely systemic. Rather than simply ignoring the connections with other processes and phenomena considered in other sub-disciplines, they are used as some form of ‘input’ to the component under particular consideration. This is probably the mode that most closely resembles how much CHANS research, and most ‘interdisciplinary’ environmental research, is currently done.

3. The Participatory Perspective. This third approach has become more prominent recently, associated with calls for more democratic forms of science-based decision-making and as issues of expertise and risk have come to the fore in environmental issues. This mode demands that scientists and researchers become more engaged with publics, stakeholders and decision-makers, and is closely related to the perspective of ‘critical’ geography and proponents of ‘post-normal’ science.

Demeritt discusses the benefits and challenges of these approaches in more detail, as I have briefly touched on previously. Rather than go over them again, here I want to think a bit more about the situations in which each of these modes of research might be most useful. In turn, this will help us to think about where engagement with other disciplines and sub-disciplines will be most fruitful.

One situation in which the externality perspective would be most useful is when the spatial/temporal scope of the process or phenomenon of interest makes engagement between (sub-)disciplines either useless or impossible. For example, reconciling economic or cultural processes with Quaternary research is likely to be extraordinarily difficult (but see Wainwright 2008). A second would be when investigation is interested more in ‘puzzle-solving’ than ‘problem-solving’. For example, with regard to research on Northern Hardwood Forests, the puzzler would ask questions like ‘what is the biological relationship between light availability and tree growth?’ whereas the problem-solver might ask ‘how should we manage our timber harvest to ensure sufficient light availability allows continued regeneration of younger trees in the forest understory?’.

The integrated approach has often been used in situations where one ‘more predictable’ system is influenced by another ‘less predictable’ system. One system might be more predictable than another because more data are available for it, because fewer assumptions are invoked to ‘close’ it for study, or simply because the systems are perceived to be more or less predictable. A prime example is the use of scenarios of global social and economic change to set the parameters of investigations of future climate change (although this example may actually have slowed problem-solving rather than sped it up).

The participatory perspective will be useful when system uncertainties are primarily ethical or epistemological. Important questions here are ‘what are the ethical consequences of my studying this phenomenon?’ and ‘are sufficient theoretical tools available to study this problem?’. Further, in contrast to the externality mode, this approach will be useful when investigation is interested in ‘problem-solving’ rather than ‘puzzle-solving’. For example, participatory research will be most useful when the research question is ‘how do we design a volcano monitoring system to efficiently and adequately alert local populations such that they can/will respond appropriately in the event of an eruption?’ rather than ‘what are the physical processes in the Earth’s interior that cause volcanoes to erupt when they do?’

Implicit in the choice of which question is asked in this final example is the framing of the issue at hand. Hopefully it is clear from my brief outline that there is a close relationship between research objectives and the framing or mode of the research. How these objectives and framings are arrived at is really at the root of Demeritt’s commentary. Given the choice, it will be tempting for many researchers to take the easy option:

‘Engaging with other perspectives and approaches is not just demanding, but also risky too. … Progress in science has always come precisely from exposing ourselves to the possibility of getting it wrong or that things might not work out quite as planned’. [p.9]

Thinking clearly about the situations in which different modes of study are most useful might help save both embarrassment and time. Further, it also seems sensible to suggest that most thought should be done when researchers are considering engaging non-scientists in the participatory mode. If it is risky to expose oneself to fellow scientists, who understand the foibles of the research process and the difficulties of grappling with new ideas and data sets, it will be even more risky when the exposure is to non-scientists. Decision-makers, politicians, ‘lay persons’ and the general public at large are likely to be less acquainted with (but not ignorant of) how research proceeds (messily), how knowledge is generated (often a mixture of deductive proofs and inductive ideas), and the assumptions (and limitations) implicit in data collection and analysis. So when should academics feel most confident about parachuting in from the ivory tower?

First, it seems important for scientists to avoid telling people things they already ‘know’. Just because it hasn’t been written down in a scientific journal doesn’t mean it isn’t known (not that I want to get into discussion here about when something becomes ‘known’). We should try very hard to work out where help is needed to harness local knowledge, rather than ignoring it and assuming we know best (this of course harks back to the third wave). For example, while local farmers may know a lot about the history and consequences of land use/cover change in their local area, they may struggle to understand how land use/cover change will occur, or influence other processes, over larger spatial extents (e.g. landscape connectivity of species habitat or wildfire fuel loadings). In other situations, local knowledge may be entirely absent because a given phenomenon is outside the perception/observation of the local community. In this case, it will be very difficult (or impossible) for them to contribute to knowledge formation even though the phenomenon affects them. For example, the introduction of genetically modified crops will potentially have impacts on other nearby vegetation species due to hybridization, yet the processes at work are at a scale that is unobservable to lay persons (i.e., genetic recombination at the molecular level versus farmland biodiversity at the landscape level).

The important point in all this however (as it occurs to me), seems to be that the ‘framing’ one researcher or scientist adopts will depend on their particular objectives. If those objectives are of the scientific puzzle-solving kind, and can be framed so that the solution can be found without leaving the comfy environment of a single sub-discipline, engagement will not happen (and neither should it). The risks it poses mean that engagement will happen only if funding bodies demand it (as they increasingly are) or if the research is really serious about solving a problem (as opposed to solving a puzzle or simply publishing scientific articles). As the human population grows within a finite environment the human-environment interface will only grow, likely demanding more and more engaged research. As I’ve highlighted before, a genuine science of sustainability is more likely to succeed if it adopts an engaged, participatory (post-normal) stance toward its subject.

Engaging researchers from other (sub-)disciplines or non-scientists will not always be the best option. But Geography and geographers are well placed to help develop theory and thinking to inform other scientists about how to frame environmental problems and establish exactly when engaging with experts (whether certified or not) from outside their field, or even from outside science itself, will be a fruitful endeavour. Geographers will only gain the authority on when and how interdisciplinary and participatory research should proceed once they’ve actually done some.

Demeritt, D. (2008) From externality to inputs and interference: framing environmental research in geography. Transactions of the Institute of British Geographers 34(1), 3-11
Published Online: 11 Dec 2008


Towards the end of last week the MSU Environmental Science and Public Policy Program held a networking event on Coupled Human and Natural Systems (CHANS). These monthly events provide opportunities for networking around different environmental issues and last week was the turn of the area CSIS focuses on. The meeting reminded me of a couple of things I thought I would point out here.

First is the continued commitment that the National Science Foundation (NSF) is making to funding CHANS research. The third week in November will be the annual deadline for research proposals, so watch out for (particularly) tired looking professors around that time of year.

Second, I realized I haven’t highlighted on this blog one of the NSF CHANS projects currently underway at CSIS. CHANS-Net aims to develop an international network of research on CHANS to facilitate communication and collaboration among members of the CHANS research community. Central to the project is the establishment of an online meeting place for research collaboration. An early version of the website is currently in place and improvements are being planned. I was asked for a few suggestions earlier this week and it made me realise how interested I am in the potential of the technologies that have arrived with web 2.0 (I suppose that interest is also clear right here in front of you on this blog). I hope to be able to continue to make suggestions and participate in the development of the site from afar (there’s too much to be doing elsewhere to get my hands really dirty on that project). Currently, only Principal Investigators (PIs) and Co-PIs on NSF funded CHANS projects are members of the network, but hopefully opportunities for wider participation will be available in the future. In that event, I’ll post again here.

Modelling Pharmaceuticals in the Environment

On Friday I spoke at a workshop at MSU that examined a subject I’m not particularly well acquainted with. Participants in Pharmaceuticals in the Environment: Current Trends and Research Priorities convened to consider the natural, physical, social, and behavioral dimensions regarding the fate and impact of pharmaceutical products in the natural environment. The primary environmental focus of this issue is the presence of toxins in our water supply as a result of the disposal of human or veterinary medicines. I was particularly interested in what Dr. Shane Snyder had to say about water issues facing Las Vegas, Nevada.

So what did I have to do with all this? Well the organisers wanted someone from our research group at the Center for Systems Integration and Sustainability to present some thoughts on how modelling of coupled human and natural systems might contribute to the study of this issue. The audience contained experts from a variety of disciplines (including toxicologists, chemists, sociologists, political scientists) and given my limited knowledge about the subject matter I decided I would keep my presentation rather broad in message and content. I drew on several of the topics I have discussed previously on this blog: the nature of coupled human-natural systems, reasons we might model, and potential risks we face when modelling CHANS.

In particular, I suggested that if prediction of a future system state is our goal we will be best served focusing our modelling efforts on the natural system and then using that model with scenarios of future human behaviour to examine the plausible range of states the natural system might take. Alternatively, if we view modelling as an exclusively heuristic tool we might better envisage the modelling process as a means to facilitate communication between disparate groups of experts or publics, and explore what different conceptualisations allow and prevent from happening with regard to our stewardship or management of the system. Importantly, in both cases the act of making our implicitly held models of how the world works explicit by laying down a formal model structure is the primary value of modelling CHANS.
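The first of these two uses can be sketched in a few lines of code. The following is purely illustrative (the one-box decay model, the scenario names and every parameter value are invented for the example, not drawn from the workshop): a simple natural-system model of pharmaceutical concentration in a water body is driven by alternative scenarios of future human behaviour, and the plausible range of end states is examined:

```python
def simulate_concentration(annual_loads, decay_rate=0.3, c0=0.0):
    """Hypothetical one-box model: concentration rises with the annual
    pharmaceutical load entering the water body and falls by first-order
    decay. All values are illustrative, not empirical."""
    c = c0
    trajectory = [c]
    for load in annual_loads:
        c = c + load - decay_rate * c
        trajectory.append(c)
    return trajectory

# Scenarios of future human behaviour drive the natural-system model.
years = 20
scenarios = {
    "business_as_usual": [1.0] * years,
    "take_back_scheme":  [1.0 - 0.03 * y for y in range(years)],  # disposal falls
    "ageing_population": [1.0 + 0.05 * y for y in range(years)],  # drug use rises
}

# The envelope of final concentrations across scenarios, not a single forecast.
envelope = {name: simulate_concentration(loads)[-1]
            for name, loads in scenarios.items()}
```

The point of the design is that the natural-system model is kept simple and fixed while the hard-to-predict human behaviour is varied across scenarios, so the output is a range of plausible states rather than one prediction.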

There was brief talk towards the end of the meeting about setting up a workshop website that might even contain audio/video recordings of presentations and discussions that took place. If such a website appears I’ll link to it here. In the meantime, the next meeting I’ll be attending on campus is likely to be the overview of Coupled Human-Natural Systems discussion in the Networking for Environmental Researchers program.

Science Fictions

What’s happened to this blog recently? I used to write things like this and this. All I seem to have posted recently are rather vacuous posts about website updates and TV shows I haven’t watched (yet).

Well, one thing that has prevented me from posting recently has been that I’ve spent some of my spare time (i.e., when I’m not at work teaching or having fun with data manipulation and analysis for the UP modelling project) working on a long-overdue manuscript.

Whilst I was visiting at the University of Auckland back in 2005, David O’Sullivan, George Perry and I started talking about the benefits of simulation modelling over less-dynamic forms of modelling (such as statistical modelling). Later that summer I presented a paper at the Royal Geographical Society Annual Conference that arose from these discussions. We saw this as our first step toward writing a manuscript for publication in a peer review journal. Unfortunately, this paper wasn’t at the top of our priorities, and whilst on occasions since I have tried to sit down to write something coherent, it has only been this month [three years later!] that I have managed to finish a first draft.

Our discussions about the ‘added value’ of simulation modelling have focused on the narrative properties of this scientific tool. The need for narratives in scientific fields that deal with ‘historical systems’ has been recognised by several authors previously (e.g. Frodeman in Geology), and in his 2004 paper on Complexity Science and Human Geography, David suggested that there was room, if not a need, for greater reference to the narrative properties of simulation modelling.

What inspired me to actually sit down and write recently was some thinking and reading I had been doing related to the course I’m teaching on Systems Modelling and Simulation. In particular, I was re-acquainting myself with Epstein’s idea of ‘Generative Social Science‘ to explain the emergence of macroscopic societal regularities (such as norms or price equilibria) arising from the local interaction of heterogeneous, autonomous agents. The key tool for the generative social scientist is agent-based simulation, in which those agents act in a spatially-explicit environment and possess bounded (i.e. imperfect) information and computing power. The aim of the generative social scientist is to ‘grow’ (i.e. generate) the observed macroscopic regularity from the ‘bottom up’. In fact, for Epstein this is the key to explanation – the demonstration of a micro-specification (properties or rules of agent interactions and change) able to generate the macroscopic regularity of interest is a necessary condition for explanation. Describing the final aggregate characteristics and effects of these processes without accounting for how they arose due to the interactions of the agents is insufficient in the generativist approach.
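A toy sketch can illustrate the generative recipe (this is not one of Epstein's own models; the two conventions, the ring topology and the update rule are all invented for illustration). Agents hold one of two arbitrary conventions and repeatedly conform to their immediate neighbours, and a macroscopic regularity – large homogeneous blocks of shared convention – is ‘grown’ from purely local interactions:

```python
import random

def run_convention_model(n_agents=100, steps=5000, seed=0):
    """Minimal 'generative' sketch: agents on a ring each hold one of two
    arbitrary conventions (0 or 1). At every step a randomly chosen agent
    looks only at its two immediate neighbours (bounded, local information)
    and adopts their convention if they agree, otherwise keeps its own.
    No agent knows or aims for the global outcome."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        left, right = state[i - 1], state[(i + 1) % n_agents]
        if left == right:  # a local majority exists among the neighbours
            state[i] = left
    return state

final = run_convention_model()
```

Under this rule the number of boundaries between blocks of differing convention can never increase, so order coarsens from the bottom up – a (very small) instance of Epstein's demonstration that a micro-specification can generate a macroscopic regularity.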

As I was reading I was reminded of the recent suggestion of the potential of a Generative Landscape Science. Furthermore, the generative approach really seemed to ring true to the critical realist perspective of investigating the world – understanding that regularity does not imply causation and explanation is achieved by identifying causal mechanisms, how they work, and under what conditions they are activated.

Thus, in the paper (or the first draft I’ve written at least – no doubt it will take on several different forms before we submit for publication!) after discussing the characteristics of the ‘open, middle-numbered’ systems that we study in the ‘historical sciences’, reviewing Epstein’s generative social science and presenting examples of the application of generative simulation modelling (i.e., discrete element or agent-based) to land use/cover change, I go on to discuss how a narrative approach might complement quantitative analysis of these models. Specifically, I look at how narratives could (and do) aid model explanation and interpretation, and the communication of these findings to others, and how the development of narratives will help to ‘open up’ the process of model construction for increased scrutiny.

In one part of this discussion I touch upon the keynote speech given by William Cronon at the RGS annual meeting in 2006 about the need for ‘sustainable narratives‘ of the current environmental issues we are facing as a global society. I also briefly look at how narrative might act as mediators between models and society (related to calls for ‘extended peer communities‘ and the like), and highlight where some of the potential problems for this narrative approach lie.

Now, as I’ve only just [!] finished this very rough initial draft, I’m going to leave the story of this manuscript here. David and George are going to chew over what I’ve written for a while and then it will be back to me to try to draw it all together again. As we progress on this iterative writing process, and the story becomes clearer, I’ll add another chapter here on the blog.