Social Network Analysis

As I mentioned in a tweet earlier this week, Prof. Ken Frank was ‘visiting’ CSIS this week. Ken studies organizational change and innovation using, amongst other methods, Social Network Analysis (SNA). SNA examines how the structure of ties between people affects individuals’ behaviour, how social network structure and composition influence the social norms of a group, and how resources (information, for example) flow through a social network. This week Ken organised a couple of seminars on the use of SNA to investigate natural resource decision-making (for example, in small-scale fisheries) and I joined a workshop he ran on how we actually go about doing SNA, learning about software like p2 and KliqueFinder. Ken showed us the two main models: the selection model and the influence model. The former addresses network formation, examining individuals’ networks and how they come to choose them. The latter examines how individuals are influenced by the people in their network and the consequences for their behaviour. As an example of how SNA might be used, take a look at this executive summary [pdf] of the thesis of a recent graduate student from MSU Fisheries and Wildlife.
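To make the distinction a little more concrete, here is a minimal sketch (in plain Python rather than the specialist packages mentioned above, and with entirely hypothetical people, ties and weights) of an influence-model-style update: each individual’s behaviour moves towards the average behaviour of the people they are tied to.

```python
# Minimal, illustrative influence-model-style update (hypothetical data).
# behaviour[i] is a continuous measure of person i's behaviour;
# ties[i] lists the people i is connected to in the social network.

behaviour = {"ana": 0.9, "ben": 0.2, "carla": 0.4, "dev": 0.7}
ties = {
    "ana":   ["ben", "carla"],
    "ben":   ["ana"],
    "carla": ["ana", "dev"],
    "dev":   ["carla"],
}

ALPHA = 0.6  # weight given to a person's own previous behaviour

def influence_step(behaviour, ties, alpha=ALPHA):
    """One update: each person moves towards the mean behaviour of their ties."""
    updated = {}
    for person, alters in ties.items():
        network_mean = sum(behaviour[a] for a in alters) / len(alters)
        updated[person] = alpha * behaviour[person] + (1 - alpha) * network_mean
    return updated

print(influence_step(behaviour, ties))
```

A selection model works the other way around, asking how the pattern of ties itself comes about given the attributes of the individuals.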

On Friday, after having been introduced through the week to what SNA is, I got to chat with Ken about how it might relate to the agricultural decision-making modelling I did during my PhD. In my agent-based model I used a spatial neighbourhood rule to represent the influence of social norms (i.e. whether a farmer is ‘traditional’ or ‘commercial’ in my categories). However, the social network of farmers is not solely determined by spatial relationships – farmers have kinship ties and might meet other individuals at the market or in the local cerveceria. We discussed how I might be able to use SNA to better represent the influence of other farmers on an individual’s decision-making in my model. I don’t have the network data needed to do this right now but it’s something to think about for the future.
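As a rough illustration of what we discussed (hypothetical farmers and rules, not the actual PhD model), the sketch below contrasts a spatial neighbourhood rule with a social-network rule for deciding which norm a farmer feels: in the first case influence comes from farmers on adjacent land parcels, in the second from kin and market contacts wherever they farm.

```python
# Hypothetical sketch: two ways of defining 'who influences whom' for a farmer.
# Neither is the actual thesis model; both are illustrative only.

def norm_from_ties(categories, influencers):
    """Adopt the majority category among a given set of influencing farmers."""
    commercial = sum(1 for f in influencers if categories[f] == "commercial")
    return "commercial" if commercial > len(influencers) / 2 else "traditional"

categories = {"f1": "traditional", "f2": "commercial",
              "f3": "commercial", "f4": "traditional"}

# Spatial rule: influence on farmer f1 comes from farmers on adjacent parcels.
spatial_neighbours = ["f2", "f4"]
# Network rule: influence comes from kinship and market ties, wherever they farm.
network_ties = ["f3", "f2", "f4"]

print(norm_from_ties(categories, spatial_neighbours))  # -> traditional
print(norm_from_ties(categories, network_ties))        # -> commercial
```

With identical attitudes among the farmers, the two rules can give different answers for the same individual, which is exactly why the choice of influence structure matters.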

If I’d been more aware of SNA previously I may have incorporated some discussion of it into the book chapter I re-wrote recently for Environmental Modelling. In that chapter I focused on the increasing importance of behavioural economics for investigating and modelling the relationships between human activity and the environment. SNA is certainly something to add to the toolbox and seems to be on the rise in natural resources research. Something else I missed whilst re-writing that chapter was the importance of behavioural economics to David Cameron‘s ‘Big Society’ idea. He seems to be aware of the lessons we’ve started learning from things like social network analysis and behavioural economics – now he’s in charge maybe we’ll start seeing some direct application of those lessons to UK public policy.

Putting decision-making in context

A while back I wrote about how it takes all sorts to make a world and why we need to account for those different sorts in our models of it. One of the things that I highlighted in that post was the need for mainstream economics to acknowledge and use more of the findings from behavioural economists.

One of the examples I used in the draft of the book chapter I have been writing for the second edition of Wainwright and Mulligan’s Environmental Modelling was the paper by Tversky and Kahneman, The Framing of Decisions and the Psychology of Choice. They showed how the way in which a problem is framed can influence human decision-making and causes problems for rational choice theory. In one experiment Tversky and Kahneman asked people if they would buy a $10 ticket on arriving at the theatre when finding themselves in two different situations:

i) they find they have lost $10 on the way to the theatre,
ii) they find they have lost their pre-paid $10 ticket.

In both situations the person has lost the value of the ticket ($10) and under neoclassical economic assumptions should behave the same when deciding whether to buy a ticket when arriving at the theatre. However, Tversky and Kahneman found that people were more likely to buy a ticket in the first situation (88%) than buying a (replacement) ticket in the second (46%). They suggest this behaviour is due to human ‘psychological accounting’, in which we mentally allocate resources to different purposes. In this case people are less willing to spend money again on something they have already allocated to their ‘entertainment account’ than if they have lost money which they allocate to their ‘general expenses account’.

More recently, Galinsky and colleagues examined how someone else’s irrational thought processes can influence our own decision-making. In their study they asked college students to take over decision-making for a fictitious person they had never met (the students were unaware the person was fictitious).

In one experiment, the volunteers watched the following scenario play out via text on a computer screen: the fictitious decision-maker tried to outbid another person for a prize of 356 points, which equaled $4.45 in real money. The decision-maker started out with 360 points, and every time the other bidder upped the ante by 40 points, the decision-maker followed suit. Volunteers were told that once the decision-maker bid over 356 points, he or she would begin to lose some of the $12 payment for participating in the study.

When the fictitious decision-maker neared this threshold, the volunteers were asked to take over bidding. Objectively, the volunteers should have realized that – like the person who makes a bad investment in a ‘fixer-upper’ – the decision-maker would keep throwing good money after bad. But the volunteers who felt an identification with the fictitious player (i.e., those told by the researchers that they shared the same month of birth or year in school) made almost 60% more bids and were more likely to lose money than those who didn’t feel a connection.

Are we really surprised that neoclassical economic models often fall down? Accounting for seemingly irrational human behaviour may make the representation of human decision-making more difficult, but increasingly it seems irrational not to do so.

It takes all sorts

Neoclassical economics, both its assumptions and its ability to forecast future economic activity, has been taking a bit of a panning recently. Back near the start of this most recent economic downturn, Jean-Philippe Bouchaud argued that neoclassical economists need to develop more pragmatic and realistic representations of what actually happens in ‘wild’ and messy free markets. And at the start of this year I highlighted how Niall Ferguson has stressed the importance of considering history in economic markets and decision-making. In both cases the criticism is that some economists have been blinded by the beauty of their elegant models and have failed to see where their assumptions and idealizations fail to match what’s happening in the real world. Most recently, Paul Krugman argued that ‘flaws-and-frictions economics’ (emphasizing imperfect decision-making and rejecting ideas of a perfectly free ‘friction-less’ market) must become more important. Krugman (‘friend’ of Niall Ferguson) suggests that mainstream economics needs to become more ‘behavioural’, and follow the lead of the behavioural economists that incorporate social, cognitive and emotional factors into their analyses of human decision-making.

The view from the Nature editors on all this is that in the future agent-based modelling will be an important tool to inform economic policy. In many ways agent-based modelling is very well suited to building more ‘behaviour’ into economics. For example, agent-based modelling provides the ability to represent several types of agent, each with their own rules for decision-making, potentially based on their own life-histories and circumstances (this in contrast to the single, perfectly rational ‘representative agent’ of neoclassical economics). Farmer and Foley, in their opinion piece in the same issue of Nature, are keen:

“Agent-based models potentially present a way to model the financial economy as a complex system, as Keynes attempted to do, while taking human adaptation and learning into account, as Lucas advocated. Such models allow for the creation of a kind of virtual universe, in which many players can act in complex — and realistic — ways. … To make agent-based modelling useful we must proceed systematically, avoiding arbitrary assumptions, carefully grounding and testing each piece of the model against reality and introducing additional complexity only when it is needed. Done right, the agent-based method can provide an unprecedented understanding of the emergent properties of interacting parts in complex circumstances where intuition fails.”

At the very least, our agent-based models need to improve upon the homogenizing assumptions of neoclassical economics. It takes all sorts to make a world — we need to do a better job of accounting for those different sorts in our models of it.
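To give a flavour of what ‘several types of agent, each with their own rules’ might look like in code, here is a small, entirely hypothetical sketch: two agent types respond differently to the same price signal, and the aggregate outcome emerges from their interaction rather than from a single representative decision rule.

```python
# Hypothetical sketch of agent heterogeneity: two decision rules, one market signal.
import random

random.seed(1)

class Imitator:
    """Buys if most agents bought last period (follows the crowd)."""
    def decide(self, price, last_share_buying):
        return last_share_buying > 0.5

class ValueBuyer:
    """Buys only if the price is below a private reservation value."""
    def __init__(self):
        self.reservation = random.uniform(8, 12)
    def decide(self, price, last_share_buying):
        return price < self.reservation

agents = [Imitator() for _ in range(50)] + [ValueBuyer() for _ in range(50)]

share_buying = 0.6  # initial condition
for price in [9.0, 10.5, 11.5]:
    decisions = [a.decide(price, share_buying) for a in agents]
    share_buying = sum(decisions) / len(agents)
    print(f"price {price}: {share_buying:.0%} of agents buy")
```

A representative-agent version would collapse all of this into one optimising rule and lose the feedback between the two groups.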

What is the point… of social simulation modelling?

Previously, I mentioned a thread on SIMSOC initiated by Scott Moss. He asked ‘Does anyone know of a correct, real-time, [agent] model-based, policy-impact forecast?’. Following on from the responses to that question, earlier this week he started a new thread entitled ‘What’s the Point?’:

“We already know that economic recessions and recoveries have probably never been forecast correctly — at least no counter-examples have been offered. Similarly, no financial market crashes or recoveries or significant shifts in market shares have ever, as far as we know, been forecast correctly in real time.

I believe that social simulation modelling is useful for reasons I have been exploring in publications for a number of years. But I also recognise that my beliefs are not widely held.

So I would be interested to know why other modellers think that modelling is useful or, if not useful, why they do it.”

After reading others’ responses I decided to reply with my own view:

“For me prediction of the future is only one facet of modelling (whether agent-based or any other kind) and not necessarily the primary use, especially with regard to policy modelling. This view stems partly from the philosophical difficulties outlined by Oreskes et al. (1994), amongst others. I agree with Mike that the field is still in the early stages of development, but I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world’. As Pablo suggested, if we are to predict the future the inherent uncertainties will be best highlighted and accounted for by ensuring predictions are tied to a probability.”

I also highlighted the reasons offered by Epstein and outlined a couple of other reasons I think ABM are useful.

There was a brief response to mine, and then another, more assertive, response that (I think) highlights a common confusion between the different uses of prediction in modelling:

“If models of economic policy are fundamentally unable to at some point predict the effects of policy — that is, to in some measure predict the future — then, to be blunt, what good are they? If they are unable to be predictive then they have no empirical, practical, or theoretical value. What’s left? I ask that in all seriousness.

Referring to Epstein’s article, if a model is not sufficiently grounded to show predictive power (a necessary condition of scientific results), then how can it be said to have any explanatory power? Without prediction as a stringent filter, any amount of explanation from a model becomes equivalent to a “just so” story, at worst giving old suppositions the unearned weight of observation, and at best hitting unknowably close to the mark by accident. To put that differently, if I have a model that provides a neat and tidy explanation of some social phenomena, and yet that model does not successfully replicate (and thus predict) real-world results to any degree, then we have no way of knowing if it is more accurate as an explanation than “the stars made it happen” or any other pseudo-scientific explanation. Explanations abound; we have never been short of them. Those that can be cross-checked in a predictive fashion against hard reality are those that have enduring value.

But the difficulty of creating even probabilistically predictive models, and the relative infancy of our knowledge of models and how they correspond to real-world phenomena, should not lead us into denying the need for prediction, nor into self-justification in the face of these difficulties. Rather than a scholarly “the dog ate my homework,” let’s acknowledge where we are, and maintain our standards of what modeling needs to do to be effective and valuable in any practical or theoretical way. Lowering the bar (we can “train practitioners” and “discipline policy dialogue” even if we have no way of showing that any one model is better than another) does not help the cause of agent-based modeling in the long run.”

I felt this required a response – it seemed to me that the difference between logical prediction and temporal prediction was being missed:

“In my earlier post I wrote: “I’m less confident about ever being able to precisely predict future systems states in the open systems of the ‘real world'”. I was careful about how I worded this [more careful than ensuring correct formatting of the post it seems – my original post is below in a more human-readable format] and maybe some clarification in the light of Mike’s comments would be useful. Here goes…

Precisely predicting the future state of an ‘open’ system at a particular instance in time does not imply we have explained or understand it (due to the philosophical issues of affirming the consequent, equifinality, underdetermination, etc.). To be really useful for explanation and to have enduring value model predictions of any system need to be cross-checked against hard reality *many times*, and in the case of societies probably also in many places (and should ideally be produced by models that are consistent with other theories). Producing multiple accurate predictions will be particularly tricky for things like the global economy, for which we only have one example (but of course will be easier where experimental replication is more logistically feasible).

My point is two-fold:
1) a single, precise prediction of a future does not really mean much with regard our understanding of an open system,
2) multiple precise predictions are more useful but will be more difficult to come by.

This doesn’t necessarily mean that we will never be able to consistently predict the future of open systems (in Scott’s sense of correctly forecasting the timing and direction of change of specified indicators). I just think it’s a ways off yet, that there will always be uncertainty, and that we need to deal with this uncertainty explicitly via probabilistic output from model ensembles and other methods. Rather than lowering standards, a heuristic use of models demands we think more closely about *how* we model and what information we provide to policy makers (isn’t that the point of modelling policy outcomes in the end?).

Let’s be clear, the heuristic use of models does not allow us to ignore the real world – it still requires us to compare our model output with empirical data. And as Mike rightly pointed out, many of Epstein’s reasons to model – other than to predict – require such comparisons. However, the scientific modelling process of iteratively comparing model output with empirical data and then updating our models is a heuristic one – it does not require that precise prediction at a specific point in the future is the goal before all others.

Lowering any level of standards will not help modelling – but I would argue that understanding and acknowledging the limits of using modelling in different situations in the short-term will actually help to improve standards in the long run. To develop this understanding we need to push models and modelling to their limits to find out what works, what we can do and what we can’t – that includes iteratively testing the temporal predictions of models. Iteratively testing models, understanding the philosophical issues of attempting to model social systems, and exploring the use of models and modelling qualitatively (as a discussant, a communication tool, etc.) should help modellers improve the information, the recommendations, and the working relationships they have with policy-makers.

In the long run I’d argue that both modellers and policy-makers will benefit from a pragmatic and pluralistic approach to modelling – one that acknowledges there are multiple approaches and uses of models and modelling to address societal (and environmental) questions and problems, and that [possibly self evidently] in different situations different approaches will be warranted. Predicting the future should not be the only goal of modelling social (or environmental) systems and hopefully this thread will continue to throw up alternative ideas for how we can use models and the process of modelling.”
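To illustrate what ‘probabilistic output from model ensembles’ might mean in practice, here is a toy sketch (not tied to any particular policy model, and with made-up parameters): the same simple model is run many times with parameter values drawn from plausible ranges, and the result is reported as a probability of crossing a threshold rather than as a single predicted value.

```python
# Toy ensemble sketch: report a probability, not a single prediction.
import random

random.seed(42)

def toy_model(growth_rate, shock_sd, steps=10, start=100.0):
    """A deliberately simple indicator model with stochastic shocks."""
    value = start
    for _ in range(steps):
        value *= 1 + growth_rate + random.gauss(0, shock_sd)
    return value

THRESHOLD = 120.0  # e.g. the indicator level a policy is aiming for
runs = 1000
hits = 0
for _ in range(runs):
    growth = random.uniform(0.00, 0.04)  # uncertain parameter
    shock = random.uniform(0.01, 0.03)   # uncertain parameter
    if toy_model(growth, shock) >= THRESHOLD:
        hits += 1

print(f"P(indicator >= {THRESHOLD} after 10 steps) = {hits / runs:.2f}")
```

The point is less the particular numbers than the form of the output: a statement of likelihood that can be interrogated, rather than a single trajectory presented as the future.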

Note that I didn’t explicitly point out the difference between the two different uses of prediction (that Oreskes and others have previously highlighted). It took Dan Olner a couple of posts later to explicitly describe the difference:

“We need some better words to describe model purpose. I would distinguish two –

a. Forecasting (not prediction) – As Mike Sellers notes, future prediction is usually “inherently probabilistic” – we need to know whether our models can do any better than chance, and how that success tails off as time passes. Often when we talk about “prediction” this is what we mean – prediction of a more-or-less uncertain future. I can’t think of a better word than forecasting.

b. Ontological prediction (OK, that’s two words!) – a term from Gregor Betz, Prediction Or Prophecy (2006). He gives the example of the prediction of Neptune’s existence from Newton’s laws – Uranus’ orbit implied that another body must exist. Betz’s point is that an ontological prediction is “timeless” – the phenomenon was always there. Einstein’s prediction about light bending near the sun is another: something that always happened, we just didn’t think to look for it. (And doubtless Eddington wouldn’t have considered *how* to look, without the theory.)

In this sense forecasting (my temporal prediction) is distinctly temporal (or spatial) and demands some statement about when (or where) an event or phenomenon will occur. In contrast, ontological prediction (my logical prediction) is independent of time and/or space and is often used in closed-system experiments searching for ‘universal’ laws. I wrote more about this in a series of blog posts a while back on the validation of models of open systems.

This discussion is ongoing on SIMSOC and Scott Moss has recently posted again suggesting a summary of the early responses:

“I think a perhaps extreme summary of the common element in the responses to my initial question (what is the point?, 9/6/09) is this:

**The point of modelling is to achieve precision as distinct from accuracy.**

That is, a model is a more or less complicated formal function relating a set of inputs clearly to a set of outputs. The formal inputs and outputs should relate unambiguously to the semantics of policy discussions or descriptions of observed social states and/or processes.

This precision has a number of virtues including the reasons for modelling listed by Josh Epstein. The reasons offered by Epstein and expressed separately by Lynne Hamill in her response to my question include the bounding and informing of policy discussions.

I find it interesting that most of my respondents do not consider accuracy to be an issue (though several believe that some empirically justified frequency or even probability distributions can be produced by models). And Epstein explicitly avoids using the term validation in the sense of confirmation that a model in some sense accurately describes its target phenomena.

So the upshot of all this is that models provide a kind of socially relevant precision. I think it is implicit in all of the responses (and the Epstein note) that, because of the precision, other people should care about the implications of our respective models. This leads to my follow-on questions:

Is precision a good enough reason for anyone to take seriously anyone else’s model? If it is not a good enough reason, then what is?”

And so arises the debate about the importance of accuracy over precision (but the original ‘What is the point’ thread continues also). In hindsight, I think it may have been more appropriate for me to use the word accurate than precise in my postings. All this debate may seem to be just semantics and navel-gazing to many people, but as I argued in my second post, understanding the underlying philosophical basis of modelling and representing reality (however we might measure or perceive it) gives us a better chance of improving models and modelling in the long run…

Developing Sustainable Lifestyles

It can be hard not to abandon hope for a sustainable future when you read about our rapidly growing global population and the hopes of those in the developing world (growing the fastest) to lead more ‘western’ lifestyles. For ‘western’, read ‘consumptive’. Last year Jared Diamond came up with new numbers to make us feel even more hopeless: economically more developed countries are consuming resources and producing waste 32 times faster than less developed countries. That means, Diamond estimates, that if everyone on earth were to eat as much meat, drive their cars as far and use electricity as prodigiously as Europeans, Americans and Japanese currently do, it would be as if the human population had suddenly ballooned to 72 billion.

In an editorial in the latest issue of Conservation Biology, R. Edward Grumbine and Jianchu Xu use Diamond’s example when discussing the rise of China as a global economic power and consumer and the potential implications for conservation, the environment and the climate debate:

“China’s rapid economic rise has not helped conservation much. The country faces severe environmental challenges as the largest human population in history builds highways, factories, and housing to fully join the modern industrial world. The PRC [People’s Republic of China], however, remains relatively poor. Per capita income in 2007 was a mere one-fifth of the U.S. average; a typical American teenager has more discretionary income than the total annual salary of the average Chinese citizen.

Despite the importance of biodiversity issues, we want to draw attention to less-discussed environmental concerns that involve China at regional and global scales and which will likely transform life for all of us over the rest of the 21st century.”

Focusing on their discussion of issues related to climate change, Grumbine and Xu point out:

“Even if the European Union and the United States magically reduced their greenhouse gas emissions to zero while you are reading this sentence, China’s current pace by itself may keep global emissions rising through 2020.

China should not be blamed for the world’s runaway greenhouse gas emissions; the United States never even ratified the Kyoto Protocol. And we emphasize that China’s development dream is not a vision exclusive to the PRC. Beyond the Middle Kingdom, there are at least 1.2 billion people desiring cars, a decent house attached to a sewer system, potable water, and a fair measure of education and health care.”

The consequences of China, and other poorer nations, realising their hopes of economic development?

“China and the rest of the less-developed world are driving wealthy countries toward a global reckoning with the fossil-fuel-powered, high-consumption, industrial way of life.

… The Tyndall Centre for Climate Change Research in the United Kingdom has estimated that some 23% of China’s total emissions result from net exports to the developed world. The Earth’s atmosphere bears a message: we are all in this together. China and climate change have collapsed us and them into we.”

Grumbine and Xu reckon China is poised to assume a leadership role in solving our international environmental problems despite, or maybe as a consequence of, its rapidly growing population and ecological footprint. The US government also now seems to recognise that we’re all in this together. In February, US Secretary of State Hillary Clinton set out to discuss these issues during her visit to China, and it appears her path may have been previously beaten (behind closed doors) during the preceding administration. In vowing to “restore science to its rightful place”, President Obama named Nobel Prize laureate Steven Chu as his Energy Secretary. However, it seems that despite wanting to put science first, domestic political opposition to emissions cuts and to changes in the US energy mix is hindering these moves. Chu said recently to the BBC:

“As someone very concerned about climate I want to be as aggressive as possible but I also want to get started. And if we say we want something much more aggressive on the early timescales that would draw considerable opposition and that would delay the process for several years. … But if I am going to say we need to do much, much better I am afraid the US won’t get started.”

However, Chu went on to discuss his aims for a “massive programme of efficiency for commercial buildings”, vastly improved cost-effectiveness of solar energy, and an interconnected wind power grid. The Obama climate change bill is making progress, but the slow movement on energy policy because of domestic resistance to change has potential global consequences. If the economically more developed countries of the world cannot show that their populations are willing and able to change their lifestyles to be less consumptive, negotiations with developing countries will be hindered.

Pressure from lower levels of government will help push things along. Last week 178 Michigan scientists (including myself) signed a letter to the Michigan Congressional delegation calling for strong and effective federal climate change policies. And scientists can (and need to) do more than just write letters and do their basic (physical) research in their laboratories and at their computers. Reiterating his commitment to science in an address to the National Academy of Sciences, President Obama asked scientists and academics to engage in society to inspire and enable people “to be makers of things, not just consumers of things”.

A paper by David Pimentel and colleagues, entitled Energy efficiency and conservation for individual Americans, provides some solid numbers and ideas about how we as individual citizens in the economically more developed world can modify our residential energy use, reduce the impact of personal transport, and make informed decisions about what we eat. I’ve listed some of their more interesting suggestions for a sustainable lifestyle below. These are rational and effective ways we can change our lifestyles to live more sustainably and show that we are willing to share the responsibility of mitigating the human impact on the global environment. If we don’t want to be left with mere hope for a sustainable future, we need to show how others in the world can realise their hopes of development whilst conserving energy, water and our other natural resources.

Residential Energy Conservation

  • Improve and upgrade windows – 25% of residential heating and cooling energy is lost directly through single pane windows
  • Plant trees – deciduous on south to shade the house in the summer and allow full-sun in the winter, evergreen trees to the north can act as a wind-break
  • Use the microwave – it’s the most efficient way to steam, boil, and bake vegetables
  • Power-down your computer when it’s not in use – “computers should be turned off if the unit will be left for 2 hours or more and if left for 30 min the machine should be set in standby mode”

Pimentel and colleagues suggest that implementing these, and other, measures around the home would save around 5,600 kWh/year, resulting in savings of about $390/year on home energy costs.
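As a quick back-of-the-envelope check (my arithmetic, not a figure from the paper), those two numbers imply an electricity price of roughly 7 cents per kWh:

```python
# Implied energy price from the quoted savings (my calculation, not the paper's).
energy_saved_kwh = 5600  # kWh/year, as quoted
money_saved_usd = 390    # $/year, as quoted

print(f"Implied price: ${money_saved_usd / energy_saved_kwh:.3f} per kWh")  # ~$0.070/kWh
```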

Personal Transport

  • Drive slower – “A reduction in speed from 104 kmph (65 mph) to 86 kmph (55 mph) will reduce fuel consumption 19% (UrbanPlanet). For a 104 km trip, only an additional 11 min would be required if one traveled at 86 kmph. This extra 11 min would repay the person nearly $1.86 in fuel saving, or repay the person $10/h.” (see the sketch after this list)
  • Inflate your car tires properly – this will decrease the fuel consumption by up to 3%
  • Get rid of that junk in your trunk – “each 45 kg (100 pounds) of additional load in the car will reduce fuel mileage about 1%”
  • Ride your bike – bicycling uses 25 kcal/km (34 kcal/mile) compared with 938 kcal/km (1,510 kcal/mile) for a mid-sized car
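Out of curiosity, the ‘drive slower’ figures are easy to check. My own back-of-the-envelope version below uses the trip length, speeds and fuel saving quoted above; it gives roughly 13 extra minutes and about $9/h rather than the quoted 11 minutes and $10/h, but the order of magnitude is the same.

```python
# Back-of-the-envelope check of the 'drive slower' trade-off (my arithmetic).
trip_km = 104
fast_kmh, slow_kmh = 104, 86
quoted_saving_usd = 1.86  # fuel saving for the trip, as quoted

extra_hours = trip_km / slow_kmh - trip_km / fast_kmh
print(f"Extra travel time: {extra_hours * 60:.0f} minutes")                         # ~13 min
print(f"Implied 'pay' for slowing down: ${quoted_saving_usd / extra_hours:.2f}/h")  # ~$9/h
```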

In summary: “[c]urrently, the average American uses about 1,900 l (500 gallons) of fuel/year in personal transport in contrast to the average person in the United Kingdom who consumes 1,700 l (450 gallons) (Renner 2003). If Americans implement the suggestions listed above [and others I haven’t listed] over a 10-year period, it would be possible to reduce fuel oil consumption between 10% and 20% from the current 20 quads of vehicle fuel [approximately 600 billion l or about 16 billion gallons of fuel] consumed in the U.S.”

Food system

The authors highlight several ways in which farmers and policy-makers can aggressively pursue sustainable agricultural practices. They are less precise about what individuals can do but offer some general ideas:

  • Eat local products – reduces transport energy costs [and find out where you should buy your wine from here]
  • Eat less (especially less meat) – read more about meat and the environment here
  • “Select aluminum and steel packaging over glass or plastic, for energy conservation. For the same reasons, however, plastic and especially recyclable plastic should be selected instead of glass and/or paper.”

Pimentel et al. summarise: “[w]ell-directed, serious conservation strategies influenced by individuals with supportive state and federal leadership and policies will have an enormous positive impact on transitioning to a sustainable energy future for the United States.”

Peter Orszag and Economic Models

A while ago I heard this interview with Peter Orszag, Director of the US Office of Management and Budget and one of President Obama’s key economic advisors. Interestingly to me, given what I’ve written previously about quantitative models of human social and economic activity, Orszag is interested in Behavioural Economics and is somewhat skeptical about the power of mathematical models:

“Too many academic fields have tried to apply pure mathematical models to activities that involve human beings. And whenever that happens — whether it’s in economics or health care or medical science — whenever human beings are involved, an approach that is attracted by that purity will lead you astray”

That’s not to say he’s not going to use some form of model forecasting to do his job. When Jon Stewart highlights (in his own amusingly honest way) the wide range of economic model projections out there for the US deficit, Orszag points out that he needs at least some semblance of a platform from which to anchor his management of the US economy. But it’s reassuring for me to know that in managing the future this guy won’t be seduced by quantitative predictions of it.

Predicting 2009

Over the holiday period the media offer us plenty of fodder to discuss the past year’s events and what the future may hold. Whether it’s current affairs, music, sport, economics or any other aspect of human activity, most media outlets have something to say about what people did that was good, what they did that was bad, and what they’ll do next, in the hope that they can keep their sales up over the holiday period.

Every year The Economist publishes a collection of forecasts and predictions for the year ahead. The views and opinions of journalists, politicians and business people accompany interactive maps and graphs that provide numerical analysis. But how good are these forecasts and predictions? And what use are they? This year The Economist stopped to look back on how well it performed:

“Who would have thought, at the start of 2008, that the year would see crisis engulf once-sturdy names from Freddie Mac and Fannie Mae to AIG, Merrill Lynch, HBOS, Wachovia and Washington Mutual (WaMu)?

Not us. The World in 2008 failed to predict any of this. We also failed to foresee Russia’s invasion of Georgia (though our Moscow correspondent swears it was in his first draft). We said the OPEC cartel would aim to keep oil prices in the lofty range of $60-80 a barrel (the price peaked at $147 in July)…”

And on the list goes. Not that any of us are particularly surprised, are we? So why should we bother to read their predictions for the next year? In its defence, The Economist offers a couple of points. First, the usual tactic (for anyone defending their predictions) of pointing out what they actually did get right (slumping house prices, interest-rate cuts, etc). But then they highlight a perspective which I think is almost essential when thinking about predictions of future social or economic activity:

“The second reason to carry on reading is that, oddly enough, getting predictions right or wrong is not all that matters. The point is also to capture a broad range of issues and events that will shape the coming year, to give a sense of the global agenda.”

Such a view is inherently realist. Given the multitude of interacting elements and potential influences affecting economic systems, and given that it is an ‘open’ historical system, producing a precise prediction about future system states is nigh-on impossible. Naomi Oreskes has highlighted the difference between ‘logical prediction’ (if A and B then C) and ‘temporal prediction’ (event C will happen at time t + 10), and this certainly applies here [I’m surprised I haven’t written about this distinction on this blog before – I’ll try to remedy that soon]. Rather than simply developing models or predictions with the hope of accurately matching the timing and magnitude of future empirical events, I argue that we will be better placed (in many circumstances related to human social and economic activity) to use models and predictions as discussants to lead to better decision-making and as means to develop an understanding of the relevant causal structures and mechanisms at play.

In a short section of his recent book and TV series, The Ascent of Money, Niall Ferguson talks about the importance of considering history in economic markets and decision-making. He presents the example of Long Term Capital Management (LTCM) and their attempt to use mathematical models of the global economic system to guide their trading decision-making. In Ferguson’s words, their model was based on the following set of assumptions about how the system worked:

“Imagine another planet – a planet without all the complicating frictions caused by subjective, sometimes irrational human beings. One where the inhabitants were omniscient and perfectly rational; where they instantly absorbed all new information and used it to maximise profits; where they never stopped trading; where markets were continuous, frictionless and completely liquid. Financial markets on this planet would follow a ‘random walk’, meaning that each day’s prices would be quite unrelated to the previous day’s but would reflect all the relevant information available.” p.320
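The ‘random walk’ idealisation in that passage is easy to write down. The sketch below (entirely illustrative, with made-up drift and volatility) generates a price series in which each day’s change is independent of the last, which is the world the LTCM model effectively assumed it lived in.

```python
# Illustrative 'Planet Finance' price series: a simple geometric random walk
# with made-up parameters (this is not the LTCM model itself).
import random

random.seed(0)

price = 100.0
daily_drift = 0.0002  # assumed small upward drift
daily_vol = 0.01      # assumed daily volatility

series = [price]
for _ in range(250):  # roughly one trading year
    price *= 1 + random.gauss(daily_drift, daily_vol)
    series.append(price)

print(f"End-of-year price: {series[-1]:.2f}")
```

By construction a series like this has no memory, no volatility clustering and no fat tails, so the kind of large, correlated shocks seen in 1997 and 1998 are vanishingly unlikely within it.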

Using these assumptions about how the world works, the Nobel prize-winning mathematicians Myron Scholes and Robert C. Merton derived a mathematical model. Initially the model performed wonderfully, allowing returns of 40% on investments for the first couple of years. However, crises in the Asian and Russian financial systems in 1997 and 1998 – not accounted for in the assumptions of the mathematical model – resulted in LTCM losing $1.85 billion through the middle of 1998. The model assumptions were unable to account for these events, and subsequently its predictions were inaccurate. As Ferguson puts it:

“…the Nobel prize winners had known plenty of mathematics, but not enough history. They had understood the beautiful theory of Planet finance, but overlooked the messy past of Planet Earth.” p.329

When Ferguson says ‘not enough history’, his implication is that the mathematical model was based on insufficient empirical data. Had the mathematicians used data that covered the variability of the global economic system over a longer period of time, it may have included a stock market downturn similar to that caused by the Asian and Russian economic crises. But a data set for a longer time period would likely have been characterised by greater overall variability, requiring a greater number of parameters and variables to account for that variability. Whether such a model would have performed as well as the model they did produce is questionable, as is the potential to predict the exact timing and magnitude of any ‘significant’ event (e.g. a market crash).

But further, Ferguson also points out that the problem with the LTCM model wasn’t just that they hadn’t used enough data to develop it, but that their assumptions (i.e. their understanding of Planet Finance) just weren’t realistic enough to accurately predict Planet Earth over ‘long’ periods of time. Traders and economic actors are not perfectly rational and do not have access to all the data all the time. Such a situation has led (more realistic) economists to develop ideas like bounded rationality.

Assuming that financial traders try to be rational is likely not a bad assumption. But it has been pointed out that “[r]ationality is not tantamount to optimality”, and that in situations where information, memory or computing resources are not complete (as is usually the case in the real world) the principle of bounded rationality is a more worthwhile approach. For example, Herbert Simon recognised that rarely do actors in the real world optimise their behaviour, but rather they merely try to do ‘well enough’ to satisfy their goal(s). Simon termed this non-optimal behaviour ‘satisficing’, the basis for much of bounded rationality theory since. Thus, satisficing is essentially a cost-benefit tradeoff, establishing when the utility of an option exceeds an aspiration level.
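The contrast between optimising and satisficing is easy to see in code. In the hypothetical sketch below the optimiser evaluates every option before choosing, while the satisficer takes the first option whose utility clears an aspiration level and stops searching.

```python
# Hypothetical sketch: optimising vs satisficing over the same set of options.
def optimise(options, utility):
    """Evaluate everything and pick the best (assumes full information and time)."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Take the first option that is 'good enough'; the search stops there."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # no option met the aspiration level

offers = [("A", 4.0), ("B", 7.5), ("C", 9.0), ("D", 6.0)]
utility = lambda offer: offer[1]

print(optimise(offers, utility))        # ('C', 9.0)
print(satisfice(offers, utility, 7.0))  # ('B', 7.5): stops before ever seeing C
```

The satisficer’s answer depends on the order in which options are encountered and on the aspiration level, which is one way context enters decision-making that a pure optimiser never sees.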

Thinking along the same lines, George Soros has developed his own ‘Human Uncertainty Principle’. This principle “holds that people’s understanding of the world in which they live cannot correspond to the facts and be complete and coherent at the same time. Insofar as people’s thinking is confined to the facts, it is not sufficient to reach decisions; and insofar as it serves as the basis of decisions, it cannot be confined to the facts. The human uncertainty principle applies to both thinking and reality. It ensures that our understanding is often incoherent and always incomplete and introduces an element of genuine uncertainty – as distinct from randomness – into the course of events.

The human uncertainty principle bears a strong resemblance to Heisenberg’s uncertainty principle, which holds that the position and momentum of quantum particles cannot be measured at the same time. But there is an important difference. Heisenberg’s uncertainty principle does not influence the behavior of quantum particles one iota; they would behave the same way if the principle had never been discovered. The same is not true of the human uncertainty principle. Theories about human behavior can and do influence human behavior. Marxism had a tremendous impact on history, and market fundamentalism is having a similar influence today.” Soros (2003) Preface

This final point has been explored in more detail by Ian Hacking in his discussion of the differences between interactive and indifferent kinds. Both of these views (satisficing and the human uncertainty principle) implicitly understand that the context in which an actor acts is important. In the perfect world of Planet Finance and its associated mathematical models, context is non-existent.

In response to the problems encountered by LTCM, “Merrill Lynch observed in its annual reports that mathematical risk models ‘may provide a greater sense of security than warranted; therefore, reliance on these models should be limited’”. I think it is clear that humans need to make decisions (whether they be social, economic, political, or about any resource) based on human understanding derived from empirical observation. Quantitative models will help with this but cannot be used alone, partly because (as numerous examples have shown) it is very difficult to make (accurate) predictions about future human activity. Likely there are general behaviours that we can expect and use in models (e.g. the aim of traders to make a profit). But how those behaviours play out in the different contexts provided by the vagaries of day-to-day events and changes in global economic, political and physical conditions will require multiple scenarios of the future to be examined.

My personal view is that one of the primary benefits of developing quantitative models of human social and economic activity is that they allow us to make explicit our implicitly held models. Developing quantitative models forces us to be structured about our worldview – writing it down (often in computer code) allows others to scrutinise that model, something that is not possible if the model remains implicit. In some situations, such as private financial strategy-making, this openness may not be welcome (because it is not beneficial for a competitor to know your model of the world). But in other decision-making situations, for example about environmental resources, this approach will be useful to foster greater understanding about how the ‘experts’ think the world works.

By writing down their expectations for the forthcoming year the experts at The Economist are making explicit their understanding of the world. It’s not terribly important that they don’t get everything right – there’s very little possibility that will happen. What is important is that it helps us to think about potential alternative futures, what factors are likely to be most important in determining future events, how these factors and events are (inter)related, and what the current state of the world implies for the likelihood of different future states. This information might then be used to shape the future as we would like it to be, based on informed expectations. Quantitative models of human social and economic activity also offer this type of opportunity.

CHANS-Net

Towards the end of last week the MSU Environmental Science and Public Policy Program held a networking event on Coupled Human and Natural Systems (CHANS). These monthly events provide opportunities for networking around different environmental issues and last week was the turn of the area CSIS focuses on. The meeting reminded me of a couple of things I thought I would point out here.

First is the continued commitment that the National Science Foundation (NSF) is making to funding CHANS research. The third week in November will be the annual deadline for research proposals, so watch out for (particularly) tired looking professors around that time of year.

Second, I realized I haven’t highlighted on this blog one of the NSF CHANS projects currently underway at CSIS. CHANS-Net aims to develop an international network of research on CHANS to facilitate communication and collaboration among members of the CHANS research community. Central to the project is the establishment of an online meeting place for research collaboration. An early version of the website is currently in place but improvements are planned. I was asked for a few suggestions earlier this week and it made me realise how interested I am in the potential of the technologies that have arrived with web 2.0 (I suppose that interest is also clear right here in front of you on this blog). I hope to be able to continue to make suggestions and participate in the development of the site from afar (there’s too much to be doing elsewhere to get my hands really dirty on that project). Currently, only Principal Investigators (PIs) and Co-PIs on NSF-funded CHANS projects are members of the network, but hopefully opportunities for wider participation will be available in the future. In that event, I’ll post again here.

Creating a Genuine Science of Sustainability

Previously, I wrote about Orrin Pilkey and Linda Pilkey-Jarvis’ book, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future. In a recent issue of the journal Futures, Jerome Ravetz reviews their book alongside David Waltner-Toews’ The Chickens Fight Back: Pandemic Panics and Deadly Diseases That Jump From Animals to Humans. Ravetz himself points out that the subject matter and approaches of the books are rather different, but suggests that “Read together, they provide insights about what needs to be done for the creation of a genuine science of sustainability”.

Ravetz (along with Silvio Funtowicz) has developed the idea of ‘post-normal’ science – a new approach to replace the reductionist, analytic worldview of ‘normal’ science. Post-normal science is a “systemic, synthetic and humanistic” approach, useful in cases where “facts are uncertain, values in dispute, stakes high and decisions urgent”. I used some of these ideas to experiment with some alternative model assessment criteria for the socio-ecological simulation model I developed during my PhD studies. Ravetz’s perspectives toward modelling, and science in general, shone through quite clearly in his review:

“On the philosophical side, the corruption of computer models can be understood as the consequence of a false metaphysics. Following on from the prophetic teachings of Galileo and Descartes, we have been taught to believe that Science is the sole and certain path to truth. And this Science is mathematical, using quantitative data and abstract reasonings. Such a science is not merely necessary for achieving genuine knowledge (an arguable position) but is also sufficient. We are all victims of the fantasy that once we have numerical data and mathematical argument (or computer programs), truth will inevitably follow. The evil consequences of this philosophy are quite familiar in neo-classical economics where partly true banalities about markets are dressed up in the language of the differential calculus to produce justifications for every sort of expropriation of the weak and vulnerable. ‘What you can’t count, doesn’t count’ sums it all up neatly. In the present case, the rule of models extends over nearly all the policy-relevant sciences, including those ostensibly devoted to the protection of the health of people and the environment.

We badly need an effective critical philosophy of mathematical science. … Now science has replaced religion as the foundation of our established order, and in it mathematical science reigns supreme. Systematic philosophical criticism is hard to find. (The late Imre Lakatos did pioneering work in the criticism of the dogmatism of ‘modern’ abstract mathematics but did not focus on the obscurities at the foundations of mathematical thinking.) Up to now, mathematical freethinking is mainly confined to the craftsmen, with their jokes of the ‘Murphy’s Law’ sort, best expressed in the acronym GIGO (Garbage In, Garbage Out). And where criticism is absent, corruption of all sorts, both deliberate and unaware, is bound to follow. Pseudo-mathematical reasonings about the unthinkable helped to bring us to the brink of nuclear annihilation a half-century ago. The GIGO sciences of computer models may well distract us now from a sane approach to coping with the many environmental problems we now face. The Pilkeys have done us a great service in providing cogent examples of the situation, and indicating some practical ways forward.”

Thus, Ravetz finds a little more value in the Useless Arithmetic book than I did. But equally, he highlights that the Pilkeys offer few, rather vague, solutions and instead turns to Waltner-Toews’ book for inspiration for the future:

“Pilkey’s analysis of the corruptions of misconceived reductionist science shows us the depth of the problem. Waltner-Toews’ narrative about ourselves in our natural context (not always benign!) indicates the way to a solution.”

Using the outbreak of avian flu as an example of how to tackle complex environmental problems in the ‘risk society’ in which we now live, Waltner-Toews:

“… makes it very plain that we will never ‘conquer’ disease. Considering just a single sort of disease, the ‘zoonoses’ (deriving from animals), he becomes a raconteur of bio-social-cultural medicine …

What everyone learned, or should have learned, from the avian flu episode is that disease is a very complex entity. Judging from TV adverts for antiseptics, we still believe that the natural state of things is to be germ-free, and all we need to do is to find the germs and kill them. In certain limiting cases, this is a useful approximation to the truth, as in the case of infections of hospitals. But even there complexity intrudes … “

Complexity which demands an alternative perspective that moves beyond the next stage of ‘normal’ science to a post-normal science (to play on Kuhn’s vocabulary of paradigm shifts):

“That old simple ‘kill the germ’ theory may now be derided by medical authorities as something for the uneducated public and their media. But the practice of environmental medicine has not caught up with these new insights.

The complexity of zoonoses reflects the character of our interaction with all those myriads of other species. … the creatures putting us at risk are not always large enough to be fenced off and kept at a safe distance. … We can do all sorts of things to control our interactions with them, but one thing is impossible: to stamp them out, or even to kill the bad ones and keep the good ones.

Waltner-Toews is quite clear about the message, and about the sort of science that will be required, not merely for coexisting with zoonoses but also for sustainable living in general. Playing the philological game, he reminds us that the ancient Indo-European word for earth, dgghem, gave us, along with ‘humus’, all of ‘human’, ‘humane’ and ‘humble’. As he says, community by community, there is a new global vision emerging whose beauty and complexity and mystery we can now explore thanks to all our scientific tools.”

This global vision is a post-normal vision. It applies to far more than just avian flu – from coastal erosion and the disposal of toxic or radioactive waste (as the Pilkeys discuss, for example) to climate change. This post-normal vision focuses on uncertainty, value loading, and a plurality of legitimate perspectives that demands an “extended peer community” to evaluate the knowledge generated and decisions proposed.

“In all fairness, it would not be easy to devise a conventional science-based curriculum in which Waltner-Toews’ insights could be effectively conveyed. For his vision of zoonoses is one of complexity, intimacy and contingency. To grasp it, one needs to have imagination, breadth of vision and humility, not qualities fostered in standard academic training. …”

This post-normal science won’t be easy and won’t be learned or fostered entirely within the esoteric confines of an ivory tower. Science, with its logical rigour, is important. It is still the best game in town. But the knowledge produced by ‘normal’ science is provisional and its march toward truth seems Sisyphean when confronted with the immediacy of complex contemporary environmental problems. To contribute to the production of a sustainable future, a genuine science of sustainability would do well to adopt a more post-normal stance toward its subject.

Columbia University Press Sale


Columbia University Press currently has a sale on. They have savings of up to 80% on more than 1,000 titles from several fields of study. I was particularly interested in their books in the Environmental Studies and Ecology section and purchased several:

Previously on this blog I reviewed another book they have on sale, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future by Orrin H. Pilkey and Linda Pilkey-Jarvis.

When I get round to reading this new batch I’ll review some of these also (at first glance the Wiens et al. book looks particularly useful for any Landscape Ecologist – student, teacher or researcher). You’ve got up until May 31st to order yours.