Publishing in Geography

Got a Geography paper you want to publish? You would do well to read the RGS guide to publishing in Geography. In fact, it’s got some good tips for anyone wanting to learn more about publishing in academia. And if you really aren’t bothered about academia or publishing you should still check it out because it has one of the nicest online document readers I’ve seen in a while.

Reading the RGS guide gave me the idea that maybe I should write up my blog post on David Demeritt’s TIBG Boundary Crossing piece for submission as a commentary. So I’ve been reading and thinking about that and will hopefully have something submitted in February. I’ve also been asked to help re-write the Human Decision-Making chapter of Wainwright and Mulligan’s Environmental Modelling ready for its second edition. I’ll be working on that throughout 2009.

Other things I’ve been working on recently are the spatial deer density modelling manuscript (in draft) and the deer browse/mesic conifer planting experiment (also in draft). I’ve nearly completed the revisions for the paper on my Landscape Fire Succession Model and should be able to return it to EMS soon. The Mind, the Gap paper still isn’t back from the reviewers, and who knows when I’ll ever get round to looking at the narratives paper again.

Not this weekend that’s for sure – Saturday is paper revisions and then on Sunday we’re heading north to our Michigan UP study area to meet with the timber companies (Plum Creek and American Forest Management) that have helped us with our fieldwork over the last two summers. Between the meetings we’ll drive through the study area and maybe jump out at one or two of our sites to take a look at them in the winter snow. I’ve been up there during Spring, Summer and Autumn, so this trip will check off my final season. I’ll take my camera and hopefully have a few pictures to post here next week.

Winter White-Tailed Deer Density Paper

First week back in CSIS after the holiday and I got cracking with the winter white-tailed deer density paper we’re working on. Understanding the winter spatial distribution of deer is important for the wider simulation modelling project because the model needs to be able to estimate deer densities at each model timestep. We need those estimates so that we can represent the impacts of deer on tree regeneration following timber harvest in the simulation model. The paper will draw on data from several sources (a rough sketch of how they might come together follows the list):

  1. data we collected this summer regarding forest stand composition and structure,
  2. similar data kindly shared with us by the Michigan DNR,
  3. estimates of deer density derived from deer pellet counts we also made this year,
  4. other environmental data such as snow depth data from SNODAS.
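
None of this is settled yet, but to give a flavour of the kind of analysis we have in mind, a first pass might simply regress plot-level deer density estimates against stand structure, landscape and snow predictors. The sketch below is purely illustrative – the file and column names are invented placeholders, not our actual variables:

    # A minimal, illustrative sketch (not the actual analysis): regress plot-level
    # deer density estimates on stand structure, landscape and snow predictors.
    # File and column names are hypothetical placeholders.
    import pandas as pd
    import statsmodels.formula.api as smf

    plots = pd.read_csv("up_deer_plots.csv")  # one row per field plot

    # Ordinary least squares with an interaction between snow depth and conifer cover
    model = smf.ols(
        "deer_density ~ basal_area + conifer_pct + snow_depth"
        " + snow_depth:conifer_pct + dist_to_lowland",
        data=plots,
    ).fit()

    print(model.summary())  # which predictors explain winter deer density?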

Here’s my first stab at the opening paragraph (which will no doubt change before publication):

Spatial distributions of wildlife species in forest landscapes are known to be influenced by forest-cover composition and pattern. The influence of forest stand structure on the spatial distribution of wildlife is less well understood. However, understanding the spatial distribution of herbivorous ungulate species that modify vegetation regeneration dynamics is vital for forest managers entrusted with the goal of ensuring both ecological and economic sustainability of their forests. Feedbacks between timber harvest, landscape pattern, stand structure, and herbivore population density may lead to spatial variation in tree regeneration success. In this paper we explore how forest stand structure and landscape pattern, and their interactions with other environmental factors, can be used to predict and understand the winter spatial distribution of white-tailed deer (Odocoileus virginianus) in the managed forests of the central Upper Peninsula (U.P.) of Michigan, USA.

I’ll update the status of the paper here periodically.

Predicting 2009

Over the holiday period the media offer us plenty of fodder to discuss the past year’s events and what the future may hold. Whether it’s current affairs, music, sport, economics or any other aspect of human activity, most media outlets have something to say about what people did that was good, what they did that was bad, and what they’ll do next, in the hope that they can keep their sales up over the holiday period.

Every year The Economist publishes a collection of forecasts and predictions for the year ahead. The views and opinions of journalists, politicians and business people accompany interactive maps and graphs that provide numerical analysis. But how good are these forecasts and predictions? And what use are they? This year The Economist stopped to look back on how well it performed:

“Who would have thought, at the start of 2008, that the year would see crisis engulf once-sturdy names from Freddie Mac and Fannie Mae to AIG, Merrill Lynch, HBOS, Wachovia and Washington Mutual (WaMu)?

Not us. The World in 2008 failed to predict any of this. We also failed to foresee Russia’s invasion of Georgia (though our Moscow correspondent swears it was in his first draft). We said the OPEC cartel would aim to keep oil prices in the lofty range of $60-80 a barrel (the price peaked at $147 in July)…”

And on the list goes. Not that any of us are particularly surprised, are we? So why should we bother to read their predictions for the next year? In its defence, The Economist offers a couple of points. First, the usual tactic (for anyone defending their predictions) of pointing out what they actually did get right (slumping house prices, interest-rate cuts, etc). But then they highlight a perspective which I think is almost essential when thinking about predictions of future social or economic activity:

“The second reason to carry on reading is that, oddly enough, getting predictions right or wrong is not all that matters. The point is also to capture a broad range of issues and events that will shape the coming year, to give a sense of the global agenda.”

Such a view is inherently realist. Given the multitude of interacting elements and potential influences affecting economic systems, and given that the economy is an ‘open’ historical system, producing a precise prediction about future system states is nigh-on impossible. Naomi Oreskes has highlighted the difference between ‘logical prediction’ (if A and B then C) and ‘temporal prediction’ (event C will happen at time t + 10), and this distinction certainly applies here [I’m surprised I haven’t written about it on this blog before – I’ll try to remedy that soon]. Rather than simply developing models or predictions with the hope of accurately matching the timing and magnitude of future empirical events, I argue that we will be better placed (in many circumstances related to human social and economic activity) to use models and predictions as discussants to lead to better decision-making and as a means to develop an understanding of the relevant causal structures and mechanisms at play.

In a short section of his recent book and TV series, The Ascent of Money, Niall Ferguson talks about the importance of considering history in economic markets and decision-making. He presents the example of Long Term Capital Management (LTCM) and their attempt to use mathematical models of the global economic system to guide their trading decision-making. In Ferguson’s words, their model was based on the following set of assumptions about how the system worked:

“Imagine another planet – a planet without all the complicating frictions caused by subjective, sometimes irrational human beings. One where the inhabitants were omniscient and perfectly rational; where they instantly absorbed all new information and used it to maximise profits; where they never stopped trading; where markets were continuous, frictionless and completely liquid. Financial markets on this planet would follow a ‘random walk’, meaning that each day’s prices would be quite unrelated to the previous day’s but would reflect all the relevant information available.” p.320
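
To make the ‘random walk’ assumption concrete, here is a minimal sketch (my own illustration, nothing to do with LTCM’s actual model) of the idea that each day’s price movement is independent of the last:

    # Minimal illustration of the 'random walk' assumption: each day's return is
    # drawn independently of every previous day, so history carries no signal.
    import numpy as np

    rng = np.random.default_rng(42)
    days = 250                                                   # roughly one trading year
    daily_returns = rng.normal(loc=0.0, scale=0.01, size=days)   # ~1% daily volatility (invented)
    prices = 100 * np.cumprod(1 + daily_returns)                 # price path starting at 100

    print(f"Price after {days} days: {prices[-1]:.2f}")

On these assumptions the past tells you nothing about tomorrow beyond the statistical properties of the distribution – which is precisely why a sufficiently unusual event lies outside anything such a model can anticipate.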

Using these assumptions about how the world works, the Nobel prize-winning mathematicians Myron Scholes and Robert C. Merton derived a mathematical model. Initially the model performed wonderfully, allowing returns of 40% on investments for the first couple of years. However, crises in the Asian and Russian financial systems in 1997 and 1998 – not accounted for in the assumptions of the mathematical model – resulted in LTCM losing $1.85 billion through the middle of 1998. The model assumptions were unable to account for these events, and consequently its predictions were inaccurate. As Ferguson puts it:

“…the Nobel prize winners had known plenty of mathematics, but not enough history. They had understood the beautiful theory of Planet finance, but overlooked the messy past of Planet Earth.” p.329

When Ferguson says ‘not enough history’, his implication is that the mathematical model was based on insufficient empirical data. Had the mathematicians used data covering the variability of the global economic system over a longer period of time, it might have included a stock market downturn similar to that caused by the Asian and Russian economic crises. But a data set for a longer time period would likely have been characterised by greater overall variability, requiring a greater number of parameters and variables to account for that variability. Whether such a model would have performed as well as the model they did produce is questionable, as is the potential to predict the exact timing and magnitude of any ‘significant’ event (e.g. a market crash).

Further, Ferguson points out that the problem with the LTCM model wasn’t just that they hadn’t used enough data to develop it, but that their assumptions (i.e. their understanding of Planet Finance) just weren’t realistic enough to accurately predict Planet Earth over ‘long’ periods of time. Traders and economic actors are not perfectly rational and do not have access to all the data all the time. Such a situation has led (more realistic) economists to develop ideas like bounded rationality.

Assuming that financial traders try to be rational is likely not a bad assumption. But it has been pointed out that “[r]ationality is not tantamount to optimality”, and that in situations where information, memory or computing resources are not complete (as is usually the case in the real world) the principle of bounded rationality is a more worthwhile approach. For example, Herbert Simon recognised that actors in the real world rarely optimise their behaviour; rather, they merely try to do ‘well enough’ to satisfy their goal(s). Simon termed this non-optimal behaviour ‘satisficing’, and it has been the basis for much bounded rationality theory since. Thus, satisficing is essentially a cost-benefit tradeoff, establishing when the utility of an option exceeds an aspiration level.
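
As a toy illustration of the difference (entirely my own, with made-up numbers), compare an optimiser, who evaluates every option and takes the best, with a satisficer, who stops at the first option whose utility clears an aspiration level:

    # Toy contrast between optimising and satisficing over the same set of options.
    options = {"A": 3.0, "B": 7.0, "C": 9.0, "D": 6.5}  # option -> utility (made up)

    def optimise(utilities):
        """Evaluate everything and take the best (assumes full information and time)."""
        return max(utilities, key=utilities.get)

    def satisfice(utilities, aspiration):
        """Take the first option that is 'good enough', in the order encountered."""
        for option, utility in utilities.items():
            if utility >= aspiration:
                return option
        return None  # nothing met the aspiration level

    print(optimise(options))        # 'C' - the global best
    print(satisfice(options, 6.0))  # 'B' - good enough, found with less search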

Thinking along the same lines George Soros has developed his own ‘Human Uncertainty Principle’. This principle “holds that people’s understanding of the world in which they live cannot correspond to the facts and be complete and coherent at the same time. Insofar as people’s thinking is confined to the facts, it is not sufficient to reach decisions; and insofar as it serves as the basis of decisions, it cannot be confined to the facts. The human uncertainty principle applies to both thinking and reality. It ensures that our understanding is often incoherent and always incomplete and introduces an element of genuine uncertainty – as distinct from randomness – into the course of events.

The human uncertainty principle bears a strong resemblance to Heisenberg’s uncertainty principle, which holds that the position and momentum of quantum particles cannot be measured at the same time. But there is an important difference. Heisenberg’s uncertainty principle does not influence the behavior of quantum particles one iota; they would behave the same way if the principle had never been discovered. The same is not true of the human uncertainty principle. Theories about human behavior can and do influence human behavior. Marxism had a tremendous impact on history, and market fundamentalism is having a similar influence today.” Soros (2003) Preface

This final point has been explored in more detail by Ian Hacking in his discussion of the differences between interactive and indifferent kinds. Both of these views (satisficing and the human uncertainty principle) implicitly recognise that the context in which an actor acts is important. In the perfect world of Planet Finance and its associated mathematical models, context is non-existent.

In response to the problems encountered by LTCM, Merrill Lynch observed in its annual reports that mathematical risk models “may provide a greater sense of security than warranted; therefore, reliance on these models should be limited”. I think it is clear that humans need to make decisions (whether they be social, economic, political, or about any resource) based on human understanding derived from empirical observation. Quantitative models will help with this but cannot be used alone, partly because (as numerous examples have shown) it is very difficult to make (accurate) predictions about future human activity. Likely there are general behaviours that we can expect and use in models (e.g. the aim of traders to make a profit). But how those behaviours play out in the different contexts provided by the vagaries of day-to-day events and changes in global economic, political and physical conditions will require multiple scenarios of the future to be examined.

My personal view is that one of the primary benefits of developing quantitative models of human social and economic activity is that they allow us to make explicit our implicitly held models. Developing quantitative models forces us to be structured about our worldview – writing it down (often in computer code) allows others to scrutinise that model, something that is not possible if the model remains implicit. In some situations, such as private financial strategy-making, this openness may not be welcome (because it is not beneficial for a competitor to know your model of the world). But in other decision-making situations, for example about environmental resources, this approach will be useful to foster greater understanding about how the ‘experts’ think the world works.

By writing down their expectations for the forthcoming year the experts at The Economist are making explicit their understanding of the world. It’s not terribly important that they don’t get everything right – there’s very little possibility that will happen. What is important is that it helps us to think about potential alternative futures, what factors are likely to be most important in determining future events, how these factors and events are (inter)related, and what the current state of the world implies for the likelihood of different future states. This information might then be used to shape the future as we would like it to be, based on informed expectations. Quantitative models of human social and economic activity also offer this type of opportunity.

Walking Bristol

I found the video below on the walkit.com blog. If you’ve ever used Google Maps to get directions from one place to another, walkit.com can do the same for you if you want to walk in any of 12 British cities (more coming soon). It even tells you how many calories you will burn and how much CO2 you will save by not driving. You can choose between direct or less busy routes – the latter option will probably be useful for cyclists too as it accounts for traffic levels. Those options are nice, but what walkers (and cyclists) in hilly cities like Bristol and Sheffield could really do with is a ‘less steep’ option!

So, what’s this video I found? It’s the latest film from Urban Earth, a project to (re)present human habitats by walking across some of Earth’s biggest urban areas. One of the main aims of UrbanEarth is to show what the world’s cities are really like for the people who live there – something that mainstream media can give a distorted or incomplete image of. Each UrbanEarth film is composed of thousands of photographs, one for every 7 steps (5 metres) of a person’s walk through a city. It’s kind of like a walker’s version of Google’s Street View. They’ve done London, Mumbai, and now their latest film, which you can watch below, is my home town of Bristol, UK. If you know the area, see if you can track the route the walker takes (click the ‘fullscreen’ option if the window is too small).

http://blip.tv/play/go4o3vI6keUo

I have to admit that I had no idea where the route started, but then by the time they got to Totterdown I knew where I was – those colourful houses on the steep hills overlooking the centre of the city gave it away. From there on I knew where I was going. If you’re still lost (or don’t know the area), here’s a map of the route.

Or, you can see the route traced on a ‘map of deprivation’. This map illustrates one of the vagaries of the shape of Bristol’s urban growth. Note the (inverted) crescent shape of the city – the southwest quarter of the city is ‘missing’. This is attributed to the Avon Gorge, which cut off the growth of Bristol to the southwest. Not until Clifton Suspension Bridge was opened in 1864 could you quickly and easily access the southwest (rather than trekking down into Cumberland Basin and up the other side). You can also see the effect of the gorge from aerial photography – note how areas around Abbots Leigh and Leigh Woods appear much greener and less urban even though they are only about a mile from the centre of the city. In contrast, look at how the urban area expands relatively contiguously to the east for five or six miles into South Gloucestershire (Oldland, Warmley, etc.). These areas aren’t officially ‘Bristol’ but they’re contiguous and have the same postal and dialling codes.

Anyway, that’s enough about the geography of Bristol. If you’re going to be walking around the city in the future, walkit.com may help plan your route (remember it doesn’t account for hills – but luckily it does know where the gorge is), and check the UrbanEarth blog for new films of other cities coming soon (maybe Bath would like to be next).

Geographical Perspectives: Externalities, Inputs and Participation

One of the most enjoyable things about studying as a post-graduate in a UK Geography department was the diversity of conversation topics I could get myself into in the corridors, over lunch, and after work in the pub. Investigating social, economic, cultural, atmospheric, geomorphological, and ecological patterns and processes (to name just a few), geography departments contain scholars with interests and skills that span the globe’s physical and social environments. This variety of backgrounds and worldviews can lead to widely differing perspectives on the current affairs of any particular day.

In many ways my PhD studies, funded by an interdisciplinary research studentship from the ESRC and NERC, allowed (demanded?) me to search out these differing perspectives and engage in these conversations. However, this diversity of perspectives isn’t appealing to faculty members focused narrowly on their own particular research specialism and the current paper they are writing about it. Maybe they just don’t have time. Or maybe there’s something deeper.

The distinction between the social sciences (human geography) and natural sciences (physical geography) has led to somewhat of a divide between these two ‘sides’ of Geography. As my former tutor and advisor Prof. David Demeritt highlights in the latest volume of the Transactions of the Institute of British Geographers, ‘human’ and ‘physical’ geographers have become so estranged that dedicated forums to initiate ‘conversations across the divide‘ of Geography now occur regularly at annual conferences. Demeritt’s article discusses how ‘Environmental Geography’ is often touted as having the integrative research potential to bridge the human-physical divide.

Environmental Geography (EG) explicitly sets out to examine human-environment interactions and is generally understood to be the intersection of Human and Physical in the Geography Venn diagram. Essentially, EG is the Geographical version of the Coupled Human and Natural Systems (CHANS) research program that has become prominent recently, largely thanks to NSF funding. Whereas CHANS emphasises systemic concepts (thresholds, feedbacks, resilience etc.), EG emphasises concepts more at home in the geographical lexicon – scale, space and (seemingly most often absent from CHANS research) place. This is not to say that these concepts are exclusively used by either one or the other – whether you do ‘CHANS research’ or ‘Environmental Geography’ is also likely to be determined by where your research funding comes from, what department you work in, and the type of training you received in graduate school.

One of the main points Demeritt makes in his commentary is that this flat distinction between Human and Physical Geography is not as straightforward as it is often made out to be. Friedman’s world may be flat, but the Geography world isn’t. Demeritt attempts to illustrate this with a new diagrammatic 3D representation of the overlap between the many sub-disciplines of Geography (most of which are also academic disciplines in their own right):

[Figure: Demeritt’s (2008) three-dimensional interpretation of the relationship between sub-disciplines in Geography]
Thus, “Rather than thinking about geography just in terms of a horizontal divide between human and physical geography, we need to recognise the heterogeneity within those very broad divisions. …within those two broad divisions geography is stretched out along a vertical dimension. … Like the fabled double helix, these vertical strands twist round each other and the horizontal connections across the human-physical divide to open up new opportunities for productive engagement.” [p.5]

This potential doesn’t come without its challenges, however. Demeritt uses EG to demonstrate such challenges, highlighting how research in this field is often ‘framed’. ‘Framing’ here refers to the perspective researchers take about how their subject (in this case interactions between humans and the natural environment) will be (or should be) studied. Demeritt highlights three particular perspectives:

1. The Externality Perspective. This perspective might be best associated with the reductionist mode of scientific investigation, where a specific component of a human-environment system is considered in isolation from any other components. Research in this mode disregards work in other sub-disciplines, whether horizontally across the human-physical divide or vertically either side of it, and concentrates on understanding a specific phenomenon or process.

2. The Integrated Perspective. We might think of this perspective as being loosely systemic. Rather than simply ignoring the connections with other processes and phenomena considered in other sub-disciplines, they are used as some form of ‘input’ to the component under particular consideration. This is probably the mode that most closely resembles how much CHANS research, and most ‘interdisciplinary’ environmental research, is currently done.

3. The Participatory Perspective. This third approach has become more prominent recently, associated with calls for more democratic forms of science-based decision-making and as issues of expertise and risk have come to the fore in environmental debates. This mode demands that scientists and researchers become more engaged with publics, stakeholders and decision-makers and is closely related to the perspective of ‘critical’ geography and proponents of ‘post-normal’ science.

Demeritt discusses the benefits and challenges of these approaches in more detail, as I have briefly touched on previously. Rather than go over them again, here I want to think a bit more about the situations in which each of these modes of research might be most useful. In turn, this will help us to think about where engagement with other disciplines and sub-disciplines will be most fruitful.

One situation in which the externality perspective would be most useful is when the spatial/temporal scope of the process or phenomenon of interest makes engagement between (sub-)disciplines either useless or impossible. For example, reconciling economic or cultural processes with Quaternary research is likely to be extraordinarily difficult (but see Wainwright 2008). A second would be when investigation is interested more in ‘puzzle-solving’ than ‘problem-solving’. For example, with regard to research on Northern Hardwood Forests, the puzzler would ask questions like ‘what is the biological relationship between light availability and tree growth?’ whereas the problem-solver might ask ‘how should we manage our timber harvest to ensure sufficient light availability allows continued regeneration of younger trees in the forest understory?’.

The integrated approach has often been used in situations where one ‘more predictable’ system is influenced by another ‘less predictable’ system. One system might be more predictable than another because more data are available for it, because fewer assumptions are invoked to ‘close’ it for study, or simply because the systems are perceived to be more or less predictable. A prime example is the use of scenarios of global social and economic change to set the parameters of investigations of future climate change (although this example may actually have slowed problem-solving rather than sped it up).

The participatory perspective will be useful when system uncertainties are primarily ethical or epistemological. Important questions here are ‘what are the ethical consequences of my studying this phenomenon?’ and ‘are sufficient theoretical tools available to study this problem?’. Further, in contrast to the externality mode, this approach will be useful when investigation is interested in ‘problem-solving’ rather than ‘puzzle-solving’. For example, participatory research will be most useful when the research question is ‘how do we design a volcano monitoring system to efficiently and adequately alert local populations such that they can/will respond appropriately in the event of an eruption?’ rather than ‘what are the physical processes in the Earth’s interior that cause volcanoes to erupt when they do?’

Implicit in the choice of which question is asked in this final example is the framing of the issue at hand. Hopefully it is clear from my brief outline that there is a close relationship between research objectives and the framing or mode of the research. How these objectives and framings are arrived at is really at the root of Demeritt’s commentary. Given the choice, many researchers will take the easy option:

“Engaging with other perspectives and approaches is not just demanding, but also risky too. … Progress in science has always come precisely from exposing ourselves to the possibility of getting it wrong or that things might not work out quite as planned.” [p.9]

Thinking clearly about the situations in which different modes of study are most useful might help save both embarrassment and time. Further, it also seems sensible to suggest that the most thought should be given when researchers are considering engaging non-scientists in the participatory mode. If it is risky to expose oneself to fellow scientists, who understand the foibles of the research process and the difficulties of grappling with new ideas and data sets, it will be even more risky when the exposure is to non-scientists. Decision-makers, politicians, ‘lay persons’ and the general public at large are likely to be less acquainted with (but not ignorant of) how research proceeds (messily), how knowledge is generated (often a mixture of deductive proofs and inductive ideas), and the assumptions (and limitations) implicit in data collection and analysis. So when should academics feel most confident about parachuting in from the ivory tower?

First, it seems important for scientists to avoid telling people things they already ‘know’. Just because it hasn’t been written down in a scientific journal doesn’t mean it isn’t known (not that I want to get into discussion here about when something becomes ‘known’). We should try very hard to work out where help is needed to harness local knowledge, rather than ignoring it and assuming we know best (this of course harks back to the third wave). For example, while local farmers may know a lot about the history and consequences of land use/cover change in their local area, they may struggle to understand how land use/cover change will occur, or influence other processes, over larger spatial extents (e.g. landscape connectivity of species habitat or wildfire fuel loadings). In other situations, local knowledge may be entirely absent because a given phenomenon is outside the perception/observation of the local community. In this case, it will be very difficult (or impossible) for them to contribute to knowledge formation even though the phenomenon affects them. For example, the introduction of genetically modified crops will potentially have impacts on other nearby vegetation species due to hybridization, yet the processes at work are at a scale that is unobservable to lay persons (i.e. genetic recombination at the molecular level versus farmland biodiversity at the landscape level).

The important point in all this, however (as it occurs to me), seems to be that the ‘framing’ a researcher or scientist adopts will depend on their particular objectives. If those objectives are of the scientific puzzle-solving kind, and can be framed so that the solution can be found without leaving the comfy environment of a single sub-discipline, engagement will not happen (and neither should it). The risks it poses mean that engagement will happen only if funding bodies demand it (as they increasingly do) or if the research is really serious about solving a problem (as opposed to solving a puzzle or simply publishing scientific articles). As the human population grows within a finite environment the human-environment interface will only grow, likely demanding more and more engaged research. As I’ve highlighted before, a genuine science of sustainability is more likely to succeed if it adopts an engaged, participatory (post-normal) stance toward its subject.

Engaging researchers from other (sub-)disciplines or non-scientists will not always be the best option. But Geography and geographers are well placed to help develop theory and thinking to inform other scientists about how to frame environmental problems and establish exactly when engaging with experts (whether certified or not) from outside their field, or even from outside science itself, will be a fruitful endeavour. Geographers will only gain the authority on when and how interdisciplinary and participatory research should proceed once they’ve actually done some.

Demeritt, D. (2008) From externality to inputs and interference: framing environmental research in geography. Transactions of the Institute of British Geographers 34(1): 3-11. Published online: 11 Dec 2008. doi:10.1111/j.1475-5661.2008.00333.x

CHANS-Net

Towards the end of last week the MSU Environmental Science and Public Policy Program held a networking event on Coupled Human and Natural Systems (CHANS). These monthly events provide opportunities for networking around different environmental issues and last week was the turn of the area CSIS focuses on. The meeting reminded me of a couple of things I thought I would point out here.

First is the continued commitment that the National Science Foundation (NSF) is making to funding CHANS research. The third week in November will be the annual deadline for research proposals, so watch out for (particularly) tired-looking professors around that time of year.

Second, I realized I haven’t highlighted on this blog one of the NSF CHANS projects currently underway at CSIS. CHANS-Net aims to develop an international network of research on CHANS to facilitate communication and collaboration among members of the CHANS research community. Central to the project is the establishment of an online meeting place for research collaboration. An early version of the website is currently in place but improvements are being planned. I was asked for a few suggestions earlier this week and it made me realise how interested I am in the potential of the technologies that have arrived with web 2.0 (I suppose that interest is also clear right here in front of you on this blog). I hope to be able to continue to make suggestions and participate in the development of the site from afar (there’s too much to be doing elsewhere to get my hands really dirty on that project). Currently, only Principal Investigators (PIs) and Co-PIs on NSF-funded CHANS projects are members of the network, but hopefully opportunities for wider participation will be available in the future. In that event, I’ll post again here.

Anticipating Threats to Northern Hardwood Forest Biodiversity

Megan Matonis, one of the Masters students on the Michigan UP project, is headed to Washington D.C. for the National Council for Science and the Environment 9th National Conference on Science, Policy, and the Environment with a poster under her arm. Entitled Anticipating Threats to Northern Hardwood Forest Biodiversity with an Ecological-Economic Model the poster gives an overview of the modelling project and highlights some of the effects of deer browse and timber harvest on tree sapling and songbird diversity. Hopefully Megan will get some interesting questions and return with some new ideas about how we might use our model once it is up and running.

I haven’t posted on the blog for a little while. The main causes have been end of semester craziness and a trip to Montreal over Thanksgiving (maybe some pictures will appear on the photos page soon). More on CHANS research soon…

Modelling Pharmaceuticals in the Environment

On Friday I spoke at a workshop at MSU that examined a subject I’m not particularly well acquainted with. Participants in Pharmaceuticals in the Environment: Current Trends and Research Priorities convened to consider the natural, physical, social, and behavioral dimensions of the fate and impact of pharmaceutical products in the natural environment. The primary environmental focus of this issue is the presence of toxins in our water supply as a result of the disposal of human or veterinary medicines. I was particularly interested in what Dr. Shane Snyder had to say about water issues facing Las Vegas, Nevada.

So what did I have to do with all this? Well, the organisers wanted someone from our research group at the Center for Systems Integration and Sustainability to present some thoughts on how modelling of coupled human and natural systems might contribute to the study of this issue. The audience contained experts from a variety of disciplines (including toxicologists, chemists, sociologists and political scientists) and, given my limited knowledge of the subject matter, I decided I would keep my presentation rather broad in message and content. I drew on several of the topics I have discussed previously on this blog: the nature of coupled human-natural systems, reasons we might model, and potential risks we face when modelling CHANS.

In particular, I suggested that if prediction of a future system state is our goal, we will be best served by focusing our modelling efforts on the natural system and then using that model with scenarios of future human behaviour to examine the plausible range of states the natural system might take. Alternatively, if we view modelling as an exclusively heuristic tool, we might better envisage the modelling process as a means to facilitate communication between disparate groups of experts or publics and to explore what different conceptualisations of the system allow and prevent from happening with regard to our stewardship or management of it. Importantly, in both cases the act of making our implicitly held models of how the world works explicit, by laying down a formal model structure, is the primary value of modelling CHANS.
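
As a crude sketch of the first approach (purely illustrative – the decay rate, disposal rates and scenario names are all invented), one might drive a very simple model of pharmaceutical concentration in a water body with alternative scenarios of human disposal behaviour and compare the range of outcomes:

    # Illustrative only: a toy 'natural system' model (first-order decay of a
    # pharmaceutical in a water body) driven by scenarios of human disposal behaviour.
    DECAY_RATE = 0.05      # fraction of the compound degrading per timestep (invented)
    TIMESTEPS = 100

    scenarios = {          # disposal input per timestep under each scenario (invented)
        "business_as_usual": 1.0,
        "take_back_scheme": 0.4,
        "ban_on_flushing": 0.1,
    }

    for name, disposal in scenarios.items():
        concentration = 0.0
        for _ in range(TIMESTEPS):
            concentration += disposal          # human system: input via disposal
            concentration *= 1 - DECAY_RATE    # natural system: degradation
        print(f"{name}: end concentration = {concentration:.1f}")

Comparing the scenario outcomes gives a plausible range for the natural system without pretending we can predict which human behaviour will actually occur.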

There was brief talk towards the end of the meeting about setting up a workshop website that might even contain audio/video recordings of presentations and discussions that took place. If such a website appears I’ll link to it here. In the meantime, the next meeting I’ll be attending on campus is likely to be the overview of Coupled Human-Natural Systems discussion in the Networking for Environmental Researchers program.

Why Model?

When asked this question, Joshua Epstein would reply:

‘You are a modeler.’

In his recent article in JASSS he continues:

“Anyone who ventures a projection, or imagines how a social dynamic—an epidemic, war, or migration—would unfold is running some model.

But typically, it is an implicit model in which the assumptions are hidden, their internal consistency is untested, their logical consequences are unknown, and their relation to data is unknown. But, when you close your eyes and imagine an epidemic spreading, or any other social dynamic, you are running some model or other. It is just an implicit model that you haven’t written down.”(1.2-1.3)

Epstein goes on to imply that he thinks evaluating models by showing that their output matches empirical data isn’t a particularly useful test (as I have discussed previously). He emphasises that by making our implicit models explicit, we allow others to scrutinise the logic and coherence of that model and provide the opportunity for attempts to replicate it (and its results).

In our paper reviewing concepts and examples of succession-disturbance dynamics in forest ecosystems George Perry and I used the distinction between modelling for explanation and modelling for prediction to structure our discussion. Epstein takes a similar tack, but the majority of his article seems to imply that he is more interested in the former than the latter. He suggests 16 reasons to model other than to predict. These are to:

  1. Explain (very distinct from predict)
  2. Guide data collection
  3. Illuminate core dynamics
  4. Suggest dynamical analogies
  5. Discover new questions
  6. Promote a scientific habit of mind
  7. Bound (bracket) outcomes to plausible ranges
  8. Illuminate core uncertainties
  9. Offer crisis options in near-real time
  10. Demonstrate tradeoffs / suggest efficiencies
  11. Challenge the robustness of prevailing theory through perturbations
  12. Expose prevailing wisdom as incompatible with available data
  13. Train practitioners
  14. Discipline the policy dialogue
  15. Educate the general public
  16. Reveal the apparently simple (complex) to be complex (simple)

After briefly discussing a couple of these points Epstein notably highlights the dictum attributed to George Box: “All models are wrong, but some are useful” (something I hope the students in my class are really beginning to appreciate). This idea leads neatly into Epstein’s final and, for him, most important point:

“To me, however, the most important contribution of the modeling enterprise—as distinct from any particular model, or modeling technique—is that it enforces a scientific habit of mind, which I would characterize as one of militant ignorance—an iron commitment to ‘I don’t know.’ That is, all scientific knowledge is uncertain, contingent, subject to revision, and falsifiable in principle. … One does not base beliefs on authority, but ultimately on evidence. This, of course, is a very dangerous idea. It levels the playing field, and permits the lowliest peasant to challenge the most exalted ruler—obviously an intolerable risk.”(1.16)

So, Why Model? To predict or to explain? As usual that’s probably a false dichotomy. The real point is that there are plenty of reasons to model other than to predict.

eLectures

During the second half of the course I’m teaching at MSU this semester (FW852 Systems Modeling and Simulation) I’ve invited several colleagues to give guest lectures on the modelling work they do. These lectures serve as examples to the students of modeling and simulation in practice, and provide the opportunity to tap the brains of experts in different fields.

One of the speakers I invited was one of my former PhD advisors, Dr. George Perry. George is at the University of Auckland, New Zealand. Rather than pay for him to fly halfway around the world we thought we would save some CO2 (and money!) by doing the lecture via internet video conference. As you can see from the photo below we had a video feed from George up on a large screen (you can also see the video feed he had of our room down in the lower right of his screen) with his presentation projected onto a separate screen (at right).


George spoke about research he has done modelling habitat dynamics and fish population persistence in intermittent lowland streams in SE Australia [I’ll link here to his forthcoming paper on this work soon]. The emphasis was on the ecology of the system and how modeling combined with fieldwork can aid understanding and restoration of systems like this.

Everything went pretty well with only a couple of Max Headroom-type stutters (the stutters were purely technical – George’s presentation and material were much more coherent than the ’80s icon!). With the increasing availability of (free) technologies like this (I often use Skype to make video calls with my folks back home, and Google just released their new Voice and Video Chat) no doubt this sort of communication is here to stay. And it looks unlikely that eLectures will stop here. As highlighted this week, academic conferences and lectures in virtual environments like Second Life are beginning to catch on too.