Creating a Genuine Science of Sustainability

Previously, I wrote about Orrin Pilkey and Linda Pilkey-Jarvis’ book, Useless Arithmetic: Why Environmental Scientists Can’t Predict the Future. In a recent issue of the journal Futures, Jerome Ravetz reviews their book alongside David Waltner-Toews’ The Chickens Fight Back: Pandemic Panics and Deadly Diseases That Jump From Animals to Humans. Ravetz himself points out that the subject matter and approaches of the books are rather different, but suggests that “Read together, they provide insights about what needs to be done for the creation of a genuine science of sustainability”.

Ravetz (along with Silvio Funtowicz) has developed the idea of ‘post-normal’ science – a new approach to replace the reductionist, analytic worldview of ‘normal’ science. Post-normal science is a “systemic, synthetic and humanistic” approach, useful in cases where “facts are uncertain, values in dispute, stakes high and decisions urgent”. I drew on some of these ideas to experiment with alternative model assessment criteria for the socio-ecological simulation model I developed during my PhD studies. Ravetz’s perspective on modelling, and on science in general, shines through quite clearly in his review:

“On the philosophical side, the corruption of computer models can be understood as the consequence of a false metaphysics. Following on from the prophetic teachings of Galileo and Descartes, we have been taught to believe that Science is the sole and certain path to truth. And this Science is mathematical, using quantitative data and abstract reasonings. Such a science is not merely necessary for achieving genuine knowledge (an arguable position) but is also sufficient. We are all victims of the fantasy that once we have numerical data and mathematical argument (or computer programs), truth will inevitably follow. The evil consequences of this philosophy are quite familiar in neo-classical economics where partly true banalities about markets are dressed up in the language of the differential calculus to produce justifications for every sort of expropriation of the weak and vulnerable. ‘What you can’t count, doesn’t count’ sums it all up neatly. In the present case, the rule of models extends over nearly all the policy-relevant sciences, including those ostensibly devoted to the protection of the health of people and the environment.

We badly need an effective critical philosophy of mathematical science. … Now science has replaced religion as the foundation of our established order, and in it mathematical science reigns supreme. Systematic philosophical criticism is hard to find. (The late Imre Lakatos did pioneering work in the criticism of the dogmatism of ‘modern’ abstract mathematics but did not focus on the obscurities at the foundations of mathematical thinking.) Up to now, mathematical freethinking is mainly confined to the craftsmen, with their jokes of the ‘Murphy’s Law’ sort, best expressed in the acronym GIGO (Garbage In, Garbage Out). And where criticism is absent, corruption of all sorts, both deliberate and unaware, is bound to follow. Pseudo-mathematical reasonings about the unthinkable helped to bring us to the brink of nuclear annihilation a half-century ago. The GIGO sciences of computer models may well distract us now from a sane approach to coping with the many environmental problems we now face. The Pilkeys have done us a great service in providing cogent examples of the situation, and indicating some practical ways forward.”

Thus, Ravetz finds a little more value in the Useless Arithmetic book than I did. But equally, he highlights that the Pilkeys offer few, rather vague, solutions and instead turns to Waltner-Toews’ book for inspiration for the future:

“Pilkey’s analysis of the corruptions of misconceived reductionist science shows us the depth of the problem. Waltner-Toews’ narrative about ourselves in our natural context (not always benign!) indicates the way to a solution.”

Using the outbreak of avian flu as an example of how to tackle complex environmental problems in the ‘risk society’ in which we now live, Waltner-Toews:

“… makes it very plain that we will never ‘conquer’ disease. Considering just a single sort of disease, the ‘zoonoses’ (deriving from animals), he becomes a raconteur of bio-social-cultural medicine …

What everyone learned, or should have learned, from the avian flu episode is that disease is a very complex entity. Judging from TV adverts for antiseptics, we still believe that the natural state of things is to be germ-free, and all we need to do is to find the germs and kill them. In certain limiting cases, this is a useful approximation to the truth, as in the case of infections of hospitals. But even there complexity intrudes … “

Complexity which demands an alternative perspective that moves beyond the next stage of ‘normal’ science to a post-normal science (to play on Kuhn’s vocabulary of paradigm shifts):

“That old simple ‘kill the germ’ theory may now be derided by medical authorities as something for the uneducated public and their media. But the practice of environmental medicine has not caught up with these new insights.

The complexity of zoonoses reflects the character of our interaction with all those myriads of other species. … the creatures putting us at risk are not always large enough to be fenced off and kept at a safe distance. … We can do all sorts of things to control our interactions with them, but one thing is impossible: to stamp them out, or even to kill the bad ones and keep the good ones.

Waltner-Toews is quite clear about the message, and about the sort of science that will be required, not merely for coexisting with zoonoses but also for sustainable living in general. Playing the philological game, he reminds us that the ancient Indo-European world for earth, dgghem, gave us, along with ‘humus’, all of ‘human’, ‘humane’ and ‘humble’. As he says, community by community, there is a new global vision emerging whose beauty and complexity and mystery we can now explore thanks to all our scientific tools.”

This global vision is a post-normal vision. It applies to far more than just avian flu – from coastal erosion and the disposal of toxic or radioactive waste (as the Pilkeys discuss, for example) to climate change. This post-normal vision focuses on uncertainty, value loading, and a plurality of legitimate perspectives that demands an “extended peer community” to evaluate the knowledge generated and decisions proposed.

“In all fairness, it would not be easy to devise a conventional science-based curriculum in which Waltner-Toews’ insights could be effectively conveyed. For his vision of zoonoses is one of complexity, intimacy and contingency. To grasp it, one needs to have imagination, breadth of vision and humility, not qualities fostered in standard academic training. …”

This post-normal science won’t be easy, and it won’t be learned or fostered entirely within the esoteric confines of an ivory tower. Science, with its logical rigour, is important. It is still the best game in town. But the knowledge produced by ‘normal’ science is provisional, and its march toward truth seems Sisyphean when confronted with the immediacy of complex contemporary environmental problems. To contribute to the production of a sustainable future, a genuine science of sustainability would do well to adopt a more post-normal stance toward its subject.

Model Types for Ecological Modelling

Sven Erik Jørgensen introduces a recent issue of Ecological Modelling that presents selected papers from the International Conference on Ecological Modelling in Yamaguchi, Japan (28 August – 1 September 2006). The paper provides an overview of the model types available for ecological modelling, briefly highlighting the shift from a dominance of bio-geo-chemical dynamic models and population dynamics models in the 1970s toward the application of a wider spectrum of models. The emergence of new model types has come as a response to questions such as:

  • How can we describe the spatial distribution that is often crucial to understanding ecosystem reactions?
  • How do we model middle number systems?
  • How do we model heterogeneous populations and databases (e.g. observations from many different ecosystems)?
  • How do we model ecosystems, when our knowledge is mainly based on a number of rules/properties/propositions?

Jørgensen suggests there are at least 11 types of model currently available for modelling ecological systems (purely mathematical and statistical models aside):

  1. Bio-geo-chemical and bio-energetic dynamic models
  2. Static models
  3. Population dynamic models
  4. Structurally dynamic models
  5. Fuzzy models
  6. Artificial neural networks
  7. Individual-based models and cellular automata
  8. Spatial models
  9. Ecotoxicological models
  10. Stochastic models
  11. Hybrid models

Of these, my particular interest is in spatial models, individual-based models and cellular automata models (with a passing interest in population models). This is largely because of my background in geography and landscape ecology, but also because of the heterogeneity in patterns, processes and behaviour often exhibited in socio-ecological systems.

Jørgensen offers a short description of each type, before listing their advantages and disadvantages. Here are a couple with my comments in italics:

Individual-Based Models (IBMs) and Cellular Automata (CA)
First, counter to Jørgensen, I would argue that CA models should be placed with the ‘spatial models’ – for me, the ability of CA to represent space outweighs their potential to represent (limited) heterogeneity between cells. This aside, the grouping does make sense when we consider that these models can be relatively easily combined to represent individuals’ interactions across space and with a heterogeneous environment (via the CA) – see the sketch after the lists below.

Advantages

  • Are able to account for individuality – agreed, especially for IBMs
  • Are able to account for adaptation within the spectrum of properties – yes
  • Software is available; although the choice is more limited than for bio-geo-chemical dynamic models – but excellent free modelling environments such as NetLogo make this type of modelling widely available
  • Spatial distribution can be covered – yes

Disadvantages

  • If many properties are considered, the models get very complex – and may require the adoption and development of new techniques to present/analyse/interpret output (e.g. POM, narratives)
  • Can be used to cover the individuality of populations; but they cannot cover mass and energy transfer based on the conservation principle – I see no reason why the principle of energy and mass conservation could not be respected by models of these types
  • Require many data to calibrate and validate the models – yes, this is often the case, and in some cases (again) may require new approaches and types of data to calibrate and evaluate models
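
To make the combination mentioned above a little more concrete, here is a minimal, purely illustrative sketch of an individual-based model coupled to a cellular automaton: the CA grid carries a heterogeneous, regrowing resource while individuals move across it and deplete it. This is not Jørgensen’s formulation or any of our own models – all names and parameter values are invented for illustration.

```python
# Minimal, hypothetical IBM + CA coupling: a CA grid of resource values is
# updated by a local regrowth rule; individuals move on the grid and consume
# the resource in their current cell.
import numpy as np

rng = np.random.default_rng(42)

GRID_SIZE = 20          # cells per side of the CA lattice
REGROWTH_RATE = 0.05    # fraction of missing resource restored per step
N_INDIVIDUALS = 10
N_STEPS = 100

# CA component: each cell carries a resource level between 0 and 1
resource = rng.random((GRID_SIZE, GRID_SIZE))

# IBM component: each individual has a grid position and an energy store
positions = rng.integers(0, GRID_SIZE, size=(N_INDIVIDUALS, 2))
energy = np.ones(N_INDIVIDUALS)

for step in range(N_STEPS):
    # CA update rule: local resource regrows toward its maximum of 1
    resource += REGROWTH_RATE * (1.0 - resource)

    for i in range(N_INDIVIDUALS):
        # individuals take a random step on the lattice (toroidal boundaries)
        positions[i] = (positions[i] + rng.integers(-1, 2, size=2)) % GRID_SIZE
        r, c = positions[i]
        # ...and consume half of the resource in their current cell
        eaten = 0.5 * resource[r, c]
        resource[r, c] -= eaten
        energy[i] += eaten - 0.1   # fixed metabolic cost per step

print(f"Mean resource after {N_STEPS} steps: {resource.mean():.2f}")
print(f"Mean individual energy: {energy.mean():.2f}")
```

Even a toy like this shows why the approach appeals for heterogeneous socio-ecological systems: individual behaviour and spatial pattern emerge from the same loop.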

Spatial Models
Advantages

  • Cover spatial distribution, that is often of importance in ecology – yes, particularly Landscape Ecology, an entire discipline that has arisen since the 1970s and ’80s
  • The results can be presented in many informative ways, for instance GIS – GIS is a means to organise and analyse data as well as present data

Disadvantages

  • Require usually a huge database, giving information about the spatial distribution – this can certainly give rise to the issue of ‘model but no data’ and increases the costs of performing ecological research by adding space to time. We have found that our large (~4,000 sq km) Upper Michigan study area demands considerable time and resources for data collection.
  • Calibration and validation are difficult and time-consuming – maybe more so than non-spatial models, but probably not as much as some individual-based models
  • A very complex model is usually needed to give a proper description of the spatial patterns – not necessarily. A model should be only as complex as the patterns and processes it seeks to examine, and the inclusion of space does not imply patterns or processes any more complex than those of a non-spatial system with fewer variables or interactions.

This isn’t a bad review of the types of ecological modelling being done. However, more incisive and useful insight could have been offered with respect to landscape ecology and those models that are now beginning to attempt to account for human activity in ecological systems. [And it definitely could have been better written.] Maybe I’ll stop criticising sometime and write one myself eh?

Aldo Leopold Legacy Center – The ‘Greenest Building in the US’

One of the fieldtrips we took during the US-IALE conference in Madison was to Aldo Leopold’s shack and the Aldo Leopold Legacy Center. Aldo Leopold is considered by many to be the ‘father’ of wildlife management. His most significant and lasting mark is his book, A Sand County Almanac. I’ll look at the book in a later post, but here I’ll talk briefly about what we saw on our excursion from Madison.

After graduating from the Yale Forest School in 1909, Aldo Leopold spent time working in Arizona and New Mexico before moving to Madison, Wisconsin, in 1924. In 1933 he published the first wildlife management textbook and accepted a new chair in game management at the University of Wisconsin – a first for both the university and the nation.

In 1935, Leopold and his family initiated their own ecological restoration experiment on a washed-out sand farm of 120 acres along the Wisconsin River near Baraboo, Wisconsin. Planting thousands of pine trees, restoring prairies, and documenting the ensuing changes in the flora and fauna informed and inspired Leopold. Many of his writings in the initial parts of A Sand County Almanac – the history of the local region as told through the rings of an oak tree, evening shows of sky-dancing woodcock, fishing the Alder Fork, hunting ruffed grouse in smoky gold tamarack – were penned in ‘the shack’ (above) on his farm, which we stopped at on a wet, grey day after visiting The Aldo Leopold Legacy Center (below).

In sharp contrast to ‘the shack’, the Legacy Center feels solid and dry. But consistent with the Land Ethic message of the writing that was done in the old dilapidated building, the new building ‘sustains the health, wildness, and productivity of the land, locally and globally’. The Legacy Center has received Platinum Leadership in Energy and Environmental Design (LEED) Certification from the U.S. Green Building Council and is currently the ‘greenest building in the U.S.’.

The Legacy Center is an example of how we can use energy more efficiently and construct buildings with a limited impact on our environment. Through energy efficiency and renewable energy, the Legacy Center is the first carbon-neutral building certified by LEED — annual operations account for no net gain in carbon dioxide emissions.

The Legacy Center is also a net zero energy building, using 70 percent less energy than a building built just to code and meeting all of its energy needs on site using tools like a roof-mounted solar array and a ‘thermal flux zone’ to reduce heat flow between interior rooms and the outdoors. Many of the structural columns, beams, and trusses, as well as interior panelling and finish work, are from the pine trees Leopold planted himself on his farm between 1935 and 1948.

This really is a building that embodies Leopold’s Land Ethic – both conceptually through the principles used when designing the building, and physically by using material from Leopold’s own ecological restoration experiment. The Legacy Center contains the offices of the Leopold Foundation, has a small shop and ‘museum’ about Leopold, and can be hired for meetings. The building itself is really the attraction – and hopefully there will be more like it appearing elsewhere. Unless you’re passing by or really want to make a pilgrimage to gain an insight into the area where Leopold’s vision unfolded, there’s no need to go out of your way to visit. Take a virtual tour instead to save energy and carbon and make the building even greener.

Michigan UP Seedling Experiment

I’ve been back from our study area in Michigan’s Upper Peninsula for over a week so it’s about time I posted something about what we were doing up there.

One of the main issues we will study with our integrated ecological-economic landscape model is the impact of whitetail deer (Odocoileus virginianus) herbivory on tree regeneration following cutting. Last November we spent a week planting two-year-old seedlings in Northern Hardwood forest gaps created by selective timber harvest (like the one in the photo below).

Our plan was to return this spring to examine the impacts of deer browse on these seedlings. In particular, we wanted to examine how herbivory varies across space due to changes in deer population densities (in turn driven by factors such as snow depth).

To this end we selected almost 40 forest sites that would hopefully capture some spatial variation in snowfall and that had recently been selectively harvested. At each site we selected 10 gaps produced by timber harvest in which to plant our seedlings.

In each gap we planted six trees of each of three species: White Spruce (Picea glauca), White Pine (Pinus strobus) and Eastern Hemlock (Tsuga canadensis). We chose these coniferous species because they are examples of the mesic conifer species the Michigan DNR are trying to restore across our study area, and because we expected a range of herbivory across these species.

At each site we would also undertake deer pellet counts in the spring to estimate the number of deer in the vicinity of the site during the winter (during which time the browse we were measuring would have occurred).
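
For those unfamiliar with the pellet-group count method, the conversion from counts to a density estimate is straightforward in principle: divide the density of pellet groups found by the product of the deposition period and an assumed defecation rate. The sketch below illustrates the arithmetic only – the defecation rate, plot areas and numbers are hypothetical and are not the values we use in our own analysis.

```python
# Hedged sketch of the standard pellet-group count calculation; all values
# here are illustrative assumptions, not our field protocol.
def deer_density_per_km2(pellet_groups, plot_area_m2, deposition_days,
                         defecation_rate=13.0):
    """Estimate deer per square kilometre from pellet-group counts.

    pellet_groups    -- total pellet groups counted on the plots
    plot_area_m2     -- total area searched, in square metres
    deposition_days  -- length of the accumulation period (e.g. days since
                        leaf-off or first lasting snowfall)
    defecation_rate  -- pellet groups produced per deer per day (an often
                        assumed figure is ~13, but it varies with diet)
    """
    groups_per_m2 = pellet_groups / plot_area_m2
    deer_days_per_m2 = groups_per_m2 / defecation_rate
    deer_per_m2 = deer_days_per_m2 / deposition_days
    return deer_per_m2 * 1_000_000  # convert to deer per km^2

# e.g. 45 pellet groups over 10 plots of 100 m^2, accumulated over 150 days
print(deer_density_per_km2(45, 10 * 100, 150))  # ~23 deer per km^2
```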

On returning to the study sites a couple of weeks ago we set about looking for the trees we had planted to measure herbivory and count deer pellets. In some cases, finding the trees we planted was easier said than done. We tried to get our field crews to plant the trees in straight lines with equal spacing between each tree. In general, this was done well but on occasion the line could only be described as crooked at best. Micro-topography, fallen tree trunks and limbs, and slash from previous cutting all contributed to hamper the planned planting system. However, we did pretty well and found well over 90% of the trees.

We haven’t begun analyzing our data as yet, but some anecdotal observations stand out. First, the deer preferentially browsed Hemlock over the other species, often removing virtually all non-woody biomass as shown by the ‘before and after’ examples below (NB – these photographs are not of the same tree and this is not a true before/after comparison).

In some cases, the deer not only removed all non-woody biomass but also pulled the tree out of the ground (as shown below).

In contrast, White Pine was browsed to a much lesser extent and White Spruce was virtually untouched (as shown below).

Having a species that was unaffected by deer (i.e. spruce) often made our job of finding the other trees much easier. Finding heavily browsed Hemlock that no longer had any green vegetation was often tricky against a background of forest floor litter.

The next step now is to start looking at this variation in browse through a more quantitative lens. Then we can start examining how browse and deer densities vary across space and how these variables are related to one another and other factors (such as snow depth and distance to conifer stands).
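
As a rough indication of what that quantitative lens might look like, the sketch below relates site-level browse proportions to estimated deer density and snow depth with a simple linear model. The column names, numbers and model form are purely illustrative assumptions, not our actual data or analysis.

```python
# Hypothetical exploratory analysis: proportion of seedlings browsed at each
# site as a function of estimated deer density and winter snow depth.
import pandas as pd
import statsmodels.formula.api as smf

# invented site-level table standing in for the assembled field data
sites = pd.DataFrame({
    "browse_prop":  [0.80, 0.55, 0.10, 0.35, 0.90, 0.25],
    "deer_per_km2": [18.0, 12.0, 3.0, 8.0, 22.0, 5.0],
    "snow_cm":      [35.0, 60.0, 110.0, 80.0, 30.0, 95.0],
})

# simple linear model; a binomial GLM on the per-seedling records would be
# the more natural next step once the full dataset is in hand
model = smf.ols("browse_prop ~ deer_per_km2 + snow_cm", data=sites).fit()
print(model.summary())
```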

All-in-all the two weeks of work went pretty well. There were some issues with water-logged roads (due to snow melt) meaning we couldn’t get to one or two of the sites we planted at, but generally the weather was pretty good (it only rained heavily one day). I’ll write more once we have done more analysis and stop here with a shot I took at sunrise as I left for home.

US-IALE 2008 – Summary


A brief and belated summary of the 23rd annual US-IALE symposium in Madison, Wisconsin.

The theme of the meeting was the understanding of patterns, causes, and consequences of spatial heterogeneity for ecosystem function. The three keynote lectures were given by Gary Lovett, Kimberly With and John Foley. I found John Foley’s lecture the most interesting and enjoyable of the three – he’s a great speaker and spoke on a broader topic than the others: Agriculture, Land Use and the Changing Biosphere. Real wide-ranging, global sustainability stuff. He highlighted the difficulties of studying agricultural landscapes because of the human cultural and institutional factors, but also stressed the importance of tackling these tricky issues because ‘agriculture is the largest disturbance the biosphere has ever seen’ and because of its large contribution to greenhouse gas emissions.

Presentations I was particularly interested in were mainly in the ‘Landscape Patterns and Ecosystem Processes: The Role of Human Societies’, ‘Challenges in Modeling Forest Landscapes under Climate Change’ and ‘Cross-boundary Challenges to the Creation of Multifunctional Agricultural Landscapes’ sessions.

In the ‘human societies’ session, Richard Aspinall discussed the importance of considering human decision-making at a range of scales and Dan Brown again highlighted the importance of human agency in spatial landscape process models. In particular, with regard to modelling these systems using agent-based approaches, he discussed the difficulty of model calibration at the agent level and stressed that work is still needed on the justification and evaluation phases of agent-based modelling.

The ‘modeling forest landscapes’ session was focused largely around use of the LANDIS and HARVEST models that were developed in and around Wisconsin. In fact, I don’t think I saw any mention of the USFS FVS at the meeting whilst I was there, largely because (I think) FVS has large data demands and is not inherently spatial. LANDIS and HARVEST work at coarser levels of forest representation (grid cell compared to FVS’ individual tree), allowing them to be spatially explicit and to run over large extents of time and space. We’re confident we’ll be able to use FVS in a spatially explicit manner for our study area though, capitalising on the ability of FVS to directly simulate specific timber harvest and economic scenarios.

The ‘multifunctional agricultural landscapes’ session had an interesting talk by Joan Nassauer on stakeholder science and the challenges it presents. Specific issues she highlighted were:
1. the need for a precise, operational definition of ‘stakeholder’
2. ambiguous goals for the use of stakeholders
3. the lack of a canon of replicable methods
4. ambivalence toward the quantification of stakeholder results

Other interesting presentations were given by Richard Hobbs and Carys Swanwick. Richard spoke about the difficulties of ‘integrated research’ and the importance of science and policy in natural resource management. He suggested that policy-makers ‘don’t get’ systems thinking or modelling, and that some of this may be down to the psychological profiles of the types of people that go into policy making. Such a conclusion suggests scientists need to work harder to bridge the gap to policy makers and do a better job of explaining the emergent properties of the complex systems they study. Carys Swanwick talked about landscape character assessment, which was interesting for me having moved from the UK to the US about a year ago. Whilst ‘wilderness’ is an almost alien concept in the UK (and Europe as a whole), landscape character is something that is distinctly absent in new world agricultural landscapes. Carys talked about the use of landscape character as a tool for conservation and management (in Europe) and the European Landscape Convention. It was a refreshing change from many of the other presentations about agricultural landscapes (possibly just because I enjoyed seeing a few pictures of Blighty!).

Unfortunately the weather during the conference was wet, which meant that I didn’t get out to see as much of Madison as I would have liked. Despite the rain we did go on the Biking Fieldtrip. And yes, we did get soaked. It was also pretty miserable weather for the other fieldtrip to the International Crane Foundation center and the Aldo Leopold Foundation (more on that in a future blog), but interesting nevertheless.

Other highlights of the conference for me were meeting the former members of CSIS and eating dinner one night with Monica Turner. I also got to meet up with Don McKenzie and some of the other ‘fire guys’, and a couple of people from the Great Basin Landscape Ecology lab where I visited previously. And now I’m already looking forward to the meeting next year in Snowbird, Utah (where I enjoyed the snow this winter).

Tackling Amazonian Rainforest Deforestation

This week’s edition of Nature devotes an editorial, a special report and an interview to the subject of tropical rainforests and their deforestation. The articles highlight both the proximate causes and underlying driving forces of tropical deforestation, and the importance of human activity as an agent of change (via fire for example), in these socio-ecological systems.

The editorial considers the economics of rainforest destruction with regard to global carbon emissions. It suggests that deforestation must be integrated into international carbon markets, to reward those countries that have been able to control the removal of forest land (such as India and Costa Rica). Appropriate accounting of tropical rainforest carbon budgets is required, however, and the authors point to the importance of carbon budget modelling and the monitoring (via satellite imagery, for example) of change in rainforest areas over large spatial extents. Putting an economic price on ‘ecosystem services’ is key to this issue, and the editorial concludes:

One of the oddly positive effects of global warming is that it has given the world the opportunity to build a more comprehensive and inclusive economic model by forcing all of us to grapple with our impact on the natural environment. We are entering a phase in which new ideas can be developed, tested, refined and rejected as necessary. If we find just one that can beat the conventional economic measure of gross domestic product, and can quantify some of the basic services provided by rainforests and other natural ecosystems, it will more than pay for itself.


The special report focuses on the efforts of the Brazilian government to curb the rate of deforestation in their Amazonian forests. The Brazilian police force is blockading roads, conducting aerial surveys and inspecting agricultural and logging operations to monitor human activities on the ground. Brazilian scientists, meanwhile, are monitoring the situation from space, and have developed methodologies and techniques that are leading the way globally in the remote monitoring of forests. The Brazilian government is a keen advocate of the sort of economic approaches to the issue of rainforest destruction highlighted in the editorial outlined above, and sees this rigorous monitoring as key to being able to show how much carbon they can save by preventing deforestation.

Halting the removal of forest cannot simply be left to carbon trading alone, however, and local initiatives need to be pursued. To ensure the forest’s existence is sustainable, local communities need to be able to make money for themselves without chopping down the trees – if they can do this it will be in their interests NOT to remove forest. But developing this incentive has not been straightforward. For example, some researchers have suggested that as commodity prices for crops such as soya beans have increased (possibly due to increased demand for corn-based ethanol in the US), deforestation has increased as a result. Although the price of soya beans may be a contributing factor to rainforest removal, Ruth DeFries (who will be visiting CSIS and MSU next week as part of the Rachel Carson Distinguished Lecture Series) suggests that it is not the main driver. Morton et al. found that during the period 2001-04, conversion of forest to agriculture peaked in 2003. This situation makes it clear that there are both proximate causes and underlying driving forces of tropical deforestation. The Nature special report suggests:

If the international community is serious about tackling deforestation, it will probably need to use a hybrid approach: helping national governments such as Brazil to fund traditional policies for enforcement and monitoring and enabling communities to experiment with a market-based approach.


But how long do policy-makers have to discuss this and get these measures in place? One set of research suggests 55% of the Amazon rainforest could be removed over the next two decades, and the complexity of the rainforest system means there may be a ‘tipping point’ (i.e., an abrupt transition) beyond which the system might not recover (i.e., reforestation would not be possible). The Nature interview with Carlos Nobre highlights this issue – the interactions of climate change with soil moisture and the potential for fire indicate that there is a risk of rapid ‘savannization’ in the eastern to southeastern Amazon as the regional climate changes. When asked what the next big question scientists need to address in the Amazon is, Nobre replies that the role of human-caused fire will be key:

Fire is such a radical transformation in a tropical forest ecosystem that biodiversity loss is accelerated tremendously — by orders of magnitude. If you just do selective logging and let the area recover naturally, perhaps in 20–30 years only a botanist will be able to tell that a forest has been logged. If you have a sequence of vegetation fires going through that area, forget it. It won’t recover any more.


As I’ve previously discussed, considering the feedbacks and interactions between systems is important when examining landscape vulnerabilities to fire. Along with colleagues I have examined the potential effects of changing human activity on wildfire regimes in Spain (recently we had this paper published in Ecosystems and you can see more wildfire work here). However, the integrated study of socio-economic and ecological systems is still very much in its infancy. And the processes of landscape change in the northern Mediterranean Basin and the Amazonian rainforest are very different; practically inverse (increases in forest in the former and decreases in the latter). As always, plenty more work needs to be done on these subjects, and with the potential presence of ‘tipping points’, now is an important time to be doing it.

IALE-IUFRO WG Website


A while back the ‘new’ IALE-IUFRO Working Group website launched, so I thought I’d highlight it here. During the IALE World Congress 2007 in Wageningen, a new IALE-IUFRO working group was approved and sanctioned by both IALE (International Association of Landscape Ecology) and IUFRO (International Union of Forestry Research Organizations):

Forestry was the first major field to recognize the importance of landscape ecology, and today foresters widely know, use, and even develop landscape ecology principles based on experience and science. Landscape ecology is an exciting field for researchers and managers together. In this sense, landscape ecology is viewed as the nexus of ecology, resource management, and land use planning. It is within this framework of synergy and integration that we envisaged this formal link between the two groups.

Thus, the IALE-IUFRO WG aims to bring together landscape ecologists with an interest in forest science and ecology, including studies and methods for monitoring, planning, designing, and managing forest ecosystems and landscapes. Through the website, members of the IALE-IUFRO WG will be able to exchange experiences and share common needs and interests to build on the strength of the network. This group can serve as an international platform for advocating and updating research and management on forest landscapes.

Tom Veldkamp: Advances in Land Models

As I mentioned before, the Global Land Project website is experimenting with the use of webcasts to enable the wider network to “participate” and use the GLP webpage as a resource. For example, several presentations are available for viewing from the Third Land System Science (LaSyS) Workshop entitled ‘Handling complex series of natural and socio-economic processes’ and held in Denmark in October of 2007. One that caught my attention was by Tom Veldkamp, mainly because of its succinct title: Advances in Land Models [webcast works best in IE].


Presented in the context of other CHANS research, Veldkamp used an example from the south of Spain to discuss recent modelling approaches to examine the effects of human decisions on environmental processes and the feedbacks between human and natural systems. The Spanish example examined the interaction of human land-use decision making and soil erosion. A multi-scale erosion model, LAPSUS, represented the interacting natural and human processes occurring in olive groves on steep hillslopes: gullying caused by extreme rainfall events, and attempts to preserve soils and remove gullies by ploughing. Monte Carlo simulations were used to explore uncertainties in model results and highlighted the importance of path dependencies. As such, it is another example of the historical dimension of ‘open’ systems and the difficulties it presents for environmental modellers.

The LAPSUS model was coupled with the well-known land use/cover change model CLUE to examine feedbacks between human land use and erosion. The coupled model was used to examine the potential implications of farmers adapting their land use practices in response to erosion. Interestingly, the model suggested that the human adaptation strategy modelled would not lead to reduced erosion.
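
To illustrate the feedback structure being described (and emphatically not LAPSUS or CLUE themselves), here is a toy sketch in which erosion depends on land use and on occasional extreme rainfall, while the land-use decision responds to accumulated soil loss. The rates, threshold and land-use labels are hypothetical; the point is simply that the timing of extreme events can push the coupled system down different paths.

```python
# Toy coupled human-natural feedback loop: erosion responds to land use,
# land use responds to erosion. All parameter values are invented.
import random

random.seed(1)

soil_depth = 1.0          # relative soil depth remaining
land_use = "olives"       # current land use on the hillslope

for year in range(50):
    # natural system: erosion rate depends on current land use and on
    # whether an extreme rainfall event occurs this year
    base_rate = 0.02 if land_use == "olives" else 0.005
    if random.random() < 0.1:          # occasional extreme rainfall
        base_rate *= 5
    soil_depth -= base_rate * soil_depth

    # human system: if the soil is visibly degrading, the farmer adapts
    # (e.g. switches to a soil-conserving practice)
    if land_use == "olives" and soil_depth < 0.5:
        land_use = "conserving"

print(f"After 50 years: land use = {land_use}, soil depth = {soil_depth:.2f}")
```

Re-running the loop with different random seeds gives different switch dates and end states, which is the path dependence mentioned above in miniature.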

Veldkamp also discusses the issue of validating simulation models of self-organising processes, and suggests that ensemble and scenario approaches such as those used in global climate modelling are necessary for this class of models. However, rather than simply using ‘static’ scenarios that specify model boundary conditions, such as the IPCC SRES scenarios, scenarios that represent some form of feedback with the model itself will be more useful. Again, this comes back to his point about the importance of representing feedbacks in coupled human and natural systems.

For example, Veldkamp suggests the use of “Fuzzy Cognitive Maps” to generate ‘dynamic’ scenarios. Essentially, these fuzzy cognitive maps are produced by asking local stakeholders in the systems under study to quantify the effects of the different factors driving change. First, the appropriate components of the system are identified. Next, the feedbacks between these components are identified. Finally, the stakeholders are asked to estimate how strong these feedbacks are (on a scale of zero to one). This results in a semi-quantitative systems model that can be run for several iterations to examine the consequences of the feedbacks within the system (see the sketch after the pros and cons below). This method is still in development and Veldkamp highlighted several pros and cons:

Pros:

  • it is relatively easy and quick to do
  • it forces the stakeholders to be explicit
  • the emphasis is placed on the feedbacks within the system

Cons:

  • it is a semi-quantitative approach
  • feedbacks often involve incomparable units of measurement
  • time is ill defined
  • stakeholders are often more concerned with the exact values they put on an interaction rather than the relative importance of the feedbacks
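
To show how such a fuzzy cognitive map can actually be iterated (the sketch referred to above), here is a minimal example: stakeholder-elicited link strengths form a weight matrix, component activations form a state vector, and the map is run forward by repeatedly applying the weights and a squashing function. The components, links and weights below are invented purely for illustration, and the use of signed weights and a sigmoid is one common convention rather than necessarily the one Veldkamp uses.

```python
# Minimal fuzzy cognitive map iteration with invented components and weights.
import numpy as np

components = ["erosion", "olive area", "soil conservation", "farm income"]

# W[i, j] = strength of the influence of component j on component i,
# elicited on a 0-1 scale (a negative sign marks a dampening effect)
W = np.array([
    [ 0.0, 0.7, -0.8, 0.0],   # erosion       <- olive area, conservation
    [ 0.0, 0.0,  0.0, 0.6],   # olive area    <- farm income
    [ 0.5, 0.0,  0.0, 0.0],   # conservation  <- erosion
    [-0.4, 0.8,  0.0, 0.0],   # farm income   <- erosion, olive area
])

def squash(x):
    """Keep component activations on a 0-1 scale."""
    return 1.0 / (1.0 + np.exp(-x))

state = np.array([0.5, 0.5, 0.1, 0.5])   # initial activation of each component

for step in range(20):
    state = squash(W @ state + state)     # iterate the map, retaining memory

for name, value in zip(components, state):
    print(f"{name:>17s}: {value:.2f}")
```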

I agree with Veldkamp that this ‘fuzzy cognitive mapping’ is a promising approach to scenario development and incorporation into simulation modelling. Indeed, during my PhD research I explored the use of an agent-based model of land use decision-making to provide scenarios of land use/cover change for a model of forest succession-disturbance dynamics (which I am currently writing up for publication). ‘Dynamic’ model scenario approaches show real promise for representing feedbacks in coupled human and natural systems. As Veldkamp concludes, these feedbacks, along with the non-linearities in system behaviour they produce, need to be explicitly represented and explored to improve our understanding of the interactions between humans and their environment.

Global Land Project

The Global Land Project is a proposed joint research project on land systems of the International Geosphere-Biosphere Programme (IGBP) and the International Human Dimensions Programme (IHDP). It plans to build upon previous work and the research network developed during the Global Change and Terrestrial Ecosystems (GCTE) and Land Use/Cover Change (LUCC) projects. The GLP website states:

The Global Land Project Science Plan represents the research framework for the coming decade for land systems. This development of a research strategy is designed to better integrate the understanding of the coupled human-environment system. These integrated science perspectives reflect the recognition of the fundamental nature of how human activities on land are affecting feedbacks to the earth system and the response of the human-environment system to global change.

The GLP will evidently be an important component of CHANS research in the coming years. Of the three research ‘Nodal Offices’ around the world, one is located in Aberdeen, Scotland and will be essentially run by the folks at the Macaulay Institute. They have several workshops coming up in 2008, the titles of which seem to suggest discussion of the sort of work that I often insist on espousing on this blog. In late February 2008, Workshop 1 will examine ‘The design of integrative models of natural and social systems in land change science’, and later in the year Workshop 2 will discuss ‘Data and model integration for coupled models of land use change’. As I write, it looks like those interested in such matters can still apply to attend. Future workshops will examine:

  • Integration of the economic and spatial modelling of land use change
  • Representation of land systems in the modelling of ecosystem services
  • Economic, social and environmental valuation of land use systems

Also on the GLP website are a series of webcasts from previous workshops for all those who missed out on attending (like me). There are some pretty interesting presentations on there, and in a couple of days I think I’ll post about the recent Advances in Land Models as presented by Tom Veldkamp.

To Catch a Panda


One of the main research foci of CSIS is the interaction of policies, people, and Giant Panda habitat in China. Recently, Vanessa Hull, one of the CSIS PhD students, set off for the mountains of Sichuan province with the aim of catching and radio-collaring four Giant Pandas. Once they are collared, she’ll be tracking the movements of the pandas so that we might learn more about this endangered animal’s habitat preferences.

The MSU Newsroom have picked up on her current fieldwork and have set up a website detailing her work. Tracking and capturing individuals from this elusive species is easier said than done though. So that we can keep track of how successful (or otherwise) she is, Vanessa is posting excerpts from her journal online. A contemporary account of conservation research in the field, it promises to be interesting…