Usefulness of Spatial Landscape Models

Turner et al.’s discussion of the usefulness of spatial models in land management is now a bit of a classic (written in 1995), but it had also been a while since I last read it. Re-reading it after returning from a trip to our study area, many of the paper’s points resonated with what the people I met with (many of them natural resource managers) were saying.

Turner et al. suggest that (p.13) “Models that integrate ecological and economic components so that the models can be used to explore both sets of consequences simultaneously are even more valuable [than ecological alone]”. This is the driving rationale for our research project. As it was succinctly put by one potential landowner in the study area, models of this kind will contribute to the development of plans that are based on an ecological approach but backed up with economic justification.

Given the hierarchical nature of landscape ecological processes and the influence of human activity on those processes, Turner et al. highlight (p.15) that “Land ownership has a large impact on management decisions, and a useful contribution of spatially explicit models is the ability to explore the effects of management by various owners within a mosaic of public and private lands.” With a range of land owners, including the state and private industrial companies, the UP study area is in exactly this position, and the model we are developing will be able to directly consider the impacts of different land owners’ management strategies on the wider landscape. Thus, one of the driving questions of the research is “how should timber be harvested across space and time in multiple land ownerships to ensure a sustainable landscape?”

One of the most striking things I was told on my trip was that the most useful thing our model could do for land managers would be to get people to sit down together to come up with a coherent, sustainable management plan. Again, the links with Turner et al. are clear (p.15): “Communication between land managers and ecologists remains an important challenge, and spatially explicit models have the potential to create a common working framework.”

However, not only is the communication and collaboration side of the research a challenge, but so too is the technical side of things. Turner et al. highlight the issue of data quality: the model will only be as good as the data used, and the accurate, up-to-date spatial databases required are expensive to produce. Furthermore, the quality of the data will determine the modeller’s ability to parameterize the model at a given spatial resolution and extent. I’m currently reviewing the data collected over the past few years by the research group at CSIS on the interactions between deer density, tree regeneration and bird habitat, as well as the data managed and made available by Michigan’s Department of Natural Resources. Producing an accurate representation of deer population dynamics and movement across the landscape is certainly going to be a challenge. Next, the relationships between deer browse pressure and vegetation regeneration need to be specified and parameterized. The estimates of deer population and location can then be combined with these relationships to dynamically represent the interactions across space.
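For concreteness, here is a minimal sketch (in Python) of how the deer browse-vegetation interaction might eventually be represented on a grid. The functional forms, parameter names and values below are assumptions made for illustration only, not the parameterization we will actually derive from the CSIS and MDNR data.

    # Illustrative only: a toy grid-based deer browse / regeneration interaction.
    import numpy as np

    def browse_pressure(deer_density, k_browse=0.1):
        """Convert deer density (deer per km2) to a 0-1 browse pressure index."""
        return 1.0 - np.exp(-k_browse * deer_density)

    def update_regeneration(regen, deer_density, base_rate=0.05):
        """Advance seedling/sapling density one annual time step under browsing."""
        # regeneration added each year is reduced in proportion to browse pressure
        return regen + base_rate * (1.0 - browse_pressure(deer_density))

    deer = np.random.uniform(0, 30, size=(10, 10))   # toy 10 x 10 landscape of deer densities
    regen = np.zeros((10, 10))                       # seedling/sapling density per cell
    for year in range(20):
        regen = update_regeneration(regen, deer)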

Once the model is up and running we will be able to examine spatial scenarios of forest management to assess both ecological and economic sustainability. For example, with regard to the appropriate location of mesic conifer regeneration: “…increasing the [mesic conifer] component is expected to increase the number of individuals of conifer-associated bird species. And over time reduce productivity of the summer deer range and expand areas potentially suitable for deer during winter, resulting in a smaller deer herd dispersed over a larger wintering area (Doepker et al, 2001) in turn resulting in less browsing pressure in WUP forests. The eventual size, configuration, contiguousness and/or juxtaposition of restored habitats to existing or historical mesic conifer habitats and winter deer-yards on non-MDNR lands (public and private) may affect the success of these outcomes” (DNR 2004). Right now this conifer regeneration is not going well and areas of maple forest are increasing.

Economically, the model should be able to show how different harvest rotations and management plans by private industrial land owners can make the most productive use of their land whilst ensuring both ecological and economic sustainability of the landscape. And not only for single landowners: the model should be useful for examining how actions on neighbouring land under differing ownership can work in concert. For example, if the private industrial goal is intensive harvest, maybe the primary objective of the state should be to ensure conifer cover. But the question then is: what are the spatial implications of this? Is there any point in conifer regeneration (which provides thermal cover for deer in the winter) if the distance between state and corporate land is large and deer cannot move from thermal cover to find food?
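As a hedged illustration of the kind of spatial check this question implies, the sketch below asks whether cells of conifer thermal cover (say, on state land) lie within a plausible winter movement distance of forage cells (say, on recently harvested corporate land). The raster layout, 100 m cell size and 2 km movement threshold are assumptions made up for the example.

    # Illustrative only: distance from thermal cover to nearest winter forage.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    CELL_SIZE_M = 100     # assumed raster resolution
    MAX_MOVE_M = 2000     # assumed winter movement limit between cover and forage

    cover = np.zeros((50, 50), dtype=bool)    # thermal cover cells (e.g. state conifer)
    forage = np.zeros((50, 50), dtype=bool)   # winter forage cells (e.g. corporate cutover)
    cover[5:15, 5:15] = True
    forage[30:40, 30:40] = True

    # distance (m) from every cell to the nearest forage cell
    dist_to_forage = distance_transform_edt(~forage) * CELL_SIZE_M

    # proportion of thermal cover from which forage is reachable
    reachable = dist_to_forage[cover] <= MAX_MOVE_M
    print(f"{reachable.mean():.0%} of thermal cover lies within {MAX_MOVE_M} m of forage")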

These are the sorts of questions and challenges to which spatial landscape models can be applied, and which we are aiming to tackle. Right now though, it’s time to concentrate on the technical development of the model and the representation of the spatio-temporal deer-vegetation interactions.

Reference
Turner, M.G., Arthaud, G.J., Engstrom, R.T., Hejl, S.J., Liu, J., Loeb, S. & McKelvey, K. (1995) Usefulness of Spatially Explicit Population Models in Land Management. Ecological Applications, 5(1): 12-16.

Summary – Validating and Interpreting Socio-Ecological Simulation Models

So, finally, the summary of my set of posts about the validation and interpretation of Socio-Ecological Simulation Models (SESMs) that arose out of some of the thinking I did during my PhD thesis.

The nature of open systems requires SESMs to specify and place boundaries on the system such that it may be analysed effectively. Recent debate in the geographical and environmental modelling communities has highlighted the importance of observer dependencies when identifying the appropriate model ‘closure’. Furthermore, because an ‘open’ system can be ‘closed’ for study in multiple ways whilst still adequately representing system behaviour, the problem of ‘affirming the consequent’ arises when attempting to validate models of these systems.

Because of these issues, I suggested that a more reflexive approach, emphasising trust via practical adequacy over the establishment of ‘true’ model structure via mimetic accuracy, will put SESMs in a better position to provide understanding for non-modellers and to contribute more readily to the decisions and debates surrounding the contemporary problems facing many real-world environmental systems.

This is not to say that issues regarding mimetic accuracy and model structure should be totally ignored – these model validation criteria will still have a role to play. However, emphasising trust via practical adequacy over truth via mimetic accuracy ensures the model validation question becomes ‘how good is this model for my purposes?’ and not ‘is this model true?’. Engagement with local stakeholders throughout the modelling process, contributing to model development and application, should ensure practical adequacy and, in parallel, build trust. As a result of this participatory model evaluation exercise, confidence in the model should grow, hopefully to the level where it can be deemed ‘validated’ (i.e. fit for purpose).

Stakeholder Participation and Expertise

The problems of equifinality and affirming the consequent suggest that alternative criteria by which to validate or evaluate socio-ecological simulation models (SESMs) would be useful. In my last post in this series I suggested that trust and practical adequacy might be useful additional criteria. In light of the ‘risk society’-type problems facing the systems that SESMs represent, and the proposed post-normal science approaches to examine and resolve them, the participation of local stakeholders in the model validation process seems an important and useful way to ensure and improve model quality. If local stakeholders are to accept decisions and policies based upon simulation model results, they will need to trust the model and, by consequence, the modeller(s).

Due to a perceived ‘crisis of trust’ in science over the last 20 years, Wilsdon and Willis suggest that “scientists have been slowly inching their way towards involving the public in their work” and that we are now on the cusp of a new phase of public engagement that takes it ‘upstream’. This widely used but somewhat vague term refers to the early involvement of the lay public in the processes of scientific investigation. As such, engagement is ‘upstream’, nearer the point at which the research and development agenda is set, as opposed to the ‘downstream’ end at which research results are applied and their consequences examined (see Figure 1).

Figure 1: Public participation in the scientific research process. Recently it has been suggested that public engagement with the scientific process needs to move ‘upstream’, nearer the point at which the research agenda is set. After Jackson et al.

Whereas the ‘public understanding of science’ was previously a deficit model, suggesting that the public would trust science ‘if only they understood it’, the contemporary shift is towards engagement and dialogue between science and society. The implication of this new turn is that the public will trust science ‘if only they are involved in the process itself’. Recently, Lane et al. advocated this move upstream for forms of environmental modelling that address the issues and concerns of rural populations. This position has been criticised as devaluing the worth of science, patronising the public, and acting as a mask for political face-saving or insurance.

Whatever the case in other areas of science, in developing simulation models of socio-ecological systems the participation of the public is not open to the first two of these criticisms. Engaging with local stakeholders both to ensure a model is built on a logically and factually coherent foundation and to ensure it examines the appropriate questions and scenarios is of great value to the modelling process and should improve the representation of the empirical system. By contributing to successful iterations of this process, local stakeholders will gain both trust and understanding. However, the inclusion of local stakeholders in the modelling process does raise the issue of expertise.

With parallels to the three phases Wilsdon and Willis have suggested, Collins and Evans propose that we are entering a third wave in the sociology of science. This third wave demands a shift in emphasis from technical decision-making and truth to expertise and experience. Collins and Evans suggest there are three types of expertise relevant to technical decision-making (i.e. decision-making at the intersection of science and politics): ‘no expertise’, ‘interactional expertise’, and ‘contributory expertise’.

Individuals possessing interactional expertise are able to interact ‘interestingly’ with those undertaking the science, but not to contribute to the activities of science itself (contributory expertise). Brian Wynne’s well-known study of the (inadequate) interaction between Cumbrian sheep farmers and UK government scientists investigating the ecological impacts of the Chernobyl disaster is a prime example of a situation in which both parties possessed contributory expertise but neither possessed interactional expertise. As a result, the ‘certified’ expertise of the government scientists was given vastly more weight than the ‘non-certified’ expertise of the farmers (to the detriment of the accuracy of the knowledge produced). Such non-certified expertise might also be termed ‘experience-based’ expertise, arising as it does from the day-to-day experiences of particular individuals.

The importance of considering non-certified, contributory expertise is particularly acute for SESMs. Specifically, local stakeholders are likely to be an important, if not the primary, source of knowledge and understanding regarding socio-economic processes and decision-making within the study area. Furthermore, the particular nature of the interactions between human activity and ecological (and other biophysical) processes within the study area will be best understood and incorporated into the simulation model via engagement with stakeholders. This local knowledge will be vital to ensuring the logical and factual foundations of the model are as sound as possible.

Furthermore, engagement with local stakeholders will highlight model omissions and areas for improved representation, and will guide application of the model. It also provides an opportunity to enlighten experts as to the ‘blind spots’ in their knowledge and questions. As such, local stakeholders become an ‘extended peer community’, lending alternative forms of knowledge and expertise to the model (and research) validation process beyond those of the scientific peer community. This knowledge and expertise may be less technical and objective than that of the scientific community, but that does not necessarily reduce its relevance or utility for modelling a system that contains human values and subjects.

I pursued this idea of stakeholder participation in the modelling I undertook for my PhD. Early in the development of my agent-based model of land-use decision-making, local stakeholders were interviewed about how they made decisions and their understanding of landscape dynamics. On completing model construction I went back to talk with stakeholders about the model, as they offered the prime source of criticism of its representation of their decision-making activities. By engaging with these stakeholders, a form of qualitative, reflexive model validation was performed that overcame some of the problems of a more deductive approach.

BSG – Modelling Human Impacts on Geomorphic Processes

This week sees the Annual Conference of the British Society for Geomorphology (BSG – formerly the British Geomorphological Research Group, BGRG). Running from Wednesday 4th to Friday 6th, the conference is being held at the University of Birmingham in the UK. With the theme Geomorphology: A 2020 Vision, recent developments and advances in the field, such as models and modelling approaches, will be explored and debated, and the potential to exploit emerging approaches to solve key challenges throughout pure and applied Geomorphology will be discussed.

With these recent and future advances in mind, one of my PhD advisors, Prof. John Wainwright, will present a paper entitled Modelling Human Impacts on Geomorphic Processes which contains work originating from my thesis. He’ll be presenting it in the first session of Wednesday afternoon, Process Modelling: Cross-Cutting Session. I’m sure it will turn out to be an interesting session, and one that continues the recent thirst for inter- and cross-disciplinary research. Here’s the abstract:

Modelling Human Impacts on Geomorphic Processes
John Wainwright and James Millington

Despite the recognition that human impacts play a strong – if not now predominant – rôle in vegetation and landscape evolution, there has been little work to date to integrate these effects into geomorphic models. This inertia has been the result partly of philosophical considerations and partly due to practical issues.

We consider different ways of integrating human behaviour into numerical models and their limitations, drawing on existing work in artificial intelligence. Practical computing issues have typically meant that most work has been very simplistic. The difficulty of estimating time-varying human impacts has commonly led to the use of relatively basic scenario-based models, particularly over the longer term. Scenario-based approaches suffer from two major problems. They are typically static, so that there is no feedback between the impact and its consequences, even though the latter might often lead to major behavioural modifications. Secondly, there is an element of circularity in the arguments used to generate scenarios for understanding past landform change, in that changes are known to have happened, so that scenarios big enough to produce them are often generated without considering the range of possible alternatives.

In this paper we take examples from two systems operating in different contexts and timescales, but employing a similar overall approach. First, we consider human occupations in prehistoric Europe, in particular in relation to the transition from hunter-gatherer to simple agricultural strategies. The consequences of this transition for patterns of soil erosion are investigated. Secondly, an example from modern Spain will be used to evaluate the effects of farmers’ decision-making processes on land use and vegetation cover, with subsequent impacts on fire régime. From these agent-based models and from other examples in the literature, conclusions will be drawn as to future progress in developing these models, especially in relation to model definition, parameterization and testing.

Call for Papers: Environmental Micro-simulation

This call for papers for a special issue of Ecological Complexity addresses some of the issues I’ve been discussing recently, and hopes to present examples of multi-model approaches to assessing environmental simulation models. If I’d seen this earlier I might have tried to put something together. As it is, I’ll just have to keep my eye open for the issue when it comes out sometime next year.

Call for Papers

Ecological Complexity is pleased to announce a special issue on: Environmental micro-simulation: From data approximation to theory assessment

Spatial micro-simulation has recently become a mainstream element in environmental studies. Essentially, different models, representing the same phenomena, are being extensively published and the “next step” sought is hypothesis testing, regarding the factors that determine system dynamics. However, the problem arises that assessment of environmental theories using spatial micro-simulation lacks a leading paradigm. While the Occam’s razor of positivism, which works perfectly in physics and chemistry, demands datasets covering the entire space of model parameters, the experimental abilities of environmentalists are limited and the data collected in the field represent only a small part of the always multi-dimensional parameter space. Consequently, any given model can be considered as merely approximating the few data sets available for verification and its theoretical validity is thus brought into question.

To overcome this limitation, we propose to generate a virtual world that will allow hypothesis testing based on environmental theory. That is, we propose to implement micro-simulation models using high-resolution GIS database and use them as a surrogate for reality, instead of the limited empirical database. GIS enables a realistically looking virtual world to be generated that, unlike the real one, provides the parameters characteristic of every trajectory. The almost unlimited data that can be generated from such a virtual world can then be used to assess our ability to extract rules and dependencies, estimate parameters and, finally, make applicable forecasts.

This special issue will focus on investigating models as representations of environmental theory with the help of a combination of real data and artificial worlds. We invite innovative research papers that employ different high-resolution models for generating virtual worlds, comparing them to each other, with the aim being to develop a better understanding of environmental theory. Examples can be studies of a model’s robustness, a comparative study of dynamic models, investigation of the limitations of data fitting methods and of a model’s sensitivity to changes in spatial and temporal resolution.

Scope
All sorts of micro-simulation, including cellular automata, agent-based systems, fuzzy systems, ANN and genetic algorithms, are welcome. The environmental systems of interest include, but are not limited to:

  • Complex ecosystems
  • Landscape ecology
  • Terrain analysis and landscape evolution
  • Agriculture and pastoralism
  • Human-environment interaction
  • Land-use and land-cover changes
  • Urban dynamics

Submission instructions
Abstracts of 2 pages in length should be submitted to the Guest Editors by July 14, 2007. The review process of those abstracts considered to be the most relevant will continue and authors will be required to upload the full manuscript to the Ecological Complexity website by November 1, 2007.

Guest Editors
Tal Svoray
Ben-Gurion University of the Negev,
tsvoray@bgu.ac.il

Itzhak Benenson
Tel Aviv University,
bennya@post.tau.ac.il

Alternative Model Assessment Criteria

Given the discussion in the previous posts regarding the nature of socio-ecological systems, equifinality and relativism in environmental modelling, how should we go about assessing the worth and performance of our simulation models of human-environment systems?

Simulation models are tangible manifestations of a modeller’s ‘mental model’ of the structure of the system being examined. Socio-Ecological Simulation Models (SESMs) may be thought of as logical and factual arguments made by a modeller, based on their mental model. If the model assumptions hold, these arguments should provide a cogent and persuasive indication of how system states may change under different scenarios of environmental, economic and social conditions. However, the resulting simulation model, even if based upon a logically and factually coherent mental model, is unlikely to be validated on these two criteria (logic and fact) alone.

First, the problems of equifinality suggest that there are multiple logical model structures that could be implemented for any particular system. Second, accurate mimetic reproduction of an empirical system state by a model may, in many eyes, be the most persuasive form of factual proof of a model, but the dangers of affirming the consequent make it impossible to prove that temporal predictions from models of open systems are truly accurate. Simulation models may be based on facts about empirical systems, but their results cannot be taken as facts about the modelled empirical system.

Thus, criteria other than the logical and factual will be useful for evaluating or validating a SESM. Third and fourth criteria, at least for environmental simulation models that consider the interaction of social and ecological systems, become available by specifically considering the user(s) of a model and its output. These two criteria are closely linked.

My third proposed criterion is the establishment of user trust in the model. Trust is used here in the sense of ‘confidence in the model’. If a person using a model or its results does not trust the model, it will likely not be deemed fit for its intended purpose. If confidence is lacking in the model or its results, confidence will consequently be lacking in any knowledge derived, decision made, or policy recommended on the basis of the model. Thus, the use of trust as a criterion for validation is a form of ‘social validation’, ensuring that user(s) agree the model is a legitimate representation of the system.

The fourth criterion by which a model might achieve legitimacy and receive a favourable evaluation (i.e. be validated) is the provision of some form of utility to the user. This utility will be termed ‘practical adequacy’. If a model is not trusted then it will not be practically adequate for its purpose. However, regardless of trust, if the model is not able to address the problems or questions set by the user then it is equally practically inadequate.

The addition of these two criteria, centred on the model user rather than the model itself, suggests a shift away from falsification and deduction as model validation techniques, towards more reflexive approaches. The shift in emphasis is away from establishing the truth and mimetic accuracy of a model and towards ensuring trust and practical adequacy. By considering trust and practical adequacy, validation becomes an exercise in model evaluation and reclaims its more appropriate meaning of ‘establishing a model’s legitimacy’.

From his observation of experimental physicists and his work on the ‘experimenter’s regress’, Collins has arrived at the view that there is no distinction between the epistemological criteria and the social forces that resolve a scientific dispute. The position outlined previously seems to imply a similar situation for models of open, middle-numbered systems, where modellers are required to resort to social criteria to justify their models due to the inability to do so convincingly on epistemological grounds. This is not necessarily an idea that many natural scientists will sit comfortably with. However, the shift away from truth and mimetic accuracy should not necessarily be something modellers object to.

First, all modellers know that their models are not true, exact replications of reality. A model is an approximation of reality – there is no need to create a model system if experimentation on the existing empirical system is possible. Furthermore, accepting that the results of a model are not ‘true’ (i.e. in the sense of being perfect predictions of the future) in no way requires the model be built on incorrect logic or facts. As Hesse notes in criticism of Collins, whilst the resolution of scientific disputes might result from a social decision that is not forced by the facts, “it does not follow that social decision has nothing to do with objective fact”.

Second, regardless of truth and mimetic accuracy, modellers have several options to build trust and ensure practical adequacy scientifically. Ensuring models are logically coherent and not factually invalid (i.e. criteria one and two) will already have gone some way to making a scientific case. Furthermore, the traditions of scientific methodological and theoretical simplicity and elegance can be observed, and the important unifying potential across theories and between disciplines that modelling offers can be emphasised. Thus, regardless of the failures of epistemological methods for justifying them, socio-ecological and other environmental simulation models must still be built upon solid logical and factual foundations:

“The postmodern world may be a nightmare for … normal science (Kuhn 1962), but science still deserves to be privileged, because it is still the best game in town. … [Scientists] need to continue to be meticulous and quantitative. But more than this, we need scientific models that can inform policy and action at the larger scales that matter. Simple questions with one right answer cannot deliver on that front. The myth of science approaching singular truth is no longer tenable, if science is to be useful in the coming age.”
(Allen et al. p.484)

Post-normal science highlights the importance of finding alternative ways for science to engage with both the problems faced in the contemporary world and the people living in that world. As they have been defined here, SESMs will inherently address questions that will be of concern to more than just scientists, including problems of the ‘risk society’. From a modelling perspective, a post-normal science approach highlights the need to build trust in the eyes of non-scientists such that understanding is fostered.

Further, it emphasises the need for SESMs to be practically adequate such that good decisions can be made promptly. It also implies that the manner in which a ‘normal’ scientist goes about assessing the trustworthiness or practical adequacy of a model (such as the methods described above) will differ markedly from that of a non-scientist. For example, scientific model users will often, but not always, also be the person who developed and constructed the model. In such a case the model will have been constructed to be practically adequate for their particular scientific problems and questions.

When the model is to be used by other parties, ensuring practical adequacy will not be so straightforward, particularly when the user is a non-scientist. In such situations, the modeller needs to ask the question ‘practically adequate for what?’. The inhabitants of the study areas investigated will have a vested interest in the processes being examined and will themselves have questions that could be addressed by the model. In all probability many of these questions will be ones that the modeller has not considered or, if they have, may not have considered relevant. Further, the questions asked by local stakeholders may be non-scientific – or at least may be questions that environmental scientists are not used to attempting to answer.

The use of, and improvements in, technical approaches (such as spatial error matrices from pixel-by-pixel model assessment) will remain useful and necessary in the future. Here, however, I have emphasised how alternative methods for model validation (assessment) might usefully draw on the additional information and knowledge available from the actors driving change in a socio-ecological system. In other words, there is information within the system of study that is not used for model assessment when we simply compare observed and predicted system states. This information is present in the form of local stakeholders’ knowledge and experience.
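For readers unfamiliar with the technique mentioned above, a spatial error matrix is simply a pixel-by-pixel cross-tabulation of observed against predicted classes, from which summary measures such as overall accuracy can be derived. A minimal sketch follows; the class codes and map sizes are made up for illustration.

    # Illustrative only: a pixel-by-pixel error (confusion) matrix for two maps.
    import numpy as np

    def error_matrix(observed, predicted, n_classes):
        """Rows = observed class, columns = predicted class."""
        m = np.zeros((n_classes, n_classes), dtype=int)
        for obs, pred in zip(observed.ravel(), predicted.ravel()):
            m[obs, pred] += 1
        return m

    observed = np.random.randint(0, 3, size=(100, 100))    # e.g. three cover classes
    predicted = np.random.randint(0, 3, size=(100, 100))
    m = error_matrix(observed, predicted, n_classes=3)
    overall_accuracy = np.trace(m) / m.sum()                # proportion of pixels agreeing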

Relativism in Environmental Simulation Modelling

Partly as a result of the epistemological problems described in my previous few posts, Keith Beven has put forward a modelling philosophy that accepts uncertainty and a more relativist perspective. This approach demands greater emphasis on pluralism, the use of multiple hypotheses, and probabilistic approaches when formulating and parameterising models. When pressed to comment further on his meaning of relativism, Beven highlights the problems of rigidly objective measures of model performance and of ‘observer dependence’ throughout the modelling process:

“Claims of objectivity will often prove to be an illusion under detailed analysis and for general applications of environmental models to real problems and places. Environmental modelling is, therefore, necessarily relativist.”

Beven suggests the sources of relativistic operator dependencies include the following (a sketch after the list illustrates how points 2 and 4 might look in practice):

  1. Operator dependence in setting up one or more conceptual model(s) of the system, including subjective choices about system structure and how it is closed for modelling purposes; the processes and boundary conditions it is necessary to include and the ways in which they are represented.
  2. Operator dependence in the choice of feasible values or prior distributions (where possible) for ‘free’ parameters in the process representations, noting that these should be ‘effective’ values that allow for any implicit scale, nonlinearity and heterogeneity effects.
  3. Operator dependence in the characterization of the input data used to drive the model predictions and the uncertainties of the input data in relation to the available measurements and associated scale and heterogeneity effects.
  4. Operator dependence in deciding how a model should be evaluated, including how predicted variables relate to measurements, characterization of measurement error, the choice of one or more performance measures, and the choice of an evaluation period.
  5. Operator dependence in the choice of scenarios for predictions into the future.
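To make points 2 and 4 more concrete, here is a schematic Monte Carlo sketch in the spirit of Beven’s Generalised Likelihood Uncertainty Estimation (GLUE) methodology: the analyst chooses the prior parameter ranges, the performance measure (here Nash-Sutcliffe efficiency) and the ‘behavioural’ threshold. The toy model, ranges, data and threshold are all placeholders I have made up; the point is simply that each of them is an operator’s choice.

    # Illustrative only: operator choices in a GLUE-style uncertainty analysis.
    import numpy as np

    rng = np.random.default_rng(42)

    def toy_model(params, forcing):
        """Placeholder standing in for any environmental simulation model."""
        a, b = params
        return a * forcing + b

    forcing = np.array([1.0, 2.0, 3.0, 4.0])
    observed = np.array([2.1, 4.3, 5.9, 8.2])

    behavioural = []
    for _ in range(10000):
        params = rng.uniform([0.0, -1.0], [4.0, 1.0])    # dependency 2: chosen priors
        predicted = toy_model(params, forcing)
        nse = 1 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)
        if nse > 0.7:                                     # dependency 4: chosen measure and threshold
            behavioural.append(params)

    print(len(behavioural), "behavioural parameter sets retained")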

Such operator dependencies have been highlighted in the past, but have re-emerged in the thoughts of geographers (Demeritt, Brown, O’Sullivan, Lane et al.), environmental scientists (Oxley and Lemon), social scientists (Agar) and philosophers of science (Collins, Winsberg).

Notably, although with reference to experimental physics rather than environmental simulation modelling, Collins identified the problem of the ‘experimenter’s regress’: a successful experiment occurs when the experimental apparatus is functioning properly, but in novel experiments the proper functioning of the apparatus can only be confirmed by the success or failure of the experiment. So in situations at the boundaries of established knowledge and theory, not only are hypotheses contested, but so too are the standards and methods by which those hypotheses are confirmed or refuted. As a result, Collins suggests experimentation becomes a ‘skilful practice’ and that experimenters accept results based not on epistemological or methodological grounds, but on a variety of social (e.g. group consensus) and expert (e.g. perceived utility) factors.

This stance is echoed in many respects by Winsberg’s ‘epistemology of simulation’, which suggests simulation is a ‘motley’ practice with numerous ingredients, of which theoretical knowledge is only one. The approximations, idealisations and transformations used by simulation models to confront analytically intractable problems (often in the face of sparse data) need to be justified internally (within the model construction process) on the basis of existing theory, available data, empirical generalisations, and the modeller’s experience of the system and of other attempts to model it.

Similarly, Brown suggests that in the natural sciences uncertainty is rarely viewed as arising from the interaction of social and physical worlds (though Beven’s environmental modelling philosophy, outlined above, does take this view), and that modellers of physical environmental processes might learn from the social sciences, where the process of gaining knowledge is understood to be important for assessing uncertainty.

However, whilst an extreme rationalist perspective prevents validation and useful analysis of the utility of a model, its output, and the resulting knowledge (because of problems such as affirming the consequent), so too does an extreme relativist stance in which model and model builder are understood to be inseparable. Rather, as Kleindorfer et al. suggest, modellers need to develop the means to increase the credibility of the model such that “meaningful dialogue on a model’s warrantability” can be conducted. How and why this might be achieved will be discussed in future posts.

Daniel Botkin’s Renegade Blog

Daniel Botkin, eminent ecologist and author of Discordant Harmonies, has recently started a blog called Reflections of a renegade naturalist. Two recent posts caught my eye.

The days of Smokey Bear, an enduring American icon of wildland management and its efforts to communicate with the public, are apparently numbered. Whilst his message about taking precautions against starting wildfires remains necessary, the underlying ethos of forest (and environmental) management has changed. Once, ecologists’ theoretical foundation was the ‘balance of nature’ and the presence of equilibrium and stability within ecosystems. But over the past three decades this perception has shifted dramatically and now ‘change is natural’ would be a more apt motto. Ecosystems are dynamic. Disturbance, such as wildfire, is now seen as an inherent and necessary component of many landscapes, helping to ensure ecosystem health. This shift in thinking is evident on the Smokey website, with sections discussing the use of prescribed fire, fire’s role in ecosystem function, and the potential pitfalls of excluding fire entirely. George Perry has written an excellent review of these shifts in ecological understanding.


So what about Smokey Bear? His message about taking precautions in wilderness areas still remains, of course. But with this new ecological ethos in mind, Botkin was asked for suggestions for a new management mascot. He came up with Morph the Moose. I haven’t seen anything about Morph previously, and a quick Google search currently only throws up 7 hits, so we’ll have to watch out for Morph wandering around with his new message soon.

The second post that caught my eye relates to the evaluation of the forest growth model JABOWA that Botkin developed. JABOWA is an individual-based model that considers the establishment, growth and senescence of individual trees. In 1991 JABOWA was used to forecast how potential global warming would influence the Kirtland’s warbler, an endangered species that nests only in Michigan. Botkin and his colleagues forecast that by 2015 the Jack pine habitat of the warbler would decline significantly, with detrimental consequences for the warbler. On his blog he suggests that matching this prediction with contemporary observations will be an ideal test of the JABOWA model’s predictions. Given my previous discussion of ‘affirming the consequent’ (i.e. deeming a model a true representation of reality if its predictions match observed reality, and false if it does not), it’s good to see Botkin does not suggest that an accurate prediction would indicate the validity of the model itself. We’re advised to stay tuned for the results. Given the subject matter and quality of the articles on the new renegade blog, I certainly will.
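As a (very) rough illustration of what ‘individual-based’ means here, the toy loop below mimics the establish / grow / die structure of a gap model. It uses none of JABOWA’s actual equations or parameters, which are far more detailed (growth modified by light, temperature and site conditions, for example); all rates here are made up.

    # Illustrative only: a toy establish / grow / die loop for one forest plot.
    import random

    class Tree:
        def __init__(self, diameter_cm=1.0):
            self.diameter_cm = diameter_cm

    def simulate_plot(years=100, establish_prob=0.3, mortality_prob=0.01, growth_cm=0.5):
        trees = []
        for _ in range(years):
            if random.random() < establish_prob:         # establishment of a new sapling
                trees.append(Tree())
            for tree in trees:                            # growth (constant here; JABOWA's is not)
                tree.diameter_cm += growth_cm
            trees = [t for t in trees if random.random() > mortality_prob]   # senescence / mortality
        return trees

    plot = simulate_plot()
    print(len(plot), "trees; largest diameter",
          max((t.diameter_cm for t in plot), default=0.0), "cm")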

Affirming the Consequent

A third epistemological problem in knowing whether a given (simulation) model structure is appropriate, after Equifinality and Interactive Kinds, concerns the comparison of model results with real-world empirical data. Comparison of a model’s predictions with empirical events has frequently been used in an attempt to show that the model structure is an accurate representation of the system being modelled (i.e. to demonstrate it is ‘true’). Such an idea arises from the hypothetico-deductive scientific method of isolating a system and then devising experiments to logically prove a hypothesis via deduction. As I’ve discussed, such an approach may be useful in closed, laboratory-type situations and systems, but less so in open systems.

The issue here is that predictions about real-world environmental systems are temporal predictions about events occurring at explicit points in time or geographical space, not logical predictions that are independent of space and time and that allow the generation of science’s ‘universal laws’. These temporal predictions have often been treated with the same respect given to the logical predictions of the hypothetico-deductive method. However, as Naomi Oreskes points out, it is unclear whether the comparison of a temporal prediction produced by a simulation model with empirical events is a test of the input data, the model structure, or the established facts upon which that structure is based. Furthermore, if the model is refuted (i.e. its temporal predictions are found to be incorrect), given the complexity of many environmental simulation models it would be hard to pinpoint which part of the model was at fault.

In the case of spatial models, the achievement of partially spatially accurate prediction does little to establish where or why the model went wrong. And even if the model is able to predict observed events, this is still no guarantee that it will be able to predict into the future, given that it cannot be guaranteed that the stationarity assumption will be maintained. This assumption is that the processes being modelled are constant through time and space within the scope of the model. Regardless, Oreskes et al. (1994) have argued that temporal prediction is not possible for numerical simulation models of open, middle-numbered systems because of theoretical, empirical, and parametric uncertainties within the model structure. As a consequence, Oreskes et al. (1994) warn that numerical simulation modellers must beware of committing the fallacy of ‘affirming the consequent’ by deeming a model invalid (i.e. false) if it does not reproduce the observed real-world data, or valid (i.e. true) if it does.

Interactive vs. Indifferent Kinds

Models that consider human activity are particularly difficult to ‘close’ because they represent ‘interactive’ kinds. Ian Hacking highlights the distinction between the classification of ‘interactive’ and ‘indifferent’ kinds. Different kinds of people are ‘interactive kinds’ because people are aware of, and can respond to, how they are being classified. Hacking contrasts the interactive kinds often studied in the social sciences with the indifferent kinds of the natural sciences. Indifferent kinds – such as trees, rocks, or fish – are not aware that they are being classified by an observer. This indifference to classification means their behaviour does not change because of it [but see my point at the end of this post].

The representation of interactive kinds potentially results in a ‘looping effect’ that has implications for model closure and validation: socio-ecological simulation models have the potential to feed back into, and therefore transform, the systems they represent via the conscious awareness of local stakeholders using the model or its results (or participating in the modelling process). If this transformation occurs, it is likely that the model will be a less accurate representation of the empirical system than it was previously. Such a situation implies that a simulation model of a socio-ecological system may never truly represent that system (if it is used by those it represents). Therefore, in cases where a model is to be used by those being represented (for decision-making, for example), I’d suggest that an iterative modelling process would be most appropriate to ensure continued utility.

[If anyone has any thoughts on how Hacking’s kinds relate to the whole Schrödinger’s Cat problem I’m all ears – interactive or indifferent?]