Stakeholder Participation and Expertise

The problems of equifinality and affirming the consequent suggest that alternative criteria for validating or evaluating socio-ecological simulation models (SESMs) will be useful. In my last post in this series I suggested that trust and practical adequacy might be useful additional criteria. In light of the ‘risk society’-type problems facing the systems that SESMs represent, and the proposed post-normal science approaches to examine and resolve them, the participation of local stakeholders in the model validation process seems an important and useful approach to ensure and improve model quality. If local stakeholders are to accept decisions and policies made on the basis of results from simulation models, they will need to trust a model and, by consequence, the modeller(s).

Due to a perceived ‘crisis of trust’ in science over the last 20 years, Wilsdon and Willis suggest “scientists have been slowly inching their way towards involving the public in their work” and that we are now on the cusp of a new phase of public engagement that takes it ‘upstream’. This widely used but somewhat vague term refers to the early involvement of the lay public in the processes of scientific investigation. As such, engagement is ‘upstream’ nearer the point at which the research and development agenda is set, as opposed to the ‘downstream’ end at which research results are applied and the consequences examined (see Figure 1).

Figure 1 Public participation in the scientific research process. Recently it has been suggested that public engagement with the scientific process needs to move ‘upstream’ nearer the point at which the research agenda is set. After Jackson et al.

Whereas previously the theory of the ‘public understanding of science’ was a deficit model suggesting that the public would trust science ‘if only they understood it’, the contemporary shift is towards engagement and dialogue between science and society. The implication of this new turn is that the public will trust science ‘if only they are involved in the process itself’. Recently, Lane et al. advocated this move upstream for forms of environmental modelling that address issues and concerns of rural populations. This position has been criticised as devaluing the worth of science, for patronising the public, and as being a mask for political face-saving or insurance.

Regardless of other areas of science, in the case of developing simulation models for socio-ecological systems the participation of the public is not vulnerable to the first two of these criticisms. Engaging with local stakeholders both to ensure a model is built on a logically and factually coherent foundation and to ensure it examines the appropriate questions and scenarios is of great value to the modelling process and should improve representation of the empirical system. By contributing to successful iterations of this process, local stakeholders will gain both trust and understanding. However, the inclusion of local stakeholders in the modelling process does raise the issue of expertise.

Paralleling the three phases suggested by Wilsdon and Willis, Collins and Evans have suggested we are entering a third wave in the sociology of science. This third wave demands a shift from an emphasis on technical decision-making and truth to expertise and experience. Collins and Evans suggest there are three types of expertise in technical decision-making (i.e. decision-making at the intersection of science and politics): ‘No Expertise’, ‘Interactional Expertise’, and ‘Contributory Expertise’.

Individuals possessing interactional expertise are able to interact ‘interestingly’ with individuals undertaking the science, but not to contribute to the activities of science itself (contributory expertise). Brian Wynne’s well-known study of the (inadequate) interaction between Cumbrian sheep farmers and UK government scientists investigating the ecological impacts of the Chernobyl disaster is a prime example of a situation in which two parties possessed contributory expertise, but neither interactional expertise. As a result, the ‘certified’ expertise of the government scientists was given vastly more weight than the ‘non-certified’ expertise of the farmers (to the detriment of the accuracy of knowledge produced). Such non-certified expertise might also be termed ‘experience-based’ expertise, arising as it does from the day-to-day experiences of particular individuals.

The importance of considering non-certified, contributory expertise is particularly acute for SESMs. Specifically, local stakeholders are likely to be an important, if not the primary, source of knowledge and understanding regarding socio-economic processes and decision-making within the study area. Furthermore, the particular nature of the interactions between human activity and ecological (and other biophysical) processes within the study area will be best understood and incorporated into the simulation model via engagement with stakeholders. This local knowledge will be vital to ensure the logical and factual foundations of the model are as sound as possible.

Furthermore, engagement with local stakeholders will highlight model omissions, areas for improved representation, and guide application of the model. It provides an opportunity to enlighten experts as to the ‘blind spots’ in their knowledge and questions. As such, the local stakeholders become an ‘extended peer community’, lending alternative forms of knowledge and expertise to the model (and research) validation process than that of the scientific peer community. This knowledge and expertise may be less technical and objective than that of the scientific community, but this nature does not necessarily reduce its relevance or utility to the modelling of a system that contains human values and subjects.

I pursued this idea of stakeholder participation in the modelling I undertook for my PhD. Early in the development of my agent-based model of land use decision-making, local stakeholders were interviewed about how they made decisions and their understanding of landscape dynamics. Upon completion of model construction I went to talk with stakeholders about the model, as they offered the prime source of criticism of the model’s representation of their decision-making activities. By engaging with these stakeholders a form of qualitative, reflexive model validation was performed that overcame some of the problems of a more deductive approach.

BSG – Modelling Human Impacts on Geomorphic Processes

This week sees the Annual Conference of the British Society for Geomorphology (BSG – formerly the British Geomorphological Research Group, BGRG). Running from Wednesday 4th to Friday 6th, the conference is being held at the University of Birmingham in the UK. With the theme Geomorphology: A 2020 Vision, recent developments and advances in the field, such as models and modelling approaches, will be explored and debated, and the potential to exploit emerging approaches to solve key challenges throughout pure and applied Geomorphology will be discussed.

With these recent and future advances in mind, one of my PhD advisors, Prof. John Wainwright, will present a paper entitled Modelling Human Impacts on Geomorphic Processes which contains work originating from my thesis. He’ll be presenting it in the first session of Wednesday afternoon, Process Modelling: Cross-Cutting Session. I’m sure it will turn out to be an interesting session, and one that continues the recent thirst for inter- and cross-disciplinary research. Here’s the abstract:

Modelling Human Impacts on Geomorphic Processes
John Wainwright and James Millington

Despite the recognition that human impacts play a strong – if not now predominant – rôle in vegetation and landscape evolution, there has been little work to date to integrate these effects into geomorphic models. This inertia has been the result partly of philosophical considerations and partly due to practical issues.

We consider different ways of integrating human behaviour into numerical models and their limitations, drawing on existing work in artificial intelligence. Practical computing issues have typically meant that most work has been very simplistic. The difficulty of estimating time-varying human impacts has commonly led to the use of relatively basic scenario-based models, particularly over the longer term. Scenario-based approaches suffer from two major problems. First, they are typically static, so that there is no feedback between the impact and its consequences, even though the latter might often lead to major behavioural modifications. Secondly, there is an element of circularity in the arguments used to generate scenarios for understanding past landform change, in that changes are known to have happened, so that scenarios big enough to produce them are often generated without considering the range of possible alternatives.

In this paper we take examples from two systems operating in different contexts and timescales, but employing a similar overall approach. First, we consider human occupations in prehistoric Europe, in particular in relation to the transition from hunter-gatherer to simple agricultural strategies. The consequences of this transition for patterns of soil erosion are investigated. Secondly, an example from modern Spain will be used to evaluate the effects of farmers’ decision-making processes on land use and vegetation cover, with subsequent impacts on fire régime. From these agent-based models and from other examples in the literature, conclusions will be drawn as to future progress in developing these models, especially in relation to model definition, parameterization and testing.

Call for Papers: Environmental Micro-simulation

This call for papers for a special issue of Ecological Complexity addresses some of the issues I’ve been discussing recently, and hopes to present examples of multi-model approaches to assessing environmental simulation models. If I’d seen this earlier I might have tried to put something together. As it is I’ll just have to keep an eye open for the issue when it comes out sometime next year.

Call for Papers

Ecological Complexity is pleased to announce a special issue on: Environmental micro-simulation: From data approximation to theory assessment

Spatial micro-simulation has recently become a mainstream element in environmental studies. Essentially, different models, representing the same phenomena, are being extensively published and the “next step” sought is hypothesis testing, regarding the factors that determine system dynamics. However, the problem arises that assessment of environmental theories using spatial micro-simulation lacks a leading paradigm. While the Occam’s razor of positivism, which works perfectly in physics and chemistry, demands datasets covering the entire space of model parameters, the experimental abilities of environmentalists are limited and the data collected in the field represent only a small part of the always multi-dimensional parameter space. Consequently, any given model can be considered as merely approximating the few data sets available for verification and its theoretical validity is thus brought into question.

To overcome this limitation, we propose to generate a virtual world that will allow hypothesis testing based on environmental theory. That is, we propose to implement micro-simulation models using high-resolution GIS databases and use them as a surrogate for reality, instead of the limited empirical database. GIS enables a realistic-looking virtual world to be generated that, unlike the real one, provides the parameters characteristic of every trajectory. The almost unlimited data that can be generated from such a virtual world can then be used to assess our ability to extract rules and dependencies, estimate parameters and, finally, make applicable forecasts.

This special issue will focus on investigating models as representations of environmental theory with the help of a combination of real data and artificial worlds. We invite innovative research papers that employ different high-resolution models for generating virtual worlds, comparing them to each other, with the aim being to develop a better understanding of environmental theory. Examples can be studies of a model’s robustness, a comparative study of dynamic models, investigation of the limitations of data fitting methods and of a model’s sensitivity to changes in spatial and temporal resolution.

Scope
All sorts of micro-simulation, including cellular automata, agent-based systems, fuzzy systems, ANN and genetic algorithms, are welcome. The environmental systems of interest include, but are not limited to:

  • Complex ecosystems
  • Landscape ecology
  • Terrain analysis and landscape evolution
  • Agriculture and pastoralism
  • Human-environment interaction
  • Land-use and land-cover changes
  • Urban dynamics

Submission instructions
Abstracts of 2 pages in length should be submitted to the Guest Editors by July 14, 2007. The review process of those abstracts considered to be the most relevant will continue and authors will be required to upload the full manuscript to the Ecological Complexity website by November 1, 2007.

Guest Editors
Tal Svoray
Ben-Gurion University of the Negev,
tsvoray@bgu.ac.il

Itzhak Benenson
Tel Aviv University,
bennya@post.tau.ac.il
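The ‘virtual world’ exercise the editors describe can be caricatured in a few lines. In this sketch the generating process, its parameter, the noise level and the sample sizes are all invented for illustration: because the ‘law’ of the virtual world is known by construction, we can say definitively how well an estimation method recovers it, something a handful of field observations can never tell us.

```python
import numpy as np

# Illustrative sketch of the 'virtual world' idea: generate abundant
# synthetic data from a process whose parameters are known by
# construction, then ask whether an estimation method recovers them.
rng = np.random.default_rng(7)
true_beta = 0.8                       # the known 'law' of the virtual world

# Virtual world: effectively unlimited trajectories from the known process.
x = rng.uniform(0, 1, 10_000)
y = true_beta * x + rng.normal(0, 0.1, x.size)

# Field campaign: the analyst only ever sees a handful of observations.
idx = rng.choice(x.size, 12, replace=False)

beta_full = np.polyfit(x, y, 1)[0]             # estimate from the virtual world
beta_field = np.polyfit(x[idx], y[idx], 1)[0]  # estimate from sparse 'field' data

print(f"true {true_beta:.2f} | virtual-world {beta_full:.2f} "
      f"| sparse-field {beta_field:.2f}")
```

With abundant data the estimate converges on the known value; with twelve points it may or may not, and crucially, with real field data we would have no way of knowing which.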

Relativism in Environmental Simulation Modelling

Partly as a result of the epistemological problems described in my previous few posts, Keith Beven has put forward a modelling philosophy that accepts uncertainty and a more relativist perspective. This relativist approach demands greater emphasis on pluralism, the use of multiple hypotheses, and probabilistic approaches when formulating and parameterising models. When pressed to comment further on his meaning of relativism, Beven highlights the problems of rigidly objective measures of model performance and of ‘observer dependence’ throughout the modelling process:

“Claims of objectivity will often prove to be an illusion under detailed analysis and for general applications of environmental models to real problems and places. Environmental modelling is, therefore, necessarily relativist.”

Beven suggests the sources of relativistic operator dependencies include:

  1. Operator dependence in setting up one or more conceptual model(s) of the system, including subjective choices about system structure and how it is closed for modelling purposes; the processes and boundary conditions it is necessary to include and the ways in which they are represented.
  2. Operator dependence in the choice of feasible values or prior distributions (where possible) for ‘free’ parameters in the process representations, noting that these should be ‘effective’ values that allow for any implicit scale, nonlinearity and heterogeneity effects.
  3. Operator dependence in the characterization of the input data used to drive the model predictions and the uncertainties of the input data in relation to the available measurements and associated scale and heterogeneity effects.
  4. Operator dependence in deciding how a model should be evaluated, including how predicted variables relate to measurements, characterization of measurement error, the choice of one or more performance measures, and the choice of an evaluation period.
  5. Operator dependence in the choice of scenarios for predictions into the future.
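Beven’s practical response to this pervasive operator dependence is his GLUE (Generalised Likelihood Uncertainty Estimation) methodology. The sketch below is a minimal illustration in that spirit, not Beven’s implementation: the toy storage model, the uniform priors and the behavioural threshold are all invented here, which is precisely the point, since each is an operator choice of the kind listed in items 2 and 4 above.

```python
import numpy as np

# GLUE-style sketch: sample parameter sets from subjective priors, keep the
# 'behavioural' ones, and report ensemble bounds rather than one best model.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)
obs = 2.0 * (1 - np.exp(-0.3 * t)) + rng.normal(0, 0.05, t.size)  # synthetic 'observations'

def toy_model(t, k, c):
    """Toy first-order storage model: output rises to capacity c at rate k."""
    return c * (1 - np.exp(-k * t))

# 1. Sample parameter sets from (subjectively chosen) uniform priors.
n = 5000
k = rng.uniform(0.05, 1.0, n)
c = rng.uniform(0.5, 5.0, n)

# 2. Score each set against the observations (the measure is another choice).
sims = toy_model(t[None, :], k[:, None], c[:, None])
rmse = np.sqrt(np.mean((sims - obs) ** 2, axis=1))

# 3. Retain 'behavioural' sets below an operator-chosen threshold.
behavioural = rmse < 0.1
print(f"{behavioural.sum()} of {n} parameter sets judged behavioural")

# 4. Report prediction bounds from the behavioural ensemble.
lo, hi = np.percentile(sims[behavioural], [5, 95], axis=0)
print(f"5-95% bounds at t=10: [{lo[-1]:.2f}, {hi[-1]:.2f}]")
```

Changing the priors, the performance measure or the threshold changes which parameter sets survive and hence the reported uncertainty, making the operator dependence explicit rather than hiding it behind a single ‘optimal’ model.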

These operator dependencies have been highlighted in the past, but have re-emerged in the thoughts of geographers (Demeritt, Brown, O’Sullivan, Lane et al.), environmental scientists (Oxley and Lemon), social scientists (Agar) and philosophers of science (Collins, Winsberg).

Notably, although with reference to experimental physics rather than environmental simulation modelling, Collins identified the problem of the ‘experimenter’s regress’. This problem states that a successful experiment occurs when experimental apparatus is functioning properly – but in novel experiments the proper function of the apparatus can only be confirmed by the success or failure of the experiment. So in situations at the boundaries of established knowledge and theory, not only are hypotheses contested, but so too are the standards and methods by which those hypotheses are confirmed or refuted. As a result, Collins suggests experimentation becomes a ‘skilful practice’ and that experimenters accept results based not on epistemological or methodological grounds, but on a variety of social (e.g. group consensus) and expert (e.g. perceived utility) factors.

This stance is echoed in many respects by Winsberg’s ‘epistemology of simulation’, which suggests simulation is a ‘motley’ practice and has numerous ingredients of which theoretical knowledge is only one. The approximations, idealisations and transformations used by simulation models to confront analytically intractable problems (often in the face of sparse data), need to be justified internally (within the model construction process) on the basis of existing theory, available data, empirical generalisations, and the modeller’s experience of the system and other attempts made to model it.

Similarly, Brown suggests that in the natural sciences uncertainty is rarely viewed as being due to the interaction of social and physical worlds (though Beven’s environmental modelling philosophy outlined above does) and that modellers of physical environmental processes might learn from the social sciences where the process of gaining knowledge is understood to be important for assessing uncertainty.

However, whilst an extreme rationalist perspective prevents validation and useful analysis of the utility of a model, its output, and the resulting knowledge (because of problems like affirming the consequent), so too does an extreme relativist stance in which model and model builder are understood to be inseparable. Rather, as Kleindorfer et al. suggest, modellers need to develop the means to increase the credibility of the model such that “meaningful dialogue on a model’s warrantability” can be conducted. How and why this might be achieved will be discussed in future posts.

Affirming the Consequent

A third epistemological problem of knowing whether a given (simulation) model structure is appropriate, after Equifinality and Interactive Kinds, regards the comparison of model results with real-world empirical data. Comparison of models’ predictions with empirical events has frequently been used in an attempt to show that the model structure is an accurate representation of the system being modelled (i.e. to demonstrate it is ‘true’). Such an idea arises from the hypothetico-deductive scientific method of isolating a system and then devising experiments to prove a hypothesis logically, via deduction. As I’ve discussed, such an approach may be useful in closed laboratory-type situations and systems, but less so in open systems.

The issue here is that predictions about real-world environmental systems are temporal predictions about events occurring at explicit points in time or geographical space, not logical predictions that are independent of space and time and that allow the generation of science’s ‘universal laws’. These temporal predictions have often been treated with the same respect given to the logical prediction of the hypothetico-deductive method. However, as Naomi Oreskes points out, it is unclear whether the comparison of a temporal prediction produced by a simulation model with empirical events is a test of the input data, the model structure, or the established facts upon which the structure is based. Furthermore, if the model is refuted (i.e. temporal predictions are found to be incorrect) given the complexity of many environmental simulation models it would be hard to pin-point which part of the model was at fault.

In the case of spatial models, the achievement of partially spatially accurate prediction does little to establish where or why the model went wrong. Even if the model is able to predict observed events, this is no guarantee that the model will be able to predict into the future, given that it cannot be guaranteed that the stationarity assumption will be maintained. This assumption is that the processes being modelled are constant through time and space within the scope of the model. Regardless, Oreskes et al. (1994) have argued that temporal prediction is not possible with numerical simulation models of open, middle-numbered systems because of theoretical, empirical, and parametric uncertainties within the model structure. As a consequence, Oreskes et al. (1994) warn that numerical simulation modellers must beware of committing the fallacy of ‘affirming the consequent’: deeming a model valid (i.e. true) simply because it reproduces the observed real-world data.
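The trap is easy to reproduce in a toy setting. In the hypothetical sketch below (the ‘true’ process, the two candidate models and all the numbers are invented for illustration), two structurally different models reproduce the same sparse calibration data almost equally well, so ‘successful’ comparison with observations cannot tell us which structure is right, and the two diverge badly once asked to predict beyond the observed record:

```python
import numpy as np

# Two structurally different models calibrated to the same sparse 'observations'.
rng = np.random.default_rng(42)
t_obs = np.linspace(0, 2, 8)                        # short calibration window
truth = np.exp(0.5 * t_obs)                          # the unobservable 'real' process
obs = truth + rng.normal(0, 0.02, t_obs.size)        # noisy observations

# Model A: exponential (fit in log space); Model B: quadratic polynomial.
fit_a = np.polyfit(t_obs, np.log(obs), 1)
fit_b = np.polyfit(t_obs, obs, 2)

def model_a(t): return np.exp(fit_a[1]) * np.exp(fit_a[0] * t)
def model_b(t): return np.polyval(fit_b, t)

# Both 'validate' against the calibration data (both RMSEs are small)...
rmse_a = np.sqrt(np.mean((model_a(t_obs) - obs) ** 2))
rmse_b = np.sqrt(np.mean((model_b(t_obs) - obs) ** 2))
print(f"calibration RMSE  A: {rmse_a:.3f}  B: {rmse_b:.3f}")

# ...but diverge sharply when asked to predict beyond the observed record.
t_future = 6.0
print(f"prediction at t={t_future}:  A: {model_a(t_future):.1f}  "
      f"B: {model_b(t_future):.1f}  truth: {np.exp(0.5 * t_future):.1f}")
```

Concluding from the good calibration fit that either structure is ‘true’ would be exactly the fallacy Oreskes et al. describe.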

Initial Michigan UP Ecological Economic Modelling Webpage


We now have a very basic webpage online, (very) briefly outlining the Michigan UP Ecological-Economic Modeling project. This is just so that we have an online presence for now – in time we will develop this into a much more comprehensive document detailing the model, its construction and use. Hopefully, at some point in the future we’ll also mount a version of the model online. I’ll keep you posted on the online development of the project.

Critical Realism for Environmental Modelling?

As I’ve discussed before, Critical Realism has been suggested as a useful framework for understanding the nature of reality (ontology) for scientists studying both the environmental and social sciences. The recognition of the ‘open’ and middle-numbered nature of real world systems has led to a growing acceptance of realist (and relativist – more on that in a few posts’ time) perspectives toward the modelling of these systems in the environmental and geographical sciences.

To re-cap, the critical realist ontology states that reality exists independently of our knowledge, and that it is structured into three levels: real natural generating mechanisms; actual events generated by those mechanisms; and empirical observations of actual events. Whilst mechanisms are time- and space-invariant (i.e. are universal), actual events are not, because they are realisations of the real generating mechanisms acting in particular conditions and contingent circumstances. This view seems to fit well with the previous discussion on the nature of ‘open’ systems – identical mechanisms will not necessarily produce identical events at different locations in space and time in the real world.

Richards initiated debate on the possibility of adopting a critical realist perspective toward research in the environmental sciences by criticising emphasis on rationalist (hypothetico-deductive) methods. The hypothetico-deductive method states that claims to knowledge (i.e. theories or hypotheses) should be subjected to tests that are able to falsify those claims. Once a theory has been produced (based on empirical observations) a consequence of that theory is deduced (i.e. a prediction is made) and an experiment constructed to examine whether the predicted consequences are observed. By replicating experiments credence is given to the theory and knowledge based upon it (i.e. laws and facts) is held as provisional until evidence is found to disprove the theory.

However, critical realism does not value regularity and replication as highly as rationalism. The separation of real mechanisms from empirical observations, via actual events, means that “What causes something to happen has nothing to do with the number of times we have observed it happening”. Thus, in the search for the laws of nature, a rationalist approach leaves open the possibility of the creation of laws as artefacts of the experimental (or model) ‘closure’ of the inherently open system it seeks to represent (more on model ‘closure’ next time).

The separation of the three levels of reality means that whilst reality exists objectively and independently, we cannot observe its generating mechanisms directly. This separation causes a problem – how can science progress toward understanding the true nature of reality if the real world is unobservable? How do critical realists assess whether they have reached the real underlying mechanisms of a system and can stop studying it?

Whilst critical realism offers reasons for why the nature of reality makes the modelling of ‘open’ systems tricky for scientists, it doesn’t seem to provide a useful method by which to overcome the remaining epistemological problem of knowing whether a given (simulation) model structure is appropriate. In the next few posts I’ll examine some of these epistemological issues (equifinality, looping effects, and affirming the consequent) before switching to examine some potential responses.

Validating Models of Open Systems

A simulation model is an internally logically-consistent theory of how a system functions. Simulation models are currently recognised by environmental scientists as powerful tools, but the ways in which these tools should be used, the questions they should be used to examine, and the ways in which they can be ‘validated’ are still much debated. Whether a model aims to represent an ‘open’ or a ‘closed’ system has implications for the process of validation.

Issues of validation and model assessment are largely absent in discussions of abstract models that purport to represent the fundamental underlying processes of ‘real world’ phenomena such as wildfire, social preferences and human intelligence. These ‘metaphor models’ do not require empirical validation in the sense that environmental and earth systems modellers use it, as the very formulation of the system of study ensures it is ‘closed’. That is, the system the model examines is logically self-contained, neither influenced by nor interactive with outside statements or phenomena. The modellers do not claim to know much about the real world system which their model is purported to represent, and do not claim their model is the best representation of it. Rather, the modelled system is related to the empirical phenomena via ‘rich analogy’ and investigators aim to elucidate the essential system properties that emerge from the simplest model structure and starting conditions.

In contrast to these virtual, logically closed systems, empirically observed systems in the real world are ‘open’. That is, they are in a state of disequilibrium with flows of mass and energy both into and out of them. Examples in environmental systems are flows of water and sediment into and out of watersheds and flows of energy into (via photosynthesis) and out of (via respiration and movement) ecological systems. Real world systems containing humans and human activity are open not only in terms of conservation of energy and mass, but also in terms of information, meaning and value. Political, economic, social, cultural and scientific flows of information across the boundaries of the system cause changes in the meanings, values and states of the processes, patterns and entities of each of the above social structures and knowledge systems. Thus, system behaviour is open to modification by events and phenomena outside the system of study.

Alongside being ‘open’, these systems are also ‘middle-numbered’. Middle-numbered systems differ from small-numbered systems (controlled situations with few interacting components, e.g. two billiard balls colliding) that can be described and studied well using Cartesian methods, and large-numbered systems (many, many interacting components, e.g. air molecules in a room) that can be described and studied using techniques from statistical physics. Rather, middle-numbered systems have many components, the nature of interactions between which is not homogeneous and is often dictated or influenced by the condition of other variables, themselves changing (and potentially distant) in time and space. Such a situation might be termed complex (though many perspectives on complexity exist). Systems at the landscape scale in the real world are complex and middle-numbered. They exist in a unique time and place. In these systems history and location are important and their study is necessarily a ‘historical science’ that recognises the difficulty of analysing unique events scientifically through formal, laboratory-type testing and the hypothetico-deductive method. Most real-world systems possess these properties, and coupled human-environment systems are a prime example.

Traditionally, laboratory science has attempted to isolate real-world systems such that they become closed and amenable to the hypothetico-deductive method. The hypothetico-deductive method is based upon logical prediction of phenomena independent of time and place and is therefore useful for generating knowledge about logically, energetically and materially ‘closed’ systems. However, the ‘open’ nature of many real-world environmental systems (which cannot be taken into the laboratory and instead must be studied in situ) is such that the hypothetico-deductive method is often problematic to implement in order to generate knowledge about environmental systems from simulation models. Any conclusions drawn using the hypothetico-deductive method for open systems using a simulation model will implicitly be about the model rather than the open system it represents. Validation has also frequently been used, incorrectly, as synonymous with demonstrating that the model is a truly accurate representation of the real world. By contrast, validation in the discussion presented in this series of blog posts refers to the process by which a model constructed to represent a real-world system has been shown to represent that system well enough to serve that model’s intended purpose. That is, validation is taken to mean the establishment of model legitimacy – usually of arguments and methods.

In the next few posts I’ll examine the rise of (critical) realist philosophies in the environmental sciences and environmental modelling and will explore the philosophy underlying these problems of model validation in more detail.

Validating and Interpreting Socio-Ecological Simulation Models

Over the next 9 posts I’ll discuss the validation, evaluation and interpretation of environmental simulation modelling. Much of this discussion is taken from chapter seven of my PhD thesis, arising out of my efforts to model the impacts of agricultural land use change on wildfire regimes in Spain. Specifically, the discussion and argument are focused on simulation models that represent socio-ecological systems. Socio-Ecological Simulation Models (SESMs), as I will refer to them, are those that represent explicitly the feedbacks between the activities and decisions of individual actors and their social, economic and ecological environments.

To represent such real-world behaviour, models of this type are usually spatially explicit and agent-based (e.g. Evans et al., Moss et al., Evans and Kelley, An et al., Matthews and Selman) – the model I developed is an example of a SESM. One motivating question for the discussion that follows is, considering the nature of the systems and issues they are used to examine, how we should go about approaching model evaluation or ‘validation’. That is, how do we identify the level of confidence that can be placed in the knowledge produced by the use of a SESM? A second question is, given the nature of SESMs, what approaches and tools are available and should be used to ensure models of this type provide the most useful knowledge to address contemporary environmental problems?

The discussion that follows adopts a (pragmatic) realist perspective (in the tradition of Richards and Sayer) and recognises the importance of the open, historically and geographically contingent nature of socio-ecological systems. The difficulties of attempting to use general rules and theories (i.e. a model) to investigate and understand a unique place in time are addressed. As increasingly acknowledged in environmental simulation modelling (e.g. Sarewitz et al.), socio-ecological simulation modelling is a process in itself in which human decisions come to the fore – both because human decision-making is being modelled but also, importantly, because modellers’ decisions during model construction are a vital component of the process.

If these models are intended to inform policy-makers and stakeholders about the potential impacts of human activity, the uncertainty inherent in them needs to be managed to ensure their effective use. Fostering trust and understanding via a model that is practically adequate for purpose may aid traditional scientific forms of model validation and evaluation. The list below gives the titles of the posts that will follow over the next couple of weeks (and will become links when the post is online).

The Nature of Open Systems
Realist Philosophy in the Environmental Sciences
Equifinality
Interactive vs. Indifferent Kinds
Affirming the Consequent
Relativism in Modelling
Alternative Model Assessment Criteria
Stakeholder Participation and Expertise
Summary

getting my head round things

Now that I’m into my second week at MSU, things have calmed down a little. I’ve ploughed through most of the necessary admin, met many of the people I’ll be working with here at CSIS and throughout MSU (although, it being summer, campus is quiet right now – the undergrads are gone and the postgrads are away on their fieldwork), and finally got my apartment into a liveable state. The next few weeks will no doubt be spent really getting my head around what we’re aiming to achieve with this integrated ecological-economic modelling project. For example, during the next month or two I’ll take a trip up to our study area to get a feel for the landscape, see the experimental plots that have been put in place previously, and gain a better understanding regarding the subsequent effects of timber harvesting. Also I plan on meeting and interviewing several key management stakeholders from organisations such as Michigan’s Department of Natural Resources and The Nature Conservancy to get their perspective on the landscape and what they might gain from our work. I’ve also been examining some of the tools that we hope to utilise and build upon, such as the USFS’ Forest Vegetation Simulator.

So whilst I get my head around exactly what this new project is all about, I’ll continue to blog about some of the work coming out of my PhD thesis. I’ve been threatening to do this for a while, and now I really mean it. Specifically, I’ll walk through the later stages of my thesis where I explored the potential of more reflexive forms of model validation – seeing the modelling process as an end in itself, a learning process, rather than a means to an end (i.e. the model) which is then used to ‘predict’ the future. I’ll discuss the philosophy underlying this perspective before re-examining my efforts to engage the model I produced with local stakeholders after the model had been ‘completed’ with their minimal input.

And of course, I’ll throw in the odd comment to let you know how things are going here in this new world I’ve recently landed in. Like my trip to the grey and windswept Lake Michigan at the weekend – I’m going to have to look into this kite-surfing stuff…
