Predicting 2009

Over the holiday period the media offer us plenty of fodder to discuss the past year’s events and what the future may hold. Whether it’s current affairs, music, sport, economics or any other aspect of human activity, most media outlets have something to say about what people did that was good, what they did that was bad, and what they’ll do next – all in the hope of keeping their sales up.

Every year The Economist publishes a collection of forecasts and predictions for the year ahead. The views and opinions of journalists, politicians and business people accompany interactive maps and graphs that provide numerical analysis. But how good are these forecasts and predictions? And what use are they? This year The Economist stopped to look back on how well it performed:

“Who would have thought, at the start of 2008, that the year would see crisis engulf once-sturdy names from Freddie Mac and Fannie Mae to AIG, Merrill Lynch, HBOS, Wachovia and Washington Mutual (WaMu)?

Not us. The World in 2008 failed to predict any of this. We also failed to foresee Russia’s invasion of Georgia (though our Moscow correspondent swears it was in his first draft). We said the OPEC cartel would aim to keep oil prices in the lofty range of $60-80 a barrel (the price peaked at $147 in July)…”

And on the list goes. Not that any of us are particularly surprised, are we? So why should we bother to read their predictions for the next year? In its defence, The Economist offers a couple of points. First, the usual tactic (for anyone defending their predictions) of pointing out what they actually did get right (slumping house prices, interest-rate cuts, etc). But then they highlight a perspective which I think is almost essential when thinking about predictions of future social or economic activity:

“The second reason to carry on reading is that, oddly enough, getting predictions right or wrong is not all that matters. The point is also to capture a broad range of issues and events that will shape the coming year, to give a sense of the global agenda.”

Such a view is inherently realist. Given the multitudes of interacting elements and potential influences affecting economic systems, and given that it is an ‘open’ historical system, producing a precise prediction about future system states is nigh-on impossible. Naomi Oreskes has highlighted the difference between ‘logical prediction’ (if A and B then C) and ‘temporal prediction’ (event C will happen at time t + 10), and that distinction certainly applies here [I’m surprised I haven’t written about it on this blog before – I’ll try to remedy that soon]. Rather than simply developing models or predictions with the hope of accurately matching the timing and magnitude of future empirical events, I argue that we will be better placed (in many circumstances related to human social and economic activity) to use models and predictions as discussants that lead to better decision-making, and as means to develop an understanding of the relevant causal structures and mechanisms at play.

In a short section of his recent book and TV series, The Ascent of Money, Niall Ferguson talks about the importance of considering history in economic markets and decision-making. He presents the example of Long Term Capital Management (LTCM) and their attempt to use mathematical models of the global economic system to guide their trading decision-making. In Ferguson’s words, their model was based on the following set of assumptions about how the system worked:

“Imagine another planet – a planet without all the complicating frictions caused by subjective, sometimes irrational human beings. One where the inhabitants were omniscient and perfectly rational; where they instantly absorbed all new information and used it to maximise profits; where they never stopped trading; where markets were continuous, frictionless and completely liquid. Financial markets on this planet would follow a ‘random walk’, meaning that each day’s prices would be quite unrelated to the previous day’s but would reflect all the relevant information available.” p.320

Using these assumptions about how the world works, the Nobel prize-winning mathematicians Myron Scholes and Robert C. Merton derived a mathematical model. Initially the model performed wonderfully, allowing returns of 40% on investments for the first couple of years. However, crises in the Asian and Russian financial systems in 1997 and 1998 – not accounted for in the assumptions of the mathematical model – resulted in LTCM losing $1.85 billion through the middle of 1998. The model’s assumptions could not account for these events, and consequently its predictions were inaccurate. As Ferguson puts it:

“…the Nobel prize winners had known plenty of mathematics, but not enough history. They had understood the beautiful theory of Planet finance, but overlooked the messy past of Planet Earth.” p.329

When Ferguson says ‘not enough history’, his implication is that the mathematical model was based on insufficient empirical data. Had the mathematicians used data covering the variability of the global economic system over a longer period of time, their data might have included a stock market downturn similar to that caused by the Asian and Russian economic crises. But a data set for a longer time period would likely have been characterised by greater overall variability, requiring a greater number of parameters and variables to account for that variability. Whether such a model would have performed as well as the model they did produce is questionable, as is the potential to predict the exact timing and magnitude of any ‘significant’ event (e.g. a market crash).

Further, Ferguson points out that the problem with the LTCM model wasn’t just that they hadn’t used enough data to develop it, but that their assumptions (i.e. their understanding of Planet Finance) just weren’t realistic enough to accurately predict Planet Earth over ‘long’ periods of time. Traders and economic actors are not perfectly rational and do not have access to all the data all the time. Such a situation has led (more realistic) economists to develop ideas like bounded rationality.

Assuming that financial traders try to be rational is likely not a bad assumption. But it has been pointed out that “[r]ationality is not tantamount to optimality”, and that in situations where information, memory or computing resources are not complete (as is usually the case in the real world) the principle of bounded rationality is a more worthwhile approach. For example, Herbert Simon recognised that rarely do actors in the real world optimise their behaviour, but rather they merely try to do ‘well enough’ to satisfy their goal(s). Simon termed this non-optimal behaviour ‘satisficing’, the basis for much of bounded rationality theory since. Thus, satisficing is essentially a cost-benefit tradeoff, establishing when the utility of an option exceeds an aspiration level.
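To make the contrast concrete, here is a minimal sketch (my own toy illustration, not drawn from Simon or any of the work cited here) of the difference between an optimising and a satisficing choice rule; the utility function and aspiration level are invented for the example:

```python
import random

def optimise(options, utility):
    """Evaluate every option and return the one with maximum utility."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Return the first option whose utility meets the aspiration level.

    The search stops as soon as a 'good enough' option is found, so the
    cost of evaluating the remaining options is never paid.
    """
    for option in options:
        if utility(option) >= aspiration:
            return option
    # Nothing was good enough: fall back to the best seen (a real satisficer
    # might instead lower their aspiration level and search again).
    return max(options, key=utility)

if __name__ == "__main__":
    rng = random.Random(1)
    offers = [rng.uniform(0, 100) for _ in range(20)]   # e.g. prices offered to a trader
    utility = lambda offer: offer                       # utility = offer value, purely illustrative
    print("Optimiser picks:  %.1f" % optimise(offers, utility))
    print("Satisficer picks: %.1f" % satisfice(offers, utility, aspiration=80))
```

The satisficer stops searching as soon as something ‘good enough’ turns up – which is precisely the cost-benefit tradeoff described above.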

Thinking along the same lines George Soros has developed his own ‘Human Uncertainty Principle’. This principle “holds that people’s understanding of the world in which they live cannot correspond to the facts and be complete and coherent at the same time. Insofar as people’s thinking is confined to the facts, it is not sufficient to reach decisions; and insofar as it serves as the basis of decisions, it cannot be confined to the facts. The human uncertainty principle applies to both thinking and reality. It ensures that our understanding is often incoherent and always incomplete and introduces an element of genuine uncertainty – as distinct from randomness – into the course of events.

“The human uncertainty principle bears a strong resemblance to Heisenberg’s uncertainty principle, which holds that the position and momentum of quantum particles cannot be measured at the same time. But there is an important difference. Heisenberg’s uncertainty principle does not influence the behavior of quantum particles one iota; they would behave the same way if the principle had never been discovered. The same is not true of the human uncertainty principle. Theories about human behavior can and do influence human behavior. Marxism had a tremendous impact on history, and market fundamentalism is having a similar influence today.” Soros (2003) Preface

This final point has been explored in more detail by Ian Hacking in his discussion of the differences between interactive and indifferent kinds. Both of these views (satisficing and the human uncertainty principle) implicitly recognise that the context in which an actor acts is important. In the perfect world of Planet Finance and its associated mathematical models, context is non-existent.

In response to the problems encountered by LTCM, “Merrill Lynch observed in its annual reports that mathematical risk models ‘may provide a greater sense of security than warranted; therefore, reliance on these models should be limited’”. I think it is clear that humans need to make decisions (whether they be social, economic, political, or about any resource) based on human understanding derived from empirical observation. Quantitative models will help with this but cannot be used alone, partly because (as numerous examples have shown) it is very difficult to make (accurate) predictions about future human activity. Likely there are general behaviours that we can expect and use in models (e.g. the aim of traders to make profit). But how those behaviours play out in the different contexts provided by the vagaries of day-to-day events and changes in global economic, political and physical conditions will require multiple scenarios of the future to be examined.

My personal view is that one of the primary benefits of developing quantitative models of human social and economic activity is that they allow us to make explicit our implicitly held models. Developing quantitative models forces us to be structured about our worldview – writing it down (often in computer code) allows others to scrutinise that model, something that is not possible if the model remains implicit. In some situations, such as private financial strategy-making, this openness may not be welcome (because it is not beneficial for a competitor to know your model of the world). But in other decision-making situations, for example those concerning environmental resources, this approach will be useful to foster greater understanding of how the ‘experts’ think the world works.

By writing down their expectations for the forthcoming year the experts at The Economist are making explicit their understanding of the world. It’s not terribly important that they don’t get everything right – there’s very little possibility that will happen. What is important is that it helps us to think about potential alternative futures, what factors are likely to be most important in determining future events, how these factors and events are (inter)related, and what the current state of the world implies for the likelihood of different future states. This information might then be used to shape the future as we would like it to be, based on informed expectations. Quantitative models of human social and economic activity also offer this type of opportunity.

Modelling Pharmaceuticals in the Environment

On Friday I spoke at a workshop at MSU that examined a subject I’m not particularly well acquainted with. Participants in Pharmaceuticals in the Environment: Current Trends and Research Priorities convened to consider the natural, physical, social, and behavioral dimensions of the fate and impact of pharmaceutical products in the natural environment. The primary environmental focus of this issue is the presence of toxins in our water supply as a result of the disposal of human or veterinary medicines. I was particularly interested in what Dr. Shane Snyder had to say about the water issues facing Las Vegas, Nevada.

So what did I have to do with all this? Well the organisers wanted someone from our research group at the Center for Systems Integration and Sustainability to present some thoughts on how modelling of coupled human and natural systems might contribute to the study of this issue. The audience contained experts from a variety of disciplines (including toxicologists, chemists, sociologists, political scientists) and given my limited knowledge about the subject matter I decided I would keep my presentation rather broad in message and content. I drew on several of the topics I have discussed previously on this blog: the nature of coupled human-natural systems, reasons we might model, and potential risks we face when modelling CHANS.

In particular, I suggested that if prediction of a future system state is our goal, we will be best served by focusing our modelling efforts on the natural system and then using that model with scenarios of future human behaviour to examine the plausible range of states the natural system might take. Alternatively, if we view modelling as an exclusively heuristic tool, we might better envisage the modelling process as a means to facilitate communication between disparate groups of experts or publics, and to explore what different conceptualisations allow and prevent from happening with regard to our stewardship or management of the system. Importantly, in both cases the act of making our implicitly held models of how the world works explicit, by laying down a formal model structure, is the primary value of modelling CHANS.

There was brief talk towards the end of the meeting about setting up a workshop website that might even contain audio/video recordings of presentations and discussions that took place. If such a website appears I’ll link to it here. In the meantime, the next meeting I’ll be attending on campus is likely to be the overview of Coupled Human-Natural Systems discussion in the Networking for Environmental Researchers program.

Why Model?

When asked this question, Joshua Epstein would reply:

‘You are a modeler.’

In his recent article in JASSS he continues:

“Anyone who ventures a projection, or imagines how a social dynamic—an epidemic, war, or migration—would unfold is running some model.

But typically, it is an implicit model in which the assumptions are hidden, their internal consistency is untested, their logical consequences are unknown, and their relation to data is unknown. But, when you close your eyes and imagine an epidemic spreading, or any other social dynamic, you are running some model or other. It is just an implicit model that you haven’t written down.”(1.2-1.3)

Epstein goes on to imply that he thinks evaluating models by showing that their output matches empirical data isn’t a particularly useful test (as I have discussed previously). He emphasises that by making our implicit models explicit, we allow others to scrutinise the logic and coherence of that model and provide the opportunity for others to attempt to replicate it (and its results).

In our paper reviewing concepts and examples of succession-disturbance dynamics in forest ecosystems George Perry and I used the distinction between modelling for explanation and modelling for prediction to structure our discussion. Epstein takes a similar tack, but the majority of his article seems to imply that he is more interested in the former than the latter. He suggests 16 reasons to model other than to predict. These are to:

  1. Explain (very distinct from predict)
  2. Guide data collection
  3. Illuminate core dynamics
  4. Suggest dynamical analogies
  5. Discover new questions
  6. Promote a scientific habit of mind
  7. Bound (bracket) outcomes to plausible ranges
  8. Illuminate core uncertainties
  9. Offer crisis options in near-real time
  10. Demonstrate tradeoffs / suggest efficiencies
  11. Challenge the robustness of prevailing theory through perturbations
  12. Expose prevailing wisdom as incompatible with available data
  13. Train practitioners
  14. Discipline the policy dialogue
  15. Educate the general public
  16. Reveal the apparently simple (complex) to be complex (simple)

After briefly discussing a couple of these points Epstein notably highlights the dictum attributed to George Box: “All models are wrong, but some are useful” (something I hope the students in my class are really beginning to appreciate). This idea leads neatly into Epstein’s final and, for him, most important point:

“To me, however, the most important contribution of the modeling enterprise—as distinct from any particular model, or modeling technique—is that it enforces a scientific habit of mind, which I would characterize as one of militant ignorance—an iron commitment to ‘I don’t know.’ That is, all scientific knowledge is uncertain, contingent, subject to revision, and falsifiable in principle. … One does not base beliefs on authority, but ultimately on evidence. This, of course, is a very dangerous idea. It levels the playing field, and permits the lowliest peasant to challenge the most exalted ruler—obviously an intolerable risk.”(1.16)

So, Why Model? To predict or to explain? As usual that’s probably a false dichotomy. The real point is that there are plenty of reasons to model other than to predict.

eLectures

During the second half of the course I’m teaching at MSU this semester (FW852 Systems Modeling and Simulation) I’ve invited several colleagues to give guest lectures on the modelling work they do. These lectures serve as examples to the students of modeling and simulation in practice, and provide the opportunity to tap the brains of experts in different fields.

One of the speakers I invited was one of my former PhD advisors, Dr. George Perry. George is at the University of Auckland, New Zealand. Rather than pay for him to fly half way around the world we thought we would save some CO2 (and money!) by doing the lecture via internet video conference. As you can see from the photo below we had a video feed from George up on a large screen (you can also see the video feed he had of our room down in the lower right of his screen) with his presentation projected onto a separate screen (at right).


George spoke about research he has done modelling habitat dynamics and fish population persistence in intermittent lowland streams in SE Australia [I’ll link here to his forthcoming paper on this work soon]. The emphasis was on the ecology of the system and how modeling combined with fieldwork can aid understanding and restoration of systems like this.

Everything went pretty well with only a couple of Max Headroom-type stutters (the stutters were purely technical – George’s presentation and material were much more coherent than the ’80s icon!). With the increasing availability of (free) technologies like this (I often use Skype to make video calls with my folks back home, and Google just released their new Voice and Video Chat), no doubt this sort of communication is here to stay. And it looks unlikely that eLectures will stop here. As highlighted this week, academic conferences and lectures in virtual environments like Second Life are beginning to catch on too.

Seeds and Quadtrees

The main reason I haven’t blogged much recently is that all my spare time has been taken up working on revisions to a paper submitted to Environmental Modelling and Software. Provisionally entitled ‘Modelling Mediterranean Landscape Succession-Disturbance Dynamics: A Landscape Fire-Succession Model’, the paper describes the biophysical component of the coupled human-natural systems model I started developing during my PhD studies. This biophysical component is a vegetation state-and-transition model combined with a cellular automaton representing wildfire ignition and spread.

The reviewers of the paper wanted to see some changes to the seed dispersal mechanism in the model. Greene et al. compared three commonly used empirical seed dispersal functions and concluded that the log-normal distribution is generally the most suitable approximation to observed seed dispersal curves. However, dispersal functions based on the negative exponential distribution have also been used. A good example is the LANDIS forest landscape simulation model, which calculates the probability of seed fall (P) in the region between the effective (ED) and maximum (MD) seed dispersal distances from the seed source. For distances from the seed source (x) < ED, P = 0.95. For x > MD, P = 0.001. For all other distances, P is calculated using a negative exponential distribution function with shape parameter b.
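As a rough sketch of how this piecewise rule might look in code (the exponential form e^(−bx) and the parameter values below are my assumptions for illustration, not the exact LANDIS formulation or the values used in our model):

```python
import math

def seed_fall_probability(x, ED, MD, b):
    """Piecewise, LANDIS-style probability of seed fall at distance x from a source."""
    if x <= ED:              # within the effective seeding distance
        return 0.95
    if x > MD:               # beyond the maximum seeding distance
        return 0.001
    return math.exp(-b * x)  # negative exponential decay in between (assumed form)

if __name__ == "__main__":
    ED, MD, b = 30.0, 300.0, 0.02    # hypothetical distances (m) and shape parameter
    for x in (10, 50, 150, 400):
        print("distance %4d m -> P = %.3f" % (x, seed_fall_probability(x, ED, MD, b)))
```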

Recently Syphard et al. modified LANDIS for use in the Mediterranean Type Environment of California. The two predominant pine species in our study area in the Mediterranean Basin have different seed types: the seeds of one (Pinus pinaster) are winged and can travel large distances (~1km), but those of the other (Pinus pinea) are not. In this case a negative exponential distribution is most appropriate. However, research on the dispersal of acorns (from Quercus ilex) found that the distance distribution of acorns was best modeled by a log-normal distribution. I am currently experimenting with these two alternative seed dispersal distributions and comparing them with spatially random seed dispersal (dependent upon the quantity, but not the locations, of seed sources).
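A quick way to get a feel for the difference between the two candidate kernels is simply to draw dispersal distances from each; the parameter values here are invented for illustration and are not those being tested in the model:

```python
import random
import statistics

def exponential_distances(mean_distance, n, rng):
    """Dispersal distances drawn from a negative exponential kernel."""
    return [rng.expovariate(1.0 / mean_distance) for _ in range(n)]

def lognormal_distances(mu, sigma, n, rng):
    """Dispersal distances drawn from a log-normal kernel."""
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

if __name__ == "__main__":
    rng = random.Random(42)
    samples = {
        "negative exponential": exponential_distances(mean_distance=25.0, n=10_000, rng=rng),
        "log-normal": lognormal_distances(mu=3.0, sigma=1.0, n=10_000, rng=rng),
    }
    for name, distances in samples.items():
        distances.sort()
        print("%-20s median: %6.1f m   95th percentile: %6.1f m"
              % (name, statistics.median(distances), distances[int(0.95 * len(distances))]))
```

The log-normal’s heavier tail is what makes it attractive for occasional long-distance dispersal events like acorn transport.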

The main thing that has kept me occupied the last couple of weeks has been implementing these approaches in a manner that is computationally feasible. I need to run and test my model over several hundred (annual) timesteps on a landscape grid of ~1,000,000 pixels. Keeping computation time down, so that model execution does not take hundreds of hours, is clearly important if sufficient model executions are to be run for some form of statistical testing to be possible. A brute-force iteration method was clearly not the best approach.

One of my co-authors suggested I look into the use of quadtrees. A quadtree is a tree data structure often used to partition a two-dimensional space by recursively subdividing regions into quadrants (nodes). A region quadtree partitions a region of interest into four equal quadrants, each of which is subdivided into four subquadrants, and so on down to the finest level of spatial resolution required. The University of Maryland has a nice Java applet example that helps illustrate the concept.

For our seed dispersal purposes, a region quadtree with n levels may be used to represent a landscape of 2^n × 2^n pixels, where each pixel is assigned a value of 0 or 1 depending upon whether or not it contains a seed source of the given type. The distance of any landscape pixel to a seed source can then be quickly calculated using this data structure – starting at the top level, we work our way down the tree querying whether each quadrant contains a pixel (or pixels) that is a seed source. In this way, large areas of the grid can be discounted as not containing a seed source, thereby speeding up the distance calculation.
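A minimal sketch of the kind of structure and query described above (not the actual implementation in the model – the toy landscape, coordinates and class names are invented):

```python
import heapq

class QuadNode:
    """Region quadtree node covering grid[y0:y0+size][x0:x0+size] (size is a power of 2)."""
    def __init__(self, grid, x0, y0, size):
        self.x0, self.y0, self.size = x0, y0, size
        if size == 1:
            self.children = []
            self.has_seed = bool(grid[y0][x0])
        else:
            half = size // 2
            self.children = [QuadNode(grid, x0 + dx, y0 + dy, half)
                             for dy in (0, half) for dx in (0, half)]
            self.has_seed = any(child.has_seed for child in self.children)

    def min_dist2(self, px, py):
        """Squared distance from pixel (px, py) to the nearest pixel of this quadrant."""
        dx = max(self.x0 - px, 0, px - (self.x0 + self.size - 1))
        dy = max(self.y0 - py, 0, py - (self.y0 + self.size - 1))
        return dx * dx + dy * dy

def distance_to_nearest_seed(root, px, py):
    """Best-first search down the tree, discounting quadrants with no seed source."""
    if not root.has_seed:
        return float("inf")
    heap = [(root.min_dist2(px, py), id(root), root)]
    while heap:
        d2, _, node = heapq.heappop(heap)
        if not node.children:               # a leaf pixel that contains a seed source
            return d2 ** 0.5
        for child in node.children:
            if child.has_seed:              # whole seed-free quadrants are never visited
                heapq.heappush(heap, (child.min_dist2(px, py), id(child), child))

if __name__ == "__main__":
    # 8 x 8 (2^3) toy landscape; 1 = pixel contains a seed source of the given species
    grid = [[0] * 8 for _ in range(8)]
    grid[1][6] = grid[5][2] = 1
    root = QuadNode(grid, 0, 0, 8)
    print(distance_to_nearest_seed(root, 7, 7))   # nearest source at (x=2, y=5): ~5.39
```

Because whole quadrants flagged as seed-free are never descended into, the nearest-source query typically touches only a small fraction of the ~1,000,000 pixels.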

Now that I have my QuadTree structure in place, model execution time is much reduced and a reasonable number of model executions should be possible over the next month or so of model testing, calibration and use. My next steps are concerned with pinning down appropriate values for ED and MD in the seed dispersal functions. This process of parameterization will take into account values previously used by similar models in similar situations (e.g. Syphard et al.) and empirical research and data on species found within our study area (e.g. Pons and Pausas). The key thing to keep in mind with these latter studies is their focus on the distribution of individual seeds from individual trees – the spatial resolution of my model is 30m (i.e. each pixel is 30m square). Some translation between values for individuals and the aggregated representation of individuals (in pixels) will likely be required. Hopefully, you’ll see the results in print early next year.

ABM of Mediterranean LUCC Paper Published in JASSS

Apparently blogging is just soooo 2004 and we should just leave it to the pros. The blog you’re reading may not be dead, but it has been anaemic of late. Although this may not be the place to catch breaking news and cutting-edge analysis in the 24-hour current affairs news cycle, it is a place where I can highlight some of my recent thoughts and activities. Maybe others will benefit from these notes, maybe they won’t. But writing things down for public view forces me to refine my thoughts so that I can express them concisely. Hopefully this blog has some life in it yet, and I will try to write soon about what has been taking up all my spare time recently – QuadTrees, seed dispersal and fire.

For now I will just let you know that the paper describing the agent-based model of Mediterranean agricultural Land-Use/Cover Change that I began developing as part of my PhD studies has now officially been published in the latest issue of JASSS.

Millington, J.D.A., Romero-Calcerrada, R., Wainwright, J. and Perry, G.L.W. (2008) An Agent-Based Model of Mediterranean Agricultural Land-Use/Cover Change for Examining Wildfire Risk. Journal of Artificial Societies and Social Simulation 11(4)4 http://jasss.soc.surrey.ac.uk/11/4/4.html

Science Fictions

What’s happened to this blog recently? I used to write things like this and this. All I seem to have posted recently are rather vacuous posts about website updates and TV shows I haven’t watched (yet).

Well, one thing that has prevented me from posting recently has been that I’ve spent some of my spare time (i.e., when I’m not at work teaching or having fun with data manipulation and analysis for the UP modelling project) working on a long-overdue manuscript.

Whilst I was visiting at the University of Auckland back in 2005, David O’Sullivan, George Perry and I started talking about the benefits of simulation modelling over less-dynamic forms of modelling (such as statistical modelling). Later that summer I presented a paper at the Royal Geographical Society Annual Conference that arose from these discussions. We saw this as our first step toward writing a manuscript for publication in a peer review journal. Unfortunately, this paper wasn’t at the top of our priorities, and whilst on occasions since I have tried to sit down to write something coherent, it has only been this month [three years later!] that I have managed to finish a first draft.

Our discussions about the ‘added value’ of simulation modelling have focused on the narrative properties of this scientific tool. The need for narratives in scientific fields that deal with ‘historical systems’ has been recognised by several authors previously (e.g. Frodeman in Geology), and in his 2004 paper on Complexity Science and Human Geography, David suggested that there was room, if not a need, for greater reference to the narrative properties of simulation modelling.

What inspired me to actually sit down and write recently was some thinking and reading I had been doing related to the course I’m teaching on Systems Modelling and Simulation. In particular, I was re-acquainting myself with Epstein’s idea of ‘Generative Social Science’, which seeks to explain the emergence of macroscopic societal regularities (such as norms or price equilibria) arising from the local interaction of heterogeneous, autonomous agents. The key tool for the generative social scientist is agent-based simulation, in which such agents act in a spatially-explicit environment and possess bounded (i.e. imperfect) information and computing power. The aim of the generative social scientist is to ‘grow’ (i.e. generate) the observed macroscopic regularity from the ‘bottom up’. In fact, for Epstein this is the key to explanation – the demonstration of a micro-specification (properties or rules of agent interaction and change) able to generate the macroscopic regularity of interest is a necessary condition for explanation. Describing the final aggregate characteristics and effects of these processes without accounting for how they arose through the interactions of the agents is insufficient in the generativist approach.
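By way of a toy illustration of that ‘grow it’ logic (a Schelling-style relocation rule of my own, not an example from Epstein’s article or from our manuscript), a handful of lines of agent-level code can generate a macroscopic regularity – here, residential segregation – that no individual agent intends:

```python
import random

def like_share(grid, x, y, size):
    """Share of an agent's occupied Moore neighbours (torus) that match its own type."""
    me = grid[y][x]
    neighbours = [grid[(y + dy) % size][(x + dx) % size]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dx, dy) != (0, 0)]
    occupied = [n for n in neighbours if n is not None]
    return sum(n == me for n in occupied) / len(occupied) if occupied else 1.0

def step(grid, size, threshold, rng):
    """Local rule: one randomly chosen unhappy agent relocates to a random empty cell."""
    agents = [(x, y) for y in range(size) for x in range(size) if grid[y][x] is not None]
    x, y = rng.choice(agents)
    if like_share(grid, x, y, size) < threshold:
        empties = [(ex, ey) for ey in range(size) for ex in range(size) if grid[ey][ex] is None]
        if empties:
            ex, ey = rng.choice(empties)
            grid[ey][ex], grid[y][x] = grid[y][x], None

def macro_segregation(grid, size):
    """Macroscopic regularity: mean like-neighbour share across all agents."""
    shares = [like_share(grid, x, y, size)
              for y in range(size) for x in range(size) if grid[y][x] is not None]
    return sum(shares) / len(shares)

if __name__ == "__main__":
    rng = random.Random(0)
    size, threshold = 20, 0.4      # each agent wants just 40% of its neighbours to be like it
    grid = [[rng.choice(["A", "B", None]) for _ in range(size)] for _ in range(size)]
    print("mean like-neighbour share before: %.2f" % macro_segregation(grid, size))
    for _ in range(20_000):
        step(grid, size, threshold, rng)
    print("mean like-neighbour share after:  %.2f" % macro_segregation(grid, size))
```

Agents content with only 40% like neighbours nonetheless ‘grow’ a strongly clustered landscape – the macro regularity is generated by, rather than simply described alongside, the micro-specification.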

As I was reading I was reminded of the recent suggestion of the potential for a Generative Landscape Science. Furthermore, the generative approach really seemed to ring true with the critical realist perspective on investigating the world – understanding that regularity does not imply causation, and that explanation is achieved by identifying causal mechanisms, how they work, and under what conditions they are activated.

Thus, in the paper (or the first draft I’ve written at least – no doubt it will take on several different forms before we submit it for publication!), after discussing the characteristics of the ‘open, middle-numbered’ systems that we study in the ‘historical sciences’, reviewing Epstein’s generative social science, and presenting examples of the application of generative simulation modelling (i.e., discrete element or agent-based) to land-use/cover change, I go on to discuss how a narrative approach might complement quantitative analysis of these models. Specifically, I look at how narratives could (and do) aid model explanation and interpretation, and the communication of these findings to others, and how the development of narratives can help to ‘open up’ the process of model construction to increased scrutiny.

In one part of this discussion I touch upon the keynote speech given by William Cronon at the RGS annual meeting in 2006 about the need for ‘sustainable narratives’ of the current environmental issues we are facing as a global society. I also briefly look at how narratives might act as mediators between models and society (related to calls for ‘extended peer communities’ and the like), and highlight where some of the potential problems for this narrative approach lie.

Now, as I’ve only just [!] finished this very rough initial draft, I’m going to leave the story of this manuscript here. David and George are going to chew over what I’ve written for a while and then it will be back to me to try to draw it all together again. As we progress on this iterative writing process, and the story becomes clearer, I’ll add another chapter here on the blog.

Forest Fire Cellular Automata


One of the examples I used in class this week when talking about ‘Complex Systems’ and associated modelling approaches was the Forest Fire Cellular Automata model. I’ve produced an implementation of the model in NetLogo, complete with plots to illustrate the frequency-area scaling relationship of the resulting wildfire regime. I’ve updated the wildfire behaviour page on my website to include an applet of the NetLogo model (if that page gets changed in the future, you can view and experiment with the model here).
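The NetLogo applet is linked above, but for readers who prefer code on the page, here is a minimal Python sketch of the same family of model (grow trees at random, occasionally drop a spark, burn the connected cluster and record its size); the grid size and spark frequency are arbitrary choices for illustration, not those used in the class example:

```python
import random
from collections import Counter

def burn_cluster(forest, x, y, size):
    """Burn the cluster of trees 4-connected to (x, y) and return the burned area."""
    stack, burned = [(x, y)], 0
    while stack:
        cx, cy = stack.pop()
        if forest[cy][cx]:
            forest[cy][cx] = False
            burned += 1
            for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if 0 <= nx < size and 0 <= ny < size and forest[ny][nx]:
                    stack.append((nx, ny))
    return burned

def run(size=128, steps=200_000, spark_probability=1 / 200, seed=1):
    """Plant one tree per step at a random cell; occasionally drop a spark and record fire size."""
    rng = random.Random(seed)
    forest = [[False] * size for _ in range(size)]
    fire_sizes = []
    for _ in range(steps):
        forest[rng.randrange(size)][rng.randrange(size)] = True   # tree growth
        if rng.random() < spark_probability:                      # lightning strike
            x, y = rng.randrange(size), rng.randrange(size)
            if forest[y][x]:
                fire_sizes.append(burn_cluster(forest, x, y, size))
    return fire_sizes

if __name__ == "__main__":
    sizes = run()
    # Crude frequency-area summary: counts of fires in order-of-magnitude size bins
    bins = Counter(len(str(s)) for s in sizes)
    for digits in sorted(bins):
        print("fires of %d-%d cells: %d" % (10 ** (digits - 1), 10 ** digits - 1, bins[digits]))
```

Tabulating the recorded fire sizes in logarithmic bins gives the heavy-tailed frequency-area distribution we discussed in class.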

Systems Modeling and Simulation

No sooner am I back from a fun weekend in Toronto (photos on the photos page soon) than the fall semester starts at MSU (is summer over already?!).

Today was the first day of the graduate-level class I am teaching, FW852 Systems Modeling and Simulation. During the course we will:

  1. Review systems theory and the systems modeling and simulation process
  2. Introduce modeling and simulation methods and tools, specifically the STELLA and NetLogo modeling environments
  3. Apply modeling theory, methods and tools to natural resource management and other areas of research

Term projects are a critical component of the course and students will have opportunities to develop their own models, usually related to their dissertation and thesis research. Students will peer-review others’ work, and present their results in class. Through regular and guest lectures, discussion, and hands-on experience, the course will provide students with a holistic view and integrative tools for their future research, decision-making, and management activities.

As the course progresses I may post some of the examples and topics we look at, and anything interesting that arises out of our discussions in class.

‘Mind, the Gap’ Manuscript

Earlier this week I submitted a manuscript to Earth Surface Processes and Landforms with one of my former PhD advisors, John Wainwright. Provisionally entitled Mind, the Gap in Landscape-Evolution Modelling (we’ll see what the reviewers think of that one!), the manuscript argues that agent-based models (ABMs) are a useful tool for overcoming the limitations of existing, highly empirical approaches in geomorphology. This matters, we suggest, because despite increasing recognition that human activity is currently the dominant force modifying landscapes geomorphically, and that this activity has been increasing through time, there has been little integrative work evaluating human interactions with geomorphic processes.

In the manuscript we present two case studies of models that consider landscape change with the aid of an ABM – SPASIMv1 (developed during my PhD) and CybErosion (a model John has developed to simulate the dynamic interactions of prehistoric communities with Mediterranean environments). We evaluate the advantages and disadvantages of the ABM approach, and consider some of the major challenges to implementation. These challenges include potential mismatches in process scale, differences in perspective between investigators from different disciplines, and issues regarding model evaluation, analysis and interpretation.

I’ll post more here as the review process progresses. Hopefully progress with ESPL will be a little quicker than it has been for the manuscript I submitted to Environmental Modelling and Software detailing the biophysical component of SPASIMv1 (yet to receive the reviews after 5 months!)…