Christopher X J. Jensen
Professor, Pratt Institute

ESA 2012 Workshop #8, Getting off the Ground with Individual-Based Modeling: A Primer for Instructors and Researchers

Posted 05 Aug 2012

I got off to a running start at this year’s Ecological Society of America (ESA) annual meeting with a workshop orchestrated by Steven Railsback and Volker Grimm. Entitled “Getting off the Ground with Individual-Based Modeling: A Primer for Instructors and Researchers”, the workshop was a practical introduction to the material provided in their new textbook Agent-Based and Individual-Based Modeling: A Practical Introduction. Although I have read both this book and their other text on this topic (Individual-based Modeling and Ecology), I learned a lot from this workshop, in particular because I got to see how Grimm and Railsback introduce agent-based modeling through the multi-agent programming language NetLogo.

Individual-based models (IBMs, also known as agent-based models) allow researchers to build virtual worlds where individual organisms interact with their environment and, potentially, each other. Whereas traditional modeling in ecology has been inspired by the differential equations employed by physics, individual-based models are inspired by behavioral simulations that are more commonly employed in the social sciences. The advantage of IBMs is that they allow modelers to explicitly include variation in individual traits and/or environmental conditions.
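To give a flavor of what this looks like in practice, here is a minimal NetLogo sketch of my own devising (not code from the workshop or the book) in which each individual carries its own state, so individual fates diverge even in a uniform environment:

```netlogo
turtles-own [ energy ]   ; each individual carries its own state variable

to setup
  clear-all
  ask patches [ set pcolor green ]          ; every patch starts with food
  create-turtles 50 [
    setxy random-xcor random-ycor
    set energy 1 + random-float 9           ; individual variation in starting energy
  ]
  reset-ticks
end

to go
  ask turtles [
    right (random 91) - (random 91)         ; correlated random walk
    forward 1
    set energy energy - 0.1                 ; movement costs energy
    if pcolor = green [                     ; eat the patch if it still has food
      set pcolor brown
      set energy energy + 1
    ]
    if energy <= 0 [ die ]                  ; survival depends on individual state
  ]
  tick
end
```

A differential equation version of this system would track only the mean population trajectory; here, which individuals survive emerges from where each one happens to walk.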

For all their potential value, IBMs suffer from a number of problems. The first is that the kind of training needed to construct IBMs (some understanding of programming languages and their use in creating simulated worlds) is generally not provided to biology students. Whereas a graduate program in the biological sciences might require that all students have taken calculus (which in theory should prepare one to do traditional ‘implicit’ modeling via differential equations), few if any assume or require any training in computer science. As such, those interested in teaching students to implement IBMs will have to provide this additional dimension of training themselves, a burden that probably keeps a lot of individual-based modeling from being taught at the undergraduate and graduate levels.

A second problem with IBMs exists as the downside of one of this modeling approach’s greatest strengths: IBMs are extremely versatile in their potential to incorporate mechanisms. While such versatility lends itself to realistic and meaningful modeling, it also makes communicating the nature of one’s IBM somewhat difficult. Whereas a differential equation model can be quickly summarized in a standardized manner familiar to all modelers, the IBM exists in computer code and therefore must be translated in a manner that allows others to at least understand — and ideally replicate — the structure of the model.

Grimm and Railsback effectively address these issues in their new textbook, which formed the basis for their rapid introduction to IBMs during this workshop. Their solution to the ‘programming problem’ is to teach students to use NetLogo, an impressively realized, user-friendly platform for constructing IBMs. Their solution to the ‘communication problem’ is the ODD, a protocol developed by a team of individual-based modelers to provide a standardized means of describing IBMs. Both of these valuable solutions were infused throughout the training, which was composed of equal parts lecture and hands-on programming.

After a brief introduction by Railsback, Grimm introduced the topic at hand by describing modeling in general and individual-based modeling in particular. Reacting to the tendency of early simulation-programmers to create purely heuristic model worlds, Grimm pushed for a modeling process compelled to travel along the path of scientific inquiry: models must be built to answer specific research questions, because only through the clear identification of such questions can the appropriate model be constructed. Grimm also suggested that models should be “simplified to the point of pain and beyond”, as in order to truly understand the emergent dynamics of a model, one must first understand its most basic properties.

Grimm also provided a basic overview of what characterizes IBMs and why NetLogo is a good platform for implementing them. After this introduction, we dived into our first NetLogo programming exercise, the “Mushroom Hunt” problem from Chapter 2 of the textbook. Throughout these hands-on exercises the participants programmed their own versions of the sample models under the guidance of either Grimm or Railsback, who both patiently demonstrated a great variety of NetLogo concepts through a stepwise programming approach. A recurring emphasis throughout the workshop was the importance of constantly verifying the functionality and accuracy of programmed code; participants were strongly discouraged from succumbing to the temptation to program the entire model before checking to see how well it works.
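For readers who have not seen the book, the exercise builds up something like the following (a from-memory sketch; the book’s actual code and parameter values differ in detail). Hunters turn sharply to scan the area around a recent find and otherwise walk roughly straight:

```netlogo
turtles-own [ time-since-last-found ]

to setup
  clear-all
  ; scatter a few clusters of "mushrooms" (red patches)
  ask n-of 4 patches [
    ask n-of 20 patches in-radius 5 [ set pcolor red ]
  ]
  create-turtles 2 [
    set size 2
    set time-since-last-found 999   ; start in straight-walk mode
  ]
  reset-ticks
end

to go
  ask turtles [ search ]
  tick
end

to search  ; turtle procedure
  ifelse time-since-last-found <= 20
    [ right (random 181) - 90 ]     ; recent find: turn sharply, search locally
    [ right (random 21) - 10 ]      ; no recent find: walk roughly straight
  forward 1
  if pcolor = red [
    set pcolor yellow               ; mark the mushroom as picked
    set time-since-last-found 0
  ]
  set time-since-last-found time-since-last-found + 1
end
```

The stepwise approach Grimm and Railsback used in the workshop mirrors this structure: get setup working and visually verified before writing go, and get simple movement working before adding the find-dependent turning behavior.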

We also got a basic introduction to the ODD protocol, whose name stems from the structure of representing models with an overview, design concepts, and details. While the ODD was initially developed to provide a standardized means of communicating the purpose and structure of IBMs, Grimm pointed out that once adopted the protocol quickly showed an unforeseen utility: by forcing modelers to systematically lay out their model plan, the ODD also became a potent planning tool. Having created some of my own IBMs in an ad-hoc and somewhat undirected manner, I can appreciate the value of forcing oneself to make an ODD formulation the first step of model construction.
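For reference, the current version of the protocol organizes a model description into seven elements, grouped under the three headings that give the ODD its name:

```
Overview
  1. Purpose
  2. Entities, state variables, and scales
  3. Process overview and scheduling
Design concepts
  4. Design concepts (e.g. emergence, adaptation, sensing,
     interaction, stochasticity, observation)
Details
  5. Initialization
  6. Input data
  7. Submodels
```

Filling in this skeleton before writing any code is exactly the planning discipline described above: it forces you to decide what your agents are, what they know, and in what order things happen.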

Another big push made by Railsback and Grimm is for what they call “pattern-oriented modeling”. While most modeling seeks to identify mechanisms that produce a particular observed pattern, Railsback and Grimm advocate for modeling with the goal of reproducing multiple patterns observed in nature. This approach gets around one of the major criticisms leveled at IBMs, which is that they contain so many moving parts (i.e. potential mechanisms) that they can produce any single pattern, imagined or observed. If a model actually effectively captures multiple patterns, it is far more likely to be an accurate representation of the mechanisms that drive the patterns observed in nature. IBMs still need to be ‘calibrated’ (read: fitted) to some data, but additional patterns provide a challenge for models that should in theory prevent overfitting from creating a false sense of discovery through modeling.

Working with three different IBMs in NetLogo, we got a pretty good introduction to its basic features. Railsback and Grimm modeled best teaching practices by providing a lot of guidance but also plenty of questions to test our understanding and a few opportunities to develop and explore our own code creations. The role of the NetLogo user manual in the programming process was emphasized, as were the many ways in which code can be tested within and outside the NetLogo interface. We even got to do some parameter space exploration using BehaviorSpace, one of the more brilliant extensions built into NetLogo. By the end of the workshop I found myself a bit stunned that I had not known about the power of NetLogo, and even more disappointed that I had wasted so much time working on other platforms.

Although Grimm and Railsback’s ‘live presentation’ is not quite as polished as their textbook, they did an impressive job of bringing a diverse audience up to speed with the basics of individual-based modeling. There was a lot of information presented here, but the hands-on approach of asking us as an audience to program in NetLogo was an excellent door-opener: it would have been easy to skip the activities and run a shorter, more didactic training, but the use of NetLogo is what seems to really pull newcomers into individual-based modeling. Both Grimm and Railsback maintained a humility about their own programming skills that conveyed the idea that you do not have to be a natural programmer to get the most out of NetLogo modeling. Their personal anecdotes relating to how they decided to adopt NetLogo as the programming platform for the book provided additional confidence in their choice: it was clear that despite initial skepticism, both authors were won over by NetLogo.

One funny idea kept coming up throughout the workshop, and that was the idea that NetLogo has a “kindergarten feel”. I never quite grasped what Grimm, who made this reference several times, meant exactly by this critique, but it is a curious one. It is absolutely true that many of the library models provided with NetLogo are canned and not very clearly directed at scientific questions, and the whole interface has a bit of the look of an Atari 2600 game. But I think that neither of these perception-generating realities should be allowed to distort our impression of what NetLogo is capable of: you can discover some pretty complex emergent dynamics using NetLogo, and fancy graphics would do nothing to make the interface more useful.

This is the kind of workshop that really expands the horizons of meeting participants; Railsback and Grimm are to be commended for bringing this invaluable training to this year’s ESA meeting.

A Major Post, Conferences, Ecological Modeling, Ecological Society of America, Individual-based Models, Spatially Explicit Modeling, Talks & Seminars
