Karan K. Budhraja and Tim Oates
Abstract. Agent-based modeling is a paradigm for modeling dynamic systems of interacting agents, each governed by specified behavioral rules. From a demonstration perspective, it is easier to train such a model to produce an emergent behavior by specifying the emergent (as opposed to agent-level) behavior. Rather than specifying behavior manually via code, or relying on a defined taxonomy of possible behaviors, the demonstrator specifies the spatial motion of the agents over time and retrieves the agent-level parameters required to execute that motion. A framework for reproducing emergent behavior from such an abstract demonstration has been discussed in prior work, but each query to that framework is independent of previous queries. Our work addresses this information communication deficit by incorporating a feedback mechanism that iteratively improves the quality of the reproduced behavior. We explore this by varying the regression parameters and the data points used. Data point selection is established as a means of iterative optimization that improves demonstration replication, and the use of optimization shows further potential for improving the framework's replication capability.
An Agent-Based Model (ABM) is a computational model that simulates the behavior of interacting agents by specifying agent-level behavioral rules. Through interactions, the behaviors of individual agents produce more complex emergent collective behavior. Examples of ABMs include the motion of humans in a crowd, the spread of diseases, and the motion of groups of animals. For the purposes of this work, the control parameters governing the individual behavior of an agent are called Agent-Level Parameters (ALPs). Different values of ALPs lead to different emergent behaviors in the ABM. Values quantifying such emergent behavior are called Swarm-Level Parameters (SLPs). When demonstrating swarm behavior, it is easier for the demonstrator to specify SLPs (demonstration SLPs) than to specify ALPs; they may simply specify the locations of agents at successive time instances. A model is then required to interpret this information and estimate the ALPs needed to produce that behavior.
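The ALP/SLP distinction can be made concrete with a minimal sketch, not taken from the paper: all names and rules below are illustrative assumptions. Agents drift toward the swarm centroid with strength `cohesion` and random perturbation `noise` (the ALPs), and the mean distance to the centroid serves as a simple SLP quantifying how tightly the swarm clusters.

```python
import math
import random

def simulate(cohesion, noise, steps=50, n_agents=30, seed=0):
    """Toy ABM (hypothetical): each agent moves a fraction `cohesion` of
    the way toward the swarm centroid each step, plus Gaussian `noise`.
    Returns the final mean distance to the centroid, a simple SLP."""
    rng = random.Random(seed)
    pos = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n_agents)]
    for _ in range(steps):
        cx = sum(x for x, _ in pos) / n_agents
        cy = sum(y for _, y in pos) / n_agents
        # Agent-level rule: attraction to centroid (ALP: cohesion)
        # perturbed by random motion (ALP: noise).
        pos = [(x + cohesion * (cx - x) + rng.gauss(0, noise),
                y + cohesion * (cy - y) + rng.gauss(0, noise))
               for x, y in pos]
    cx = sum(x for x, _ in pos) / n_agents
    cy = sum(y for _, y in pos) / n_agents
    return sum(math.hypot(x - cx, y - cy) for x, y in pos) / n_agents

# Different ALP values yield different emergent behavior: stronger
# cohesion produces a tighter swarm, i.e. a smaller SLP value.
tight = simulate(cohesion=0.5, noise=0.01)
loose = simulate(cohesion=0.05, noise=0.01)
```

The inverse problem the paper addresses runs the other way: given observed SLPs (agent locations over time), estimate the ALPs that produce them.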
Prior work (the AMF+ framework) addresses this level of abstraction, as in visual demonstration, with a lower time complexity than stochastic search algorithms and without depending on prior knowledge about probable collective behaviors to follow demonstrations. Subsequent work improves on the AMF+ framework by establishing that dataset selection has an impact on its performance. It does not, however…