Agent-Environment Interactions
Context-aware agents must be continuously informed about changes in their own state and in the state of their surroundings. In turn, the environment must react to influences imposed upon it by virtual agents within the simulation and by users outside of it.
In DIVAs, agent-environment interactions follow the action-potential/result (APR) model. The APR model extends the influence-reaction model to handle open environments, external stimuli, agent perception, and influence combination.
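To make the moving parts of the APR model concrete, the following is a minimal Java sketch of its core abstractions: stimuli (action potentials), their combination into a single influence, and the cell state that the influence acts on. All type and method names here are hypothetical illustrations, not DIVAs' actual API.

```java
import java.util.List;

// A stimulus is an action potential: a declared intent to change the environment,
// produced either by an agent's action or by an external user of the simulation.
interface Stimulus {
    String targetProperty();   // which aspect of the cell state it affects
    double magnitude();        // strength of the potential change
}

// A cell controller combines concurrent stimuli into one influence before
// applying it, so conflicting changes are resolved in a single place.
interface InfluenceCombiner {
    Influence combine(List<Stimulus> stimuli);
}

// The combined influence is what actually gets applied to the cell state.
interface Influence {
    void applyTo(CellState state);
}

// Placeholder for the mutable state of one environment cell.
class CellState { /* positions, objects, environmental properties, etc. */ }
```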
The model is driven by periodic time ticks. At every tick, agents execute actions that produce stimuli; these stimuli are communicated synchronously to each agent's cell controller (i.e., the cell controller managing the cell in which the agent is situated). Cell controllers interpret and combine these agent stimuli along with any external stimuli triggered by users of the simulation. Once a cell controller has determined the total influence of the stimuli, it updates the state of its cell and publishes the new state to the agents located within the cell's boundaries. Upon receiving the updated state of the environment, agents perceive it, memorize the perceived information, and decide how to act. The cycle repeats at the next tick.
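Building on the types sketched above, the following hypothetical Java sketch outlines one APR tick for a single cell controller and the agents in its cell. It is an illustration of the cycle just described, under assumed names, not DIVAs' implementation.

```java
import java.util.ArrayList;
import java.util.List;

// An agent acts, then perceives, memorizes, and deliberates each tick.
interface Agent {
    List<Stimulus> act();            // produce stimuli from the last decision
    void perceive(CellState state);  // sense the published cell state
    void memorize();                 // store the perceived information
    void deliberate();               // decide how to act on the next tick
}

// A cell controller owns one cell: it combines stimuli (e.g., via an
// InfluenceCombiner), applies the resulting influence, and returns the new state.
interface CellController {
    CellState applyInfluence(List<Stimulus> stimuli);
}

class AprTickCycle {
    // One APR tick for a single cell controller and the agents in its cell.
    static void tick(CellController controller,
                     List<Agent> agentsInCell,
                     List<Stimulus> externalStimuli) {
        // 1. Agents execute actions and emit stimuli; external (user-triggered)
        //    stimuli are collected alongside them.
        List<Stimulus> stimuli = new ArrayList<>(externalStimuli);
        for (Agent agent : agentsInCell) {
            stimuli.addAll(agent.act());
        }

        // 2. The controller combines all stimuli into a total influence and
        //    updates the state of its cell.
        CellState newState = controller.applyInfluence(stimuli);

        // 3. The controller publishes the new state; agents perceive it,
        //    memorize it, and decide how to act when the next tick occurs.
        for (Agent agent : agentsInCell) {
            agent.perceive(newState);
            agent.memorize();
            agent.deliberate();
        }
    }
}
```

Keeping influence combination inside the cell controller lets concurrent agent stimuli and external stimuli be reconciled in one place before the cell state changes, which matches the synchronous, tick-driven cycle described above.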