In DIVAs, an agent is defined as a software entity that is driven by a set of tendencies in the form of individual objectives; can communicate, collaborate, coordinate, and negotiate with other agents; possesses resources of its own; executes in an environment that it only partially perceives; and possesses skills and can offer services.
DIVAs’ agent architecture consists of four modules:
The interaction module handles the agent’s interaction with external entities, separating environment interaction from agent interaction. Its Environment Perception Module contains various perception sensors emulating human-like senses and is responsible for perceiving information about the agent’s environment, while its Agent Communication Module provides an interface for agent-to-agent communication.

The knowledge module is partitioned into an External Knowledge Module (EKM) and an Internal Knowledge Module (IKM). The EKM is the portion of the agent’s memory dedicated to maintaining knowledge about entities external to the agent, such as acquaintances and objects situated in the environment. The IKM is the portion of the agent’s memory dedicated to keeping information that the agent knows about itself, including its current state, physical constraints, and social limitations.

The task module manages the specification of the atomic tasks that the agent can perform in the domain in which it is deployed (e.g., walking, carrying).

The planning and control module serves as the brain of the agent. It uses information provided by the other modules to plan, initiate tasks, make decisions, and achieve goals.
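The four-module decomposition can be sketched in code. The following is a minimal illustration of the described structure, not the framework’s actual API; all class and method names are assumptions made for this sketch.

```python
# Minimal sketch of the four-module DIVAs agent architecture.
# All class, attribute, and method names are illustrative assumptions.

class InteractionModule:
    """Handles interaction with external entities."""
    def __init__(self):
        self.perceived = []           # stands in for the Environment Perception Module
        self.messages = []            # stands in for the Agent Communication Module

class KnowledgeModule:
    """Partitioned into external (EKM) and internal (IKM) knowledge."""
    def __init__(self):
        self.ekm = {}                 # knowledge about external entities
        self.ikm = {"state": "idle"}  # knowledge the agent has about itself

class TaskModule:
    """Specifies the atomic tasks the agent can perform in its domain."""
    def __init__(self, tasks):
        self.tasks = set(tasks)       # e.g. {"walk", "carry"}

class PlanningControlModule:
    """The agent's 'brain': uses the other modules to decide what to do."""
    def decide(self, knowledge, task_module):
        # Trivial stand-in policy: walk if able, otherwise stay idle.
        return "walk" if "walk" in task_module.tasks else "idle"

class Agent:
    def __init__(self):
        self.interaction = InteractionModule()
        self.knowledge = KnowledgeModule()
        self.tasks = TaskModule({"walk", "carry"})
        self.brain = PlanningControlModule()

    def step(self):
        action = self.brain.decide(self.knowledge, self.tasks)
        self.knowledge.ikm["state"] = action  # the IKM tracks the agent's own state
        return action

agent = Agent()
print(agent.step())  # -> walk
```

The point of the decomposition is that the brain never touches the environment directly; it only consumes what the other modules provide.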
The agents’ perception plays an important role in the influence-reaction model since it informs the agents about their environment and provides a basis for planning and decision making. The DIVAs agent architecture uses the environment perception module as the primary interface to environment information. The environment perception module contains a variable number of perception sensors which extract specific perceivable knowledge from the provided information.
The environment perception module exposes a limited interface and is isolated from other agent functions to reduce coupling. It receives states and events from the interaction module and provides facts and beliefs to the knowledge module.
The environment perception module uses various perception sensors to perceive environment information. Each sensor represents a single sense (vision, hearing, smell, etc.) and uses an algorithm to determine what the agent can perceive about its environment. These percepts are sent to the agent’s brain to be memorized.
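The flow described above can be illustrated as follows. This is a hedged sketch under assumed names (`Sensor`, `SmellSensor`, `Brain` are hypothetical, not DIVAs classes): a sensor filters the provided environment state into percepts, which are then handed to the brain to be memorized.

```python
# Sketch of a perception sensor: each sensor implements one human-like
# sense and produces percepts that are passed to the agent's brain.
# All names here are illustrative assumptions.

class Sensor:
    """Base class: one sense with a perception algorithm."""
    def perceive(self, environment_state):
        raise NotImplementedError

class SmellSensor(Sensor):
    def perceive(self, environment_state):
        # Percept = any odor source present in the provided state.
        return [o for o in environment_state.get("odors", [])
                if o["strength"] > 0]

class Brain:
    """Stands in for the planning and control module; memorizes percepts."""
    def __init__(self):
        self.memory = []

    def memorize(self, percepts):
        self.memory.extend(percepts)

brain = Brain()
state = {"odors": [{"source": "smoke", "strength": 0.7}]}
brain.memorize(SmellSensor().perceive(state))
print(brain.memory[0]["source"])  # -> smoke
```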
The perception module maintains a modifiable list of modular sensors, allowing senses to easily be added, modified, and removed. Since many types of agents utilize similar senses, perception sensors can be reused by various agents in different applications. Additionally, each sensor may have multiple perception algorithms since there may be many algorithms which implement the same sense. With a plug-in interface, users are able to dynamically switch which algorithm is used by any of an agent’s perception sensors.
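The plug-in interface amounts to a strategy pattern: a sensor holds a swappable perception algorithm that can be exchanged at runtime. The sketch below assumes hypothetical names (`VisionSensor`, `line_of_sight`, `occlusion_aware`); it illustrates the idea rather than the DIVAs implementation.

```python
# Sketch of the plug-in interface: a sensor delegates to a swappable
# perception algorithm. Names and algorithms are illustrative.

def line_of_sight(state, max_dist):
    """Naive vision algorithm: everything within range is visible."""
    return [o for o in state["objects"] if o["dist"] <= max_dist]

def occlusion_aware(state, max_dist):
    """Alternative algorithm: range check plus an occlusion flag."""
    return [o for o in state["objects"]
            if o["dist"] <= max_dist and not o.get("occluded", False)]

class VisionSensor:
    def __init__(self, algorithm, max_dist=50.0):
        self.algorithm = algorithm   # plug-in point
        self.max_dist = max_dist

    def perceive(self, state):
        return self.algorithm(state, self.max_dist)

state = {"objects": [{"name": "fountain", "dist": 10.0, "occluded": True},
                     {"name": "tree", "dist": 20.0}]}

sensor = VisionSensor(line_of_sight)
print([o["name"] for o in sensor.perceive(state)])  # -> ['fountain', 'tree']

sensor.algorithm = occlusion_aware                  # switch algorithm at runtime
print([o["name"] for o in sensor.perceive(state)])  # -> ['tree']
```

Because the sensor only depends on the algorithm’s call signature, users can swap implementations of the same sense without touching the agent itself.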
In addition to switching algorithms, sensors are parameterized to allow modifications to a particular sense. For example, an agent with impaired hearing is made less sensitive to audible information by raising its minimum audible intensity. This reduces the amount of information extracted by the sensor, thus realizing the impairment. Similarly, an agent’s vision can be adjusted to account for near- or far-sightedness by limiting the visible distance or field of view. These parameters are initially assigned during specification but can be modified dynamically to accommodate situations involving agents whose sensitivity changes while interacting with their environment. For example, an agent that is near an explosion may lose hearing during a simulation. By modifying the auditory receptivity at runtime, DIVAs agents are capable of modeling such situations.
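The hearing-impairment example can be sketched concretely. The names and threshold values below are assumptions for illustration: the sensor keeps only sounds at or above its minimum audible intensity, and raising that threshold at runtime models hearing loss after an explosion.

```python
# Sketch of runtime sensor re-parameterization: an explosion impairs
# hearing by raising the minimum audible intensity. Names and values
# are illustrative assumptions, not DIVAs parameters.

class HearingSensor:
    def __init__(self, min_intensity=0.1):
        self.min_intensity = min_intensity  # assigned at specification time

    def perceive(self, sounds):
        # Keep only sounds loud enough for this agent to hear.
        return [s for s in sounds if s["intensity"] >= self.min_intensity]

sensor = HearingSensor(min_intensity=0.1)
sounds = [{"source": "footsteps", "intensity": 0.2},
          {"source": "siren", "intensity": 0.8}]
print(len(sensor.perceive(sounds)))  # -> 2: both sounds are heard

sensor.min_intensity = 0.5           # agent caught near an explosion
print(len(sensor.perceive(sounds)))  # -> 1: only the siren is heard
```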
This video demonstrates various agent perception sensors. An explosion is triggered by the user. When agents hear or see the explosion they try to run away from the threat. As the smoke lingers, agents who can see or smell the smoke avoid it. The user selects an agent and disables its ability to see and smell. When this agent encounters the smoke, it does not avoid it since it cannot perceive it. When the smoke subsides, the selected agent is unable to move around the fountain since it cannot see the fountain. When the user reenables the agent’s vision, the agent sees the fountain and begins to run around it.