Agent-Based Simulation Concepts

Overview
Our Agent Model
Our Environment Model
Agent-Environment Interactions
Related Publications

Overview

For simulations to be meaningful, it is necessary to define models for both virtual agents and their environment. Thus far, much attention has been given to the definition of agent models (e.g., behavior, decision-making, and interaction models), while little has been done to define virtual environments that mimic the complexity of real-world environments.

Our Agent Model

We define an agent as a software entity that is driven by a set of tendencies in the form of individual objectives and is capable of communicating, collaborating, coordinating, and negotiating with other agents. Each agent possesses its own resources and skills, executes in an environment that it perceives through sensors, and can offer services.

Agent Architecture

Our agent architecture consists of four modules:

The Interaction Module handles the agent's interaction with external entities, separating environment interaction from agent interaction. Its Environment Perception Module contains various perception submodules emulating human-like senses and is responsible for perceiving information about the agent's environment. Its Agent Communication Module provides an interface for agent-to-agent communication.

The Knowledge Module is partitioned into an External Knowledge Module (EKM) and an Internal Knowledge Module (IKM). The EKM is the portion of the agent's memory dedicated to maintaining knowledge about entities external to the agent, such as acquaintances and objects situated in the environment. The IKM is the portion of the agent's memory dedicated to keeping information the agent knows about itself, including its current state, physical constraints, and social limitations.

The Task Module manages the specification of the atomic tasks that the agent can perform in the domain in which it is deployed (e.g., walking, carrying).

The Planning and Control Module serves as the brain of the agent. It uses information provided by the other modules to plan, initiate tasks, make decisions, and achieve goals.
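To make the module boundaries concrete, here is a minimal Python sketch of this four-module architecture. All class, field, and method names are illustrative assumptions for this sketch, not the actual DIVAs API.

```python
from dataclasses import dataclass, field

# Illustrative skeleton of the four-module agent architecture.
# All names are assumptions for this sketch, not the DIVAs API.

class InteractionModule:
    """Separates environment interaction from agent interaction."""

    def perceive(self, environment_state):
        # Environment Perception Module: filter the published state
        # through human-like senses (vision, auditory, olfactory, ...).
        return environment_state

    def send(self, recipient_id, message):
        # Agent Communication Module: agent-to-agent messaging.
        pass

@dataclass
class KnowledgeModule:
    external: dict = field(default_factory=dict)  # EKM: acquaintances, objects
    internal: dict = field(default_factory=dict)  # IKM: own state, constraints

@dataclass
class TaskModule:
    tasks: dict = field(default_factory=dict)  # atomic tasks, e.g. "walk"

class PlanningAndControlModule:
    """The agent's brain: decides using the other modules' information."""

    def decide(self, knowledge, tasks):
        # Placeholder policy: pick any known atomic task.
        return next(iter(tasks.tasks), None)

@dataclass
class Agent:
    interaction: InteractionModule
    knowledge: KnowledgeModule
    tasks: TaskModule
    brain: PlanningAndControlModule
```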

Our Environment Model

We subscribe to the idea that a virtual environment plays a critical role in a simulation and, as such, should be treated as a first-class entity in simulation design. In DIVAs, virtual agents and the environment are fully decoupled.

For large-scale simulations to be meaningful, it is necessary to implement realistic models for virtual environments. This is a non-trivial task, since realistic virtual environments (also called open environments) are:

  • Inaccessible: virtual agents situated in these environments do not have access to global environmental knowledge but perceive their surroundings through sensors (e.g., vision, auditory, olfactory);
  • Non-deterministic: the effect of an action or event on the environment is not known with certainty in advance;
  • Dynamic: the environment constantly undergoes changes as a result of agent actions or external events;
  • Continuous: the environment states are not enumerable.

We achieve the simulation of openness by:

  • Structuring the simulated environment model as a set of cells.
  • Assigning a special-purpose design agent, called a cell controller, to each cell.

[Figure: environment architecture]

A cell controller does not correspond to a real-world concept but is defined for simulation engineering purposes. It is responsible for:

  • Managing environmental information about its cell.
  • Interacting with its local virtual agents to inform them about changes in their surroundings.
  • Communicating with other cell controllers to inform them of the propagation of external events.
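As an illustration of this cell-based structuring, the sketch below partitions a rectangular environment into a uniform grid of cells and assigns a controller to each. The grid shape, class names, and fields are assumptions made for this example; DIVAs itself may decompose the environment differently.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    cell_id: int
    bounds: tuple  # region boundaries: (xmin, ymin, xmax, ymax)

@dataclass
class CellController:
    """Special-purpose design agent managing exactly one cell."""
    cell: Cell
    local_agents: dict = field(default_factory=dict)  # agents in the cell
    neighbors: list = field(default_factory=list)     # linked controllers

def partition(width, height, nx, ny):
    """Split a width x height environment into an nx x ny grid of
    cells, assigning a dedicated controller to each cell."""
    dx, dy = width / nx, height / ny
    return [
        CellController(Cell(cell_id=j * nx + i,
                            bounds=(i * dx, j * dy,
                                    (i + 1) * dx, (j + 1) * dy)))
        for j in range(ny) for i in range(nx)
    ]
```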
Cell Controller Architecture

The cell controller architecture consists of an Interaction Module, a Knowledge Module, a Task Module, and a Planning and Control Module.

[Figure: cell controller architecture]

The Interaction Module perceives external events and handles asynchronous communication among controllers as well as synchronous communication between controllers and agents. Controller-to-agent interaction must be synchronous in order to ensure that agents receive a consistent state of the environment. Since controller-to-controller interaction involves high-level self-adaptation and has no bearing on the consistency of the simulated environment's state, these interactions occur asynchronously.
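A minimal sketch of this split, assuming a thread-safe queue for asynchronous controller-to-controller messages and direct blocking calls for the synchronous controller-to-agent state publication; the agent callback name is hypothetical.

```python
import queue

class ControllerInteractionModule:
    """Sketch of the controller's interaction split (names hypothetical)."""

    def __init__(self):
        # Asynchronous inbox: other controllers enqueue messages and
        # proceed without waiting; these messages concern high-level
        # self-adaptation, not environment-state consistency.
        self.inbox = queue.Queue()

    def post(self, message):
        """Controller-to-controller: asynchronous, fire-and-forget."""
        self.inbox.put(message)

    def pending(self):
        """Drain queued controller messages between ticks."""
        while not self.inbox.empty():
            yield self.inbox.get()

    def publish_state(self, agents, state):
        """Controller-to-agent: synchronous blocking calls, so every
        local agent receives the same consistent snapshot of the
        environment state for this tick."""
        for agent in agents:
            agent.on_environment_state(state)  # hypothetical agent callback
```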

The Knowledge Module contains the data a controller needs to function. It is composed of external and internal knowledge modules.

The Linked Cell Model maintains a list of neighboring cells, that is, those that share a border with the cell. The cell controller uses this information to handle events that occur near boundaries and potentially affect adjacent cells.

The Self Model contains information about the controller (e.g., its id) as well as the essential characteristics of the cell assigned to the controller, such as its identifier and region boundaries.

The Agent Model contains minimal information, such as identification and location, about the agents situated within the cell's region.

The Object Model includes information detailing physical entities that are situated within the cell region but are not agents (e.g., barricades, buildings, road signs).

The Resource Model contains information about the resources available to the controller.

The Constraint Model defines the specific properties and laws of the environment.
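Taken together, these sub-models can be pictured as one record per controller. The sketch below is one possible rendering; the field names and types are assumptions, not the DIVAs schema.

```python
from dataclasses import dataclass, field

@dataclass
class ControllerKnowledge:
    """One controller's knowledge module (illustrative field names)."""
    # Linked Cell Model: ids of the cells sharing a border with this cell.
    linked_cells: list = field(default_factory=list)
    # Self Model: the controller's id plus its cell's id and boundaries.
    controller_id: int = 0
    cell_id: int = 0
    region_bounds: tuple = (0.0, 0.0, 1.0, 1.0)
    # Agent Model: minimal info about local agents (id -> location).
    agents: dict = field(default_factory=dict)
    # Object Model: non-agent physical entities (id -> description).
    objects: dict = field(default_factory=dict)
    # Resource Model: resources available to the controller.
    resources: dict = field(default_factory=dict)
    # Constraint Model: environment properties and laws.
    constraints: dict = field(default_factory=dict)
```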

The Task Module manages the specification of the atomic tasks that the controller can perform.

The Planning and Control Module serves as the brain of the controller. It uses information provided by the other modules to plan, initiate tasks, and make decisions.

Agent-Environment Interactions

Context-aware agents must be constantly informed about changes in their own state and the state of their surroundings. Similarly, the environment must react to influences imposed upon it by virtual agents and by users outside of the simulation.

We have defined the Action-Potential/Result (APR) model for agent-environment interactions. The APR model extends the influence-reaction model to handle open environments, external stimuli, agent perception, and influence combination.

[Figure: the APR interaction cycle]

This model is driven by a continuous sequence of time ticks. At every tick, as agents execute actions, they produce stimuli that are synchronously communicated to their cell controller (i.e., the controller managing the cell in which the agent is situated). Cell controllers interpret and combine these agent stimuli as well as any external stimuli triggered by the user of the simulation. Once the cell controllers determine the total influence of the stimuli, they update the state of their cells. Each cell controller then publishes its new state to the agents located within its cell boundaries. Upon receiving the updated state of the environment, agents perceive the environment, memorize the perceived information, and decide how to act. The cycle repeats at the next tick.
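The following sketch walks through one APR tick for a single cell controller, under the simplifying assumption of in-memory method calls. The agent and controller method names are hypothetical, and simple list concatenation stands in for whatever domain-specific influence-combination rule a simulation would define.

```python
def apr_tick(controller, agents, external_stimuli):
    """One Action-Potential/Result cycle for a single cell (illustrative)."""
    # 1. Agents execute actions, producing stimuli that are communicated
    #    synchronously to the cell controller managing their cell.
    stimuli = [agent.act() for agent in agents]
    # 2. The controller interprets and combines agent stimuli with any
    #    external stimuli triggered by the user of the simulation.
    influences = stimuli + list(external_stimuli)
    # 3. Having determined the total influence, the controller updates
    #    the state of its cell.
    controller.update_state(influences)
    # 4. The controller publishes the new state to agents within its
    #    cell boundaries; each agent perceives, memorizes, and decides.
    state = controller.state
    for agent in agents:
        percept = agent.perceive(state)
        agent.memorize(percept)
        agent.decide()
    # The cycle repeats when the next tick occurs.
```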

Related Publications

  • Rym Zalila-Mili, E. Oladimeji, and Renee Steiner. Architecture of the DIVAs Simulation System. In Proceedings of the Agent-Directed Simulation Symposium (ADS06), Huntsville, Alabama, April 2006. Society for Modeling and Simulation.
  • Rym Zalila-Mili, Renee Steiner, and E. Oladimeji. DIVAs: Illustrating an Abstract Architecture for Agent-Environment Simulation Systems. Multiagent and Grid Systems, special issue on Agent-Oriented Software Development Methodologies, 2(4):505–525, January 2006.
  • Renee Steiner, G. Leask, and Rym Mili. An Architecture for MAS Simulation Environments, volume 3830, pages 50–67. Springer Verlag, 2006.
  • R. Steiner, G. Leask, and R. Mili. An Architecture for MAS Simulation Environments. In Proceedings of the ACM Conference on Autonomous Agents and Multi Agent Systems, pages 50–67, Utrecht, The Netherlands, July 2005.
  • Rym Mili, G. Leask, U. Shakya, and Renee Steiner. Architectural Design of the DIVAs Environment. In Proceedings of Environments for Multi-Agent Systems (E4MAS04), Columbia University, NY, July 2004.

More publications available here