Agent-Based Social Simulation

Overview
Social Virtual Agent
Agent Vision
Agent Perception Combination
Evacuation Scenarios
Related Papers
Demos

The MAVSVille project centers on the development of a social simulation system in which virtual agents representing humans evolve in an environment representing a city. The city consists of 814 environment objects, including commercial buildings, residential areas, roads, trees, benches, basketball courts, and parks. Thousands of virtual agents perceive their surroundings through advanced vision, auditory, and olfactory sensors. They execute complex path-finding and collision avoidance algorithms to move within the environment. In addition, they interact with other virtual agents and plan and deliberate to achieve their goals.

A social virtual agent is an instance of the DIVAs agent architecture that is used to model a human. As such, a social virtual agent has the following architecture:

[Figure: DIVAs social virtual agent architecture]

A perception system that models human sensory systems is critical for simulating virtual agents evolving in open environments (i.e., inaccessible, non-deterministic, dynamic, continuous). In order to formulate plans and make decisions, virtual agents need to obtain visual environmental information. Techniques to acquire this information have two main goals: 1) provide accurate results, and 2) execute as quickly as possible. These goals conflict with each other, since increasing the accuracy of vision techniques results in greater processing time.

Our virtual agent vision perception algorithms approximate realistic vision while maintaining low execution time. As with human vision, each agent's perception module processes environment information using the vision sensor to determine exactly what the agent is able to see within a vision cone (the human eye takes in light in the shape of an expanding cone).
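To make the cone test concrete, here is a minimal 2D sketch assuming a planar world: an object is a candidate for being seen if it lies within the agent's viewing distance and within half the field-of-view angle of the gaze direction. The function name and parameters are illustrative assumptions, not the DIVAs implementation.

```python
import math

def in_vision_cone(agent_pos, gaze_dir, fov_deg, view_dist, obj_pos):
    """Return True if obj_pos lies inside the agent's vision cone."""
    dx, dy = obj_pos[0] - agent_pos[0], obj_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return True                     # object at the agent's own position
    if dist > view_dist:
        return False                    # beyond the viewing distance
    gx, gy = gaze_dir
    cos_angle = (dx * gx + dy * gy) / (dist * math.hypot(gx, gy))
    return cos_angle >= math.cos(math.radians(fov_deg / 2.0))

# Example: an agent at the origin looking along +x with a 120-degree cone
print(in_vision_cone((0, 0), (1, 0), 120.0, 50.0, (10, 5)))    # True
print(in_vision_cone((0, 0), (1, 0), 120.0, 50.0, (-10, 0)))   # False (behind the agent)
```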

[Figures: agent vision cone visualizations]

The vision sensor executes two tests: the vision test and the vision obstruction test. These tests make use of the agent vision cones in combination with bounding boxes around environment objects in order to efficiently determine which objects are seen. After the vision tests are complete, the information regarding seen objects is stored in the agent's knowledge module. The accuracy and efficiency of our vision techniques have been verified on multi-agent scenarios involving thousands of agents executing in simulated real-time.
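As a concrete illustration of how an obstruction test over bounding boxes might work, here is a minimal 2D sketch: candidate objects (already selected by the cone test) are kept only if the line of sight from the agent to the object's bounding-box center is not blocked by another box. The Box type, the slab intersection test, and the visible_objects helper are illustrative assumptions, not the DIVAs code.

```python
from dataclasses import dataclass

@dataclass
class Box:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def center(self):
        return ((self.xmin + self.xmax) / 2.0, (self.ymin + self.ymax) / 2.0)

def segment_hits_box(p, q, box):
    """Slab test: does the segment p->q intersect the axis-aligned box?"""
    t0, t1 = 0.0, 1.0
    for axis in range(2):
        d = q[axis] - p[axis]
        lo, hi = (box.xmin, box.xmax) if axis == 0 else (box.ymin, box.ymax)
        if abs(d) < 1e-12:
            if p[axis] < lo or p[axis] > hi:
                return False
        else:
            a, b = (lo - p[axis]) / d, (hi - p[axis]) / d
            t0, t1 = max(t0, min(a, b)), min(t1, max(a, b))
            if t0 > t1:
                return False
    return True

def visible_objects(agent_pos, candidates):
    """Drop candidates whose line of sight from the agent is blocked by another box."""
    seen = []
    for i, box in enumerate(candidates):
        blocked = any(segment_hits_box(agent_pos, box.center(), other)
                      for j, other in enumerate(candidates) if j != i)
        if not blocked:
            seen.append(box)
    return seen
```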

Most Multi-Agent Based Simulation Systems (MABS) have tackled the challenge of perception by providing agents with global or local environmental knowledge. Even though these approaches are straightforward and easy to implement, they are unfit to simulate realistic scenarios. To date, most MABS that implement some form of perception have focused heavily on a single sense, vision. Since the integration of other senses such as smell or hearing is almost non-existent within MABS, the combination of perception data has drawn very limited attention.

We have developed an agent perception module which integrates vision, auditory, and olfactory sensors. The various sensor parameters can be configured individually for each agent and can be modified at execution time. The perception combination algorithm combines the sensory data collected from the various sensors to produce knowledge.
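A minimal sketch of what per-agent sensor configuration and percept combination could look like is given below; the sensor names, the SensorConfig and Percept types, and the merge rule (one knowledge record per perceived entity, merging the attributes contributed by each active sensor) are assumptions for illustration rather than the DIVAs combination algorithm.

```python
from dataclasses import dataclass

@dataclass
class SensorConfig:
    enabled: bool = True
    range: float = 50.0            # tunable per agent, can change at execution time

@dataclass
class Percept:
    entity_id: str                 # what was perceived
    sensor: str                    # which sensor produced it: "vision", "auditory", ...
    data: dict                     # raw sensory attributes

class PerceptionModule:
    def __init__(self):
        self.sensors = {"vision": SensorConfig(),
                        "auditory": SensorConfig(range=80.0),
                        "olfactory": SensorConfig(range=20.0)}

    def combine(self, percepts):
        """Fuse raw percepts into knowledge: one record per perceived entity,
        merging the attributes contributed by each active sensor."""
        knowledge = {}
        for p in percepts:
            cfg = self.sensors.get(p.sensor)
            if cfg is None or not cfg.enabled:
                continue
            knowledge.setdefault(p.entity_id, {}).update(p.data)
        return knowledge

# The same explosion is both seen and heard; the fused record carries both attributes.
pm = PerceptionModule()
pm.sensors["olfactory"].enabled = False     # sensors can be reconfigured at run time
print(pm.combine([Percept("explosion-1", "vision", {"position": (12, 3)}),
                  Percept("explosion-1", "auditory", {"loudness": 0.9})]))
```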

Agent Perception Module

[Figure: architecture of the agent perception module]

The perception module receives as input event data, which includes information about the event as well as its sensor interfaces. For example, an explosion includes audible, visible, olfactory, and tactile interfaces, whereas a siren only includes an audible interface. Once a sensor receives event data, it determines whether the information is perceivable by examining the interfaces. Using the agent's current state and the raw sensory data, the sensor executes a specialized algorithm to determine what is actually perceived by the agent.
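The sketch below illustrates this interface check in a simplified form: an event advertises the sensor interfaces it exposes, and a sensor ignores events that do not expose its interface or that lie out of range. The class and attribute names are hypothetical, not the DIVAs API.

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentEvent:
    name: str
    position: tuple
    interfaces: set = field(default_factory=set)   # e.g. {"visible", "audible"}

class AuditorySensor:
    interface = "audible"

    def __init__(self, hearing_range):
        self.hearing_range = hearing_range

    def perceive(self, agent_pos, event):
        """Return the event only if it exposes an audible interface and is within range."""
        if self.interface not in event.interfaces:
            return None                            # not perceivable by this sensor
        dx = event.position[0] - agent_pos[0]
        dy = event.position[1] - agent_pos[1]
        if (dx * dx + dy * dy) ** 0.5 > self.hearing_range:
            return None
        return event

siren = EnvironmentEvent("siren", (30, 40), {"audible"})
flash = EnvironmentEvent("camera flash", (2, 2), {"visible"})
ear = AuditorySensor(hearing_range=60.0)
print(ear.perceive((0, 0), siren))   # audible and in range -> perceived
print(ear.perceive((0, 0), flash))   # no audible interface -> None
```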