Agent Concept

Overview

In DIVAs, an agent is defined as a software entity that is driven by a set of tendencies in the form of individual objectives; can communicate, collaborate, coordinate and negotiate with other agents; possesses resources of its own; executes in an environment that is partially perceived; and possesses skills and can offer services.
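
As a rough illustration, this definition could be captured by an interface along the following lines; every type and method name below is an assumption made for illustration, not the actual DIVAs API.

```java
import java.util.List;

// Hypothetical sketch of the agent definition above; the names are
// illustrative assumptions, not the actual DIVAs API.
interface Objective {}        // an individual objective (tendency) driving the agent
interface Resource {}         // a resource the agent owns
interface Skill {}            // a skill backing a service the agent can offer
interface Message {}          // content exchanged between agents
interface EnvironmentView {}  // the partially perceived environment

interface Agent {
    List<Objective> objectives();                 // tendencies in the form of individual objectives
    List<Resource> resources();                   // resources of its own
    List<Skill> skills();                         // skills and offered services
    void perceive(EnvironmentView partialView);   // executes in a partially perceived environment
    void send(Agent receiver, Message message);   // communicate, collaborate, coordinate, negotiate
}
```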

Agent Architecture

DIVAs’ agent architecture consists of four modules (a structural sketch follows the list):

The inter­ac­tion mod­ule han­dles the agen­t’s inter­ac­tion with exter­nal enti­ties, sep­a­rat­ing envi­ron­ment inter­ac­tion from agent inter­ac­tion. The Envi­ron­ment Per­cep­tion Mod­ule con­tains var­i­ous per­cep­tion mod­ules emu­lat­ing human-like sens­es and is respon­si­ble for per­ceiv­ing infor­ma­tion about an agen­t’s envi­ron­ment. The Agent Com­mu­ni­ca­tion Mod­ule pro­vides an inter­face for agent-to-agent com­mu­ni­ca­tion.
The knowledge module is partitioned into an External Knowledge Module (EKM) and an Internal Knowledge Module (IKM). The EKM is the portion of the agent’s memory dedicated to maintaining knowledge about entities external to the agent, such as acquaintances and objects situated in the environment. The IKM is the portion of the agent’s memory dedicated to keeping information the agent knows about itself, including its current state, physical constraints, and social limitations.
The task module manages the specification of the atomic tasks that the agent can perform in the domain in which it is deployed (e.g., walking, carrying).
The planning and control module serves as the brain of the agent. It uses information provided by the other modules to plan, initiate tasks, make decisions, and achieve goals.
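
A minimal sketch of how these four modules might fit together is shown below; the class and field names are assumptions for illustration and are not taken from the DIVAs code base.

```java
// Hypothetical composition of the four modules; names are illustrative only.
class EnvironmentPerceptionModule {}   // human-like senses
class AgentCommunicationModule {}      // agent-to-agent messaging

class InteractionModule {              // separates environment and agent interaction
    EnvironmentPerceptionModule environmentPerception = new EnvironmentPerceptionModule();
    AgentCommunicationModule agentCommunication = new AgentCommunicationModule();
}

class ExternalKnowledgeModule {}       // acquaintances, objects in the environment
class InternalKnowledgeModule {}       // own state, physical constraints, social limitations

class KnowledgeModule {                // partitioned into EKM and IKM
    ExternalKnowledgeModule ekm = new ExternalKnowledgeModule();
    InternalKnowledgeModule ikm = new InternalKnowledgeModule();
}

class TaskModule {}                    // atomic, domain-specific tasks (walking, carrying, ...)

class PlanningAndControlModule {       // the "brain": plans, decides, pursues goals
    InteractionModule interaction = new InteractionModule();
    KnowledgeModule knowledge = new KnowledgeModule();
    TaskModule tasks = new TaskModule();
}
```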

Agent Perception

The agents’ perception plays an important role in the influence-reaction model since it informs the agents about their environment and provides a basis for planning and decision making. The DIVAs agent architecture uses the environment perception module as the primary interface to environment information. The environment perception module contains a variable number of perception sensors that extract specific perceivable knowledge from the provided information.

The environment perception module has a limited interface and is isolated from other agent functions to achieve lower coupling. It receives states and events from the interaction module and provides facts and beliefs to the knowledge module.
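
The narrow interface described above could look roughly like the sketch below; EnvironmentState, EnvironmentEvent, Fact, Belief, and Percepts are assumed names, not DIVAs types.

```java
import java.util.List;

// Sketch of the module's limited interface; all names are assumptions.
interface EnvironmentState {}
interface EnvironmentEvent {}
interface Fact {}
interface Belief {}

class Percepts {                        // bundle handed to the knowledge module
    List<Fact> facts;
    List<Belief> beliefs;
}

interface EnvironmentPerception {
    // Only the interaction module feeds this method, and only the knowledge
    // module consumes its result, which keeps the perception machinery
    // loosely coupled to the rest of the agent.
    Percepts perceive(List<EnvironmentState> states, List<EnvironmentEvent> events);
}
```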

The environment perception module uses various perception sensors to perceive environment information. Each sensor represents a single sense (vision, hearing, smell, etc.) and uses an algorithm to determine what the agent can perceive about its environment. These percepts are sent to the agent’s brain to be memorized.

The perception module maintains a modifiable list of modular sensors, allowing senses to be easily added, modified, and removed. Since many types of agents use similar senses, perception sensors can be reused by various agents in different applications. Additionally, each sensor may have multiple perception algorithms, since many algorithms can implement the same sense. With a plug-in interface, users can dynamically switch which algorithm is used by any of an agent’s perception sensors.
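
One plausible plug-in arrangement for this sensor list is sketched below; the names are illustrative assumptions, not the DIVAs implementation. Because a sensor only depends on the PerceptionAlgorithm abstraction, the same sensor type can be reused across different agents and applications, as described above.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical plug-in arrangement for modular perception sensors.
interface Percept {}

interface PerceptionAlgorithm {
    List<Percept> perceive(Object environmentData);  // one way of realizing a sense
}

class PerceptionSensor {                              // one sense, e.g. vision or hearing
    private PerceptionAlgorithm algorithm;

    PerceptionSensor(PerceptionAlgorithm initial) { this.algorithm = initial; }

    // Plug-in interface: swap the algorithm at run time without changing the agent.
    void setAlgorithm(PerceptionAlgorithm replacement) { this.algorithm = replacement; }

    List<Percept> sense(Object environmentData) { return algorithm.perceive(environmentData); }
}

class PerceptionModule {
    // Modifiable list of sensors: senses can be added, modified, or removed.
    private final List<PerceptionSensor> sensors = new ArrayList<>();

    void addSensor(PerceptionSensor sensor) { sensors.add(sensor); }
    void removeSensor(PerceptionSensor sensor) { sensors.remove(sensor); }

    List<Percept> perceiveAll(Object environmentData) {
        List<Percept> percepts = new ArrayList<>();
        for (PerceptionSensor sensor : sensors) {
            percepts.addAll(sensor.sense(environmentData));   // gathered for the agent's brain
        }
        return percepts;
    }
}
```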

In addition to switching algorithms, sensors are parameterized to allow modifications to a particular sense. For example, an agent with impaired hearing is made less sensitive to audible information by raising its minimum audible intensity, which reduces the amount of information extracted by the sensor and thus realizes the impairment. Similarly, an agent’s vision can be adjusted to account for near- or far-sightedness by limiting the visible distance or field of view. These parameters are initially assigned during specification but can be modified dynamically to accommodate situations involving agents whose sensitivity changes while interacting with their environment. For example, an agent near an explosion may lose hearing during a simulation. By modifying auditory receptivity at runtime, DIVAs agents can model such situations.
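
A parameterized hearing sensor along these lines might look like the following sketch; minimumAudibleIntensity and canHear are assumed names chosen for illustration.

```java
// Sketch of a parameterized hearing sensor; names are illustrative only.
class HearingSensor {
    private double minimumAudibleIntensity;   // sounds quieter than this are not perceived

    HearingSensor(double minimumAudibleIntensity) {
        this.minimumAudibleIntensity = minimumAudibleIntensity;
    }

    boolean canHear(double soundIntensity) {
        return soundIntensity >= minimumAudibleIntensity;
    }

    // Raising the threshold at run time models reduced sensitivity,
    // e.g. hearing loss after a nearby explosion.
    void setMinimumAudibleIntensity(double newThreshold) {
        this.minimumAudibleIntensity = newThreshold;
    }
}
```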

This video demonstrates various agent perception sensors. An explosion is triggered by the user. When agents hear or see the explosion, they try to run away from the threat. As the smoke lingers, agents who can see or smell the smoke avoid it. The user selects an agent and disables its ability to see and smell. When this agent encounters the smoke, it does not avoid it since it cannot perceive it. When the smoke subsides, the selected agent is unable to move around the fountain since it cannot see the fountain. When the user re-enables the agent’s vision, the agent sees the fountain and begins to run around it.

Demos