
To show how the AIS agents perform, this section describes two simulations in which the AIS agents have to fight against either three teams of dummy agents or three other teams of AIS agents. The analysis focuses primarily on their ability to gather information and to sabotage enemy agents.

7.3.1 Simulation 1

In this simulation, one team of AIS agents is up against three teams of dummy agents. Each team has 10 agents, 2 of each of the predefined roles. There are 50 nodes in a 12x6 grid. The generated graph has 125 edges, and the length of the simulation is 1000 steps. Every 100th step, starting from step 0, can be seen in appendix A.2 on page 70. The milestones used can be seen in appendix A.1 on page 68, as well as in table 7.4.

                               Simulation 1       Simulation 2
All nodes probed:              151/-/-/- steps    312/252/168/285 steps
All edges surveyed:            44/-/-/- steps     79/71/76/88 steps
All enemy agents inspected:    83/-/-/- steps     69/63/68/75 steps

Table 7.1: Gathering information in the two simulations

7.3.2 Simulation 2

This simulation has the same basic setup as simulation 1, except that there are 4 teams with AIS agents and no teams with dummy agents. The generated graph has 125 edges.

Every 100th step, starting from step 0, can be seen in appendix A.3 on page 82.

Note: A lot of agents from the red, green and yellow teams gather at a single node rather fast (before the 100th step), and for the remainder of the simulation, this node is very populated.

7.3.3 Gathering information

One of the main goals for the AIS agent is that it should prioritize gathering information over all other actions.

The most important piece of information is the probing of nodes, as this will increase the potential score for the team. The figures in table 7.1 give an idea of how fast this information can be gathered, against both passive (simulation 1) and aggressive (simulation 2) enemies. The best case is from simulation 1, in which it took 151 steps to probe 50 nodes, with 2 agents able to probe.

This gives approximately one probe per 6 steps per agent, which means an average travel/recharge time of 5 steps between each probe for each agent. The worst case is in simulation 2, in which it took the red team 312 steps to probe all nodes. This gives approximately one probe per 12 steps per agent, which means an average travel/recharge time of 11 steps between each probe for each agent. This is, however, against aggressive opponents, which means that the two red agents that are able to probe might have been disabled some of the time. In both simulations, however, the probing agents may also have spent time surveying edges, as they are able to perform that action as well, and it is prioritized higher than movement.


Surveying of edges shows the same tendencies as probing nodes: aggressive opponents increase the time taken by at most a factor of 2. In the best case (simulation 1) it takes 44 steps to survey all 125 edges, with 10 agents able to survey. This means that there are approximately 3.5 steps between each survey for each agent in the best case, and 7 steps in the worst case. This is faster than probing because the agents can survey multiple edges at a time, but only probe one node at a time.

Inspecting enemy agents takes approximately the same amount of time with or without aggressive opponents. With two agents able to inspect, and 30 enemies to inspect, there are approximately 4.2 steps between each inspection in the best case, and 5.5 in the worst case. As with surveying, this is faster than probing because multiple opponents can be inspected at the same time.
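The per-agent intervals quoted above follow from a simple calculation: the number of steps to completion, multiplied by the number of agents able to perform the action, divided by the number of targets. A minimal sketch reproducing the figures from table 7.1 (Python and the function name are purely illustrative, not part of the implementation):

```python
def avg_interval(steps_to_complete, capable_agents, targets):
    """Average number of steps between actions per capable agent."""
    return steps_to_complete * capable_agents / targets

# Probing (simulation 1): 50 nodes, 2 agents able to probe, done in 151 steps.
print(avg_interval(151, 2, 50))   # ~6.0 steps per probe per agent
# Probing (simulation 2, red team): all 50 nodes probed in 312 steps.
print(avg_interval(312, 2, 50))   # ~12.5 steps per probe per agent
# Surveying (simulation 1): 125 edges, 10 agents able to survey, 44 steps.
print(avg_interval(44, 10, 125))  # ~3.5 steps per survey per agent
# Inspecting (simulation 2, fastest team): 30 enemies, 2 inspectors, 63 steps.
print(avg_interval(63, 2, 30))    # ~4.2 steps per inspection per agent
```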

Information about the layout of the graph is needed as well; however, this information is automatically gathered when the agents attempt to probe and survey everything. Once all nodes have been probed, or all edges surveyed, all nodes have with 100% certainty been visited, and all information about the graph has been available at one time or another to at least one agent on the team. Assuming perfect sharing of information, all agents on the team are expected to know the layout of the entire graph once all nodes have been probed or all edges surveyed.

7.3.4 Attacking/repairing

The AIS agents are supposed to be aggressive and thus attack enemies very often.

In simulation 1, all enemies of the single AIS team are dummy agents, and thus not able to attack or parry. The result of this, combined with the aggression of the AIS agents, is that all but three enemy agents are disabled at the end of the simulation. As can be seen in table 7.2, 37 successful attacks have been performed. As two agents were able to attack, this gives one attack per 54 simulation steps per agent. This figure suggests that the agents aren't as aggressive as they could be.
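The same style of estimate applies to attacks; a small illustrative check (not part of the implementation):

```python
steps, attackers, successful_attacks = 1000, 2, 37
print(steps * attackers / successful_attacks)  # ~54 steps per attack per agent
```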

                            Simulation 1    Simulation 2
Total number of attacks:    37/0/0/0        357/394/80/328
Total number of parries:    0/0/0/0         11/126/1/234

Table 7.2: Total number of attacks and parries for the various teams in the two simulations

                      Simulation 1                  Simulation 2
Total score           140.677/11.263/5.337/2.931    61.678/58.289/111.381/48.474
Money in last step    24/4/0/0                      22/28/24/26
AvgMin                11.67/0.73/0.53/0.29          3.97/3.03/8.74/2.25

Table 7.3: Scores over the course of an entire simulation, i.e. 1000 steps

In simulation 2, a lot of agents from teams 1, 2 and 4 are gathered at a single node through most of the simulation. The cause of this is the priority of actions: attacking and repairing are prioritized above moving away from the cluster of agents. This means that the agents that can repair will be caught on the node as long as there are agents to repair, and the agents that can attack will likewise be caught as long as there are enemies to attack. Given the right circumstances, this yields an infinite loop. The effect of this can be seen in the number of successful attacks and parries for those teams, which are rather large compared to team 3, with the exception of the number of parries for team 1. In this case, the aggression might be too high, due to the simple priority of actions.
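The deadlock can be illustrated with a simplified, hypothetical version of such a fixed-priority action selection (the names and structure below are a sketch, not the actual agent code): as long as repair targets or enemies remain on the node, the move option is never reached.

```python
from dataclasses import dataclass

@dataclass
class Node:
    disabled_friends: int = 0  # friendly agents on this node that need repair
    enemies: int = 0           # enemy agents standing on this node

def choose_action(can_repair: bool, can_attack: bool, node: Node) -> str:
    """Simplified fixed-priority selection: repair > attack > move away."""
    if can_repair and node.disabled_friends > 0:
        return "repair"
    if can_attack and node.enemies > 0:
        return "attack"
    return "move away"

# A crowded node with both repair targets and enemies present:
crowded = Node(disabled_friends=3, enemies=4)
print(choose_action(can_repair=True, can_attack=False, node=crowded))   # repair
print(choose_action(can_repair=False, can_attack=True, node=crowded))   # attack
# Neither agent ever chooses "move away" while the node stays populated,
# which matches the behaviour of teams 1, 2 and 4 in simulation 2.
```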

7.3.5 Forming groups/zones

When moving around, the agents should attempt to form groups, and thus increase the team score, while trying not to stand at the same node as other active agents.

Table 7.3 shows the ending score and amount of money for all teams in both simulations. The last row of the table displays the minimum possible average zone-score per agent per step, which is calculated using the following formula:

\[
\mathrm{Avg}_{\mathrm{Min},i} = \frac{\mathrm{Score}_{i,\mathrm{end}} - \mathrm{Money}_{i,\mathrm{end}} \cdot \mathrm{Steps}}{\mathrm{Steps} \cdot \mathrm{Agents}_i} \tag{7.2}
\]
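As a sanity check of equation (7.2), the following sketch (assuming the scores in table 7.3 use '.' as a thousands separator, 1000 steps and 10 agents per team) reproduces two of the AvgMin values:

```python
def avg_min(score_end, money_end, steps=1000, agents=10):
    """Minimum possible average zone-score per agent per step, eq. (7.2)."""
    return (score_end - money_end * steps) / (steps * agents)

# Simulation 1, AIS team: total score 140,677 and 24 money in the last step.
print(round(avg_min(140_677, 24), 2))  # 11.67
# Simulation 2, team 3: total score 111,381 and 24 money in the last step.
print(round(avg_min(111_381, 24), 2))  # 8.74
```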

In simulation 1, the AIS agents each scored at least 11.67 points in zone-score per step, which is higher than the uncooperative maximum of 9 (the highest possible node-weight). This number can't be compared directly to the other teams in simulation 1, though, as the number of active agents differs throughout the simulation, but it does suggest some cooperation.

                            Simulation 1        Simulation 2
Zone score: 50              40/-/-/- steps      112/-/58/514 steps
Probed vertices: 25         59/-/-/- steps      102/87/88/105 steps
Surveyed edges: 100         12/215/-/- steps    18/14/27/26 steps
Inspected vehicles: 20      44/-/-/- steps      21/17/16/37 steps
Successful attacks: 40      -/-/-/- steps       136/182/194/151 steps
Successful parries: 30      -/-/-/- steps       -/216/-/125 steps

Table 7.4: Time taken for completion of milestones

In simulation 2, the three teams that went into partial deadlock scored well below the team that didn't. The agents that weren't in deadlock did, however, cooperate to some extent, which is evident both from the images from simulation 2 and from the data in table 7.3.

7.3.6 Achieving milestones

An unintended but positive side-effect of gathering information and attacking enemy agents is that the milestones (if any are defined) can be achieved. This will result in money, which in turn results in a higher score for the team. Table 7.4 shows the time taken for the various teams to achieve the various milestones.

In simulation 1, the AIS team was very fast to achieve the first four milestones, while only a single dummy team achieved a single milestone.

In simulation 2, all teams took almost the same time to achieve the 2nd, 3rd, 4th and 5th milestones, while the other two (zone score and parries) were up to chance. This suggests that information-seeking and aggression will yield two thirds of the milestones consistently.

Chapter 8

Discussion

The previous chapters have taken a rather objective view of the development process and the product. In this chapter, a more subjective view of the process and product is given, along with a few comments on the future potential of multi-agent systems.

8.1 The competition

At the beginning of this project, there was no official simulator available. According to the time schedule for the competition, the simulator should have been released prior to the start of this project, and as such it wasn't possible to tell when, and in what state, an official simulator would be released. This forced the creation of a simulator in this project. Looking back, building the simulator took far too much time and took time away from the development of an artificial intelligence.

Along with the release of the official simulator, a changed scenario description was given. The new scenario was changed on some key points to simplify the development of AIs, but as the development of the simulator in this report had already begun, some of the old requirements were kept.


One of the changes in the new scenario was the removal of enforced communication through the simulation server, and the removal of mixed teams. This would enable agents to communicate with friendly agents without a 1-tick latency. In this project, the agents aren't strictly required to communicate through the simulation server, but it is possible and suggested, as the teams can consist of several different types of AIs. This reduces flexibility in the development of the AIs, but highlights another interesting problem: multi-agent systems with delayed communication, which will be discussed further in section 8.4.

Time in the simulation is discrete, which reduces both the complexity of solutions and the realism of the simulation. However, if the simulation ran in real time, it would increase the complexity not only of the AI development, but also of the simulation process, as the agents might be located on remote machines.