
Multi-agent systems have many uses in the real world. Robotic cars, for instance, can be considered agents in a multi-agent system in which the other agents have varying characteristics.

The development of a true multi-agent theory that considers both friendly and hostile agents will likely have a large impact on areas such as financial markets, robotic sports and cars, and, to some extent, the analysis of human behavior.

Multi-agent systems with delayed communication can, for instance, be used to improve the performance of high-speed trading, where several systems can be made to cooperate and thus exploit the strategies of other high-speed trading systems. Because of the speed at which trading occurs, communication might not be possible in real time, which is why the notion of delayed communication can be used to improve this area. Furthermore, any multi-agent system with insecure communication might have to perform delayed communication: assuming that there is some state with (relatively) secure communication, the strategy to follow until the next state with secure communication should be agreed upon before entering a state with insecure communication. In (American) football, for example, the players of one team agree upon a strategy before each play, but changing strategy mid-play would reveal the strategy to the opponents.
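To make this concrete, the F# sketch below shows one way a team could agree on a fixed plan while communication is secure and then replay it during the insecure phase without exchanging further messages. The types and names (Action, Plan, agreePlan, runPhase) are purely illustrative and are not part of the simulator or the AIS agent.

    // Illustrative sketch of delayed communication; not simulator code.
    type Action = MoveTo of int | Attack of int | Repair of int

    // A plan fixes one action per step for the whole insecure phase.
    type Plan = Action list

    // While communication is still secure, the team settles on a single plan,
    // here simply the proposal that covers the most steps.
    let agreePlan (proposals: Plan list) : Plan =
        proposals |> List.maxBy List.length

    // During the insecure phase each agent replays its local copy of the plan;
    // no messages are exchanged, so the strategy is not revealed mid-play.
    let runPhase (execute: Action -> unit) (plan: Plan) =
        plan |> List.iter execute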


Chapter 9

Conclusion

During this project, a simulator and an artificial intelligence have been created.

The scenario description [2][3] that this project built upon changed during the project period, which has caused the scenario used here to be a mix of the two versions of the scenario description. In detail, mainly the newest version of the scenario was used, with an added requirement from the old version, which increased the complexity of the AIs to be built. This in turn opened up another interesting problem, that of delayed communication in multi-agent systems. This is, however, in slight conflict with an assumption in the simulator, namely that of implied secure communication, whereas delayed communication is mostly relevant for systems with insecure communication.

The simulator is flexible in that it allows the different AIs to display their world model in the GUI. Ensuring that the AIs can display whatever they want has, however, meant that saving a simulation to disk requires saving the graphics instead of simply the simulation state. A saved simulation therefore takes up much more disk space than would be necessary if only the actual simulation state were saved. One might assume that this also influences how much memory a running simulation consumes.
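To illustrate the difference in size, the sketch below shows how little data a plain simulation state would require if it were saved on its own; the record type and field names are assumptions made for the example and do not match the simulator's actual types.

    open System.IO

    // Hypothetical simulation state; the real simulator also has to save graphics.
    type SimState =
        { Step      : int
          Positions : (string * int) list   // agent name and node id
          Scores    : (string * int) list } // team name and score

    // Saving only the state amounts to a few short lines of text per step.
    let saveState (path: string) (s: SimState) =
        let lines =
            [ sprintf "step %d" s.Step ]
            @ [ for (name, node) in s.Positions -> sprintf "agent %s %d" name node ]
            @ [ for (team, score) in s.Scores -> sprintf "team %s %d" team score ]
        File.WriteAllLines(path, List.toArray lines)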

The language F# has been used to create the simulator and the AI. In both cases this has meant a mix of purely functional programming and object-oriented programming. To truly take advantage of the potential of functional programming, a clear distinction between functions with and without side effects has been kept, while trying to keep the number of functions with side effects to a minimum.
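A minimal sketch of this convention is shown below; the types and the decision rule are invented for the example rather than taken from the AIS source, but they show the shape of the split between a pure core and a thin side-effecting edge.

    // Pure core: same input always gives the same output, no I/O, no mutation.
    type Percept = { Energy: int; NodeValue: int option }
    type AgentAction = Recharge | Probe | Skip

    let decide (p: Percept) : AgentAction =
        if p.Energy < 5 then Recharge
        elif p.NodeValue.IsNone then Probe
        else Skip

    // Side-effecting edge: the only place that performs I/O, kept as thin as possible.
    let act (send: AgentAction -> unit) (p: Percept) =
        let action = decide p          // all reasoning happens in the pure function
        printfn "Chosen action: %A" action
        send action                    // side effect: hand the action to the simulator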

The GUI was created using existing object-oriented frameworks, but this proved to be no problem, as F# handles them with grace, possibly even more easily than its object-oriented counterpart, C#. As F# is able to do nearly everything C# can, and more, one might ask: why use C# at all? Multi-paradigm languages give the programmer more flexibility, but at what cost?
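As an illustration of how naturally F# consumes such object-oriented frameworks, the fragment below builds a trivial Gtk# window. It assumes a Gtk#-based GUI (the bibliography references GTK+ and Mono), and the window contents are invented for the example.

    open Gtk

    [<EntryPoint>]
    let main _ =
        Application.Init()
        let window = new Window("Simulator")        // ordinary OO constructor call
        window.SetDefaultSize(400, 300)
        window.DeleteEvent.Add(fun _ -> Application.Quit())
        let button = new Button("Run simulation")
        button.Clicked.Add(fun _ -> printfn "Simulation started")
        window.Add(button)                          // mutating framework calls work as in C#
        window.ShowAll()
        Application.Run()
        0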

The created AI, named Aggressive Information Seeker (AIS), uses a single strategy, which makes it vulnerable to opponents that can predict its behavior, although one is always vulnerable to smarter opponents. The strategy used is more than adequate to beat the built-in dummy agent, which is the only other AI it has been tested against. Against itself it does have a flaw, though: several agents from several teams might lump together at the same node and become deadlocked in perpetual attacks and repairs.

The AIS agents cooperate to some extent, but the amount of cooperation is limited in that they do not communicate about plans at all.

Bibliography

[1] Multi-Agent Contest, 2011 version, http://www.multiagentcontest.org/2011

[2] Tristan Behrens, Michael Köster, Federico Schlesinger, Jürgen Dix, Jomi Hübner, Multi-Agent Programming Contest Scenario Proposal 2011 Edition, November 20th, 2010

[3] Tristan Behrens, Michael Köster, Federico Schlesinger, Jürgen Dix, Jomi Hübner, Multi-Agent Programming Contest Scenario Description 2011 Edition, February 16th, 2011

[4] Mono, http://www.mono-project.com

[5] Mono documentation, http://www.go-mono.com/docs/

[6] GTK+ Reference Manual, Widget Gallery, http://www.gtk.org/api/2.6/gtk/ch02.html (gtk.org)

[7] F# programming language, http://en.wikipedia.org/wiki/F_Sharp_%28programming_la (wikipedia.org)

[8] F# Programming, http://en.wikibooks.org/wiki/F_Sharp_Programming (wikibooks.org)

[9] Adjacency matrix, http://en.wikipedia.org/wiki/Adjacency_matrix (wikipedia.org)

[10] Lambda calculus, http://en.wikipedia.org/wiki/Lambda_calculus (wikipedia.org)

[11] Floyd-Warshall algorithm, http://en.wikipedia.org/wiki/Floyd%E2%80%93Warshall_algo (wikipedia.org)


[12] Delaunay triangulation, http://en.wikipedia.org/wiki/Delaunay_triangulation (wikipedia.org)

[13] S-hull: a fast sweep-hull routine for Delaunay triangulation http://www.s-hull.org/

[14] Managed Extensibility Framework (MEF), Microsoft, http://mef.codeplex.com/

[15] José M. Vidal, Multiagent Systems, 2009, http://multiagent.com/

Appendix A

Tests and results

This appendix contains screenshots from two simulations. Only every 100th step is included here.

The first section displays the settings used. The next two sections display the two simulations, the first of which consists of one team of AIS agents and three teams of dummy agents, and the second of which consists of four teams of AIS agents.


A.1 Settings for simulations 1 and 2

Figure A.1: General settings used in simulations 1 and 2


Figure A.2: Agent settings used in simulations 1 and 2

Figure A.3: Milestone settings used in simulations 1 and 2
