
How have others tested?

In document Detecting network intrusions (Pages 91-96)

In this section we will cover relevant articles that focus on testing IDSs. We will not list the articles' results and conclusions, because the main goal is to explain the techniques and tools that the authors have used for testing the relevant IDSs.

4.2.1 Performance evaluation of Snort and Suricata

Alhomoud et al. [35] have tested and analysed the performance of Snort and Suricata. Both programs were implemented on three different platforms (an ESXi virtual server, Linux 2.6 and FreeBSD) to simulate a real environment.

7. http://www.iscx.ca/datasets and http://ali.shiravi.com/84

Figure 4.1: Network design setup

Figure 4.2: Network component specifications

Test scenarios were designed to test the performance of Suricata and Snort on different operating systems. Both IDSs were subjected to the same tests under exactly the same conditions. In order to get more accurate results, all scenarios were tested with packet sizes of 1470, 1024 and 512 bytes, for both TCP and UDP. The tests were performed at speeds ranging from 250 Mbps to 2.0 Gbps. In all scenarios, Suricata and Snort were configured to load and run a similar number of rules.
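A sweep like this is typically scripted. The sketch below enumerates the full scenario matrix of protocols, packet sizes and target rates and builds a traffic-generator command line for each; iperf is assumed here purely for illustration, as the authors do not name their generator:

```python
import itertools

packet_sizes = [1470, 1024, 512]          # bytes, as in the paper
protocols = ["tcp", "udp"]
rates_mbps = list(range(250, 2001, 250))  # 250 Mbps up to 2.0 Gbps

def iperf_cmd(server, proto, size, rate):
    """Build an iperf client command for one scenario (iperf flags assumed)."""
    udp_flag = "-u" if proto == "udp" else ""
    return f"iperf -c {server} {udp_flag} -l {size} -b {rate}M -t 60".split()

# 2 protocols x 3 packet sizes x 8 rates = 48 scenarios in total
scenarios = list(itertools.product(protocols, packet_sizes, rates_mbps))
for proto, size, rate in scenarios:
    cmd = iperf_cmd("10.0.0.2", proto, size, rate)  # hypothetical IDS-side host
```

Running each cell of this matrix against both IDSs under identical rule-sets is what makes the resulting throughput and drop-rate figures comparable.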

4.2.2 A performance analysis of Snort and Suricata

Recently, there has been a shift to multi-core processors and, consequently, to multi-threaded application design. Suricata is a multi-threaded open source NIDPS developed by the Open Information Security Foundation (OISF). Day et al.

[36] describe an experiment comprising a series of innovative tests to establish whether Suricata shows an increase in accuracy and system performance over the de facto standard, the single-threaded NIDPS Snort.

Figure 4.3 illustrates some of the metrics that constitute capacity.8

Figure 4.3: Metrics of Capacity

The test-bed was set up in a virtual environment, facilitating experiment portability and security. It also allowed for faster experiment initialisation, which was necessary for frequent repetition and re-configuration of the experiment tests.

VMware Workstation 6.5 was used as the virtualisation platform, largely due to its superior I/O and disk performance over the competitors VirtualBox and Virtual PC.

Snort and Suricata were configured to run using identical rule-sets.

It was decided to capture background traffic from a busy university's web and application server. This was then merged with exploit traffic created using the Metasploit Framework. The Metasploit Framework contains a total of 587 exploit modules, allowing attack data to be easily generated in quantity.
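The merging step is, at its core, a timestamp-ordered interleaving of two captures. In practice a tool such as Wireshark's mergecap or tcpreplay would operate on real pcap files; the sketch below shows the idea with toy `(timestamp, bytes)` tuples standing in for parsed pcap records:

```python
import heapq

# Toy stand-ins for parsed pcap records: (capture timestamp, raw packet bytes).
background = [(0.0, b"bg1"), (0.4, b"bg2"), (1.1, b"bg3")]
exploits   = [(0.2, b"ex1"), (0.9, b"ex2")]

# heapq.merge interleaves two already-sorted streams by timestamp, which is
# exactly what merging background and attack captures amounts to.
merged = list(heapq.merge(background, exploits, key=lambda rec: rec[0]))
```

Replaying the merged capture gives the IDS a realistic mixture in which attack packets are embedded in benign traffic at their original relative timings.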

The capacity of a NIDPS is closely connected to the CPU capacity of the system. Thus, Snort and Suricata should be subjected to CPU impairment to evaluate their efficiency under stressful conditions. VMware was used to allow the number of logical and physical cores to be reduced. The cores themselves were stressed by generating threads, causing an adjustable and measurable

8. Informed by Hall and Wiley, "Capacity Verication for High Speed Network Intrusion Detection Systems".

workload. This was performed using the application cpulimit, which generates configurable workloads across the processor, allowing the total amount of stress applied by each thread to be limited to a percentage of the CPU capacity. The following resources were monitored: CPU utilisation, memory utilisation, persistent storage bandwidth and network interface bandwidth. This was performed using the Linux command-line utility dstat.

4.2.3 Evaluating intrusion detection systems in high speed networks

Alserhani et al. [37] have focused on signature-based IDSs, with an emphasis on evaluating their performance under high-speed traffic conditions. They selected Snort as a test platform because of its popularity and its status as the de facto IDS standard.

The test bench setup is as follows: the network is composed of six machines connected by a ProCurve Series 2900 switch, as shown in Figure 4.4. The test bench comprises a number of high-performance PCs running open source tools to generate background traffic, run attack signatures and monitor network performance. The hardware description of the network is shown in Figure 4.5. Snort was also tested for its accuracy on different operating system (OS) platforms (Windows and Linux). The platforms were tested by injecting a mixture of heavy network traffic and scripted attacks through the Snort host. snort.conf in its default configuration was selected for evaluation. The performance of Snort was also evaluated under the following variant conditions:

• Generating attacks from different operating system hosts.

• Varying traffic payload, protocol and attack traffic in different scenarios.

• Subjecting it to the hardware constraints of virtual machine configurations.

Figure 4.4: Test Bench

Figure 4.5: Network component specifications

4.2.4 An analysis of packet fragmentation attacks vs Snort

In Fu et al. [38], the Snort IDS was tested. VMware virtual machines were used as both the host and the victim. Other tools were also deployed in order to generate attacks against the IDS. The experiment results show the performance of the Snort IDS while it was under attack, and the ability of Snort to detect attacks in different ways.

This research started with the creation of a virtual network using the virtualization software VMware Workstation 6.0. In order to carry out the packet-fragmentation attack experiments, three virtual machines were included in the network: one victim, one attacker and one test machine were created in VMware Workstation. The attacker generated the attacks and sent them to the victim, in order to test the Snort IDS installed on the victim. The test machine was used to record the packets sent on the network, in order to analyze and replay them.

A variety of tools were installed and configured on the three virtual machines. The victim was equipped with the sniffing tool Wireshark and the intrusion detection tool Snort, for recording the network traffic and testing the intrusion detection capability. The testing tool Metasploit Framework and the scanning tool Nmap were run on the attacker to exploit the vulnerabilities of the victim. Attack packets were generated by Scapy, which was also installed on the attacker. Tcpdump was installed on the test machine and used to capture and save the packets sent on the network. Tcpreplay also ran on the test machine, replaying the collected network traffic.
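Fragmentation attacks exploit how IP splits a payload across packets: fragment offsets are counted in 8-byte units, and every fragment except the last carries a "more fragments" flag. Scapy crafts such packets directly; the dependency-free sketch below shows only the slicing rule itself, with a dictionary per fragment as a hypothetical stand-in for a real IP header:

```python
def fragment(payload: bytes, mtu: int = 8):
    """Split a payload into IP-style fragments. Offsets are in 8-byte units,
    so each non-final fragment must carry a multiple of 8 payload bytes."""
    step = mtu - mtu % 8 if mtu >= 8 else 8  # largest multiple of 8 <= mtu
    frags = []
    for off in range(0, len(payload), step):
        chunk = payload[off:off + step]
        frags.append({
            "offset": off // 8,                  # fragment offset field
            "mf": off + step < len(payload),     # "more fragments" flag
            "data": chunk,
        })
    return frags

# A 20-byte payload with an 8-byte MTU splits into fragments of 8, 8 and 4 bytes.
pieces = fragment(b"A" * 20, mtu=8)
```

An IDS must reassemble these fragments before matching signatures; attacks of the kind tested against Snort rely on overlapping or out-of-order fragments confusing that reassembly step.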
