
4.4.5 Other programs

In this section we evaluate pytbull and Snorby, focusing on installation and usage.

4.4.5.1 Pytbull installation

The installation of the program was straightforward: following the instructions on pytbull's homepage18, the client and server were easy to set up. The program "ncrack" had to be installed on the client so that the bruteForce test module could run; installing "ncrack" requires additional packages.

4.4.5.2 Pytbull usage

The program's usage instructions are insufficient: they contain only one example of starting the program from a terminal, so in order to learn the available terminal arguments we had to read the pytbull code. The program is also easy to misunderstand, because it starts by printing an IPv6 warning followed by the menu; this can be mistaken for an error and for the tests having failed, when in fact the tests had simply not been executed yet, only the initialization. Another issue concerned FTP on the server (victim machine): the pytbull homepage contains no documentation about setting up users and passwords. A further drawback is that pytbull cannot save or export results; every time you run it, your previous results are erased.

18 http://pytbull.sourceforge.net/index.php?page=documentation
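Since pytbull erases previous results on every run, a small wrapper that archives the report output before starting a new run can work around the missing export feature. The sketch below is a minimal workaround, assuming pytbull writes its output to a report/ directory next to the executable; both that path and the exact invocation are assumptions on our part, not documented pytbull behaviour.

    import shutil
    import subprocess
    import sys
    from datetime import datetime
    from pathlib import Path

    # Assumed location of pytbull's output; adjust to your installation.
    REPORT_DIR = Path("report")
    ARCHIVE_DIR = Path("report-archive")

    def archive_previous_results():
        """Copy the previous report directory to a timestamped archive."""
        if REPORT_DIR.is_dir():
            stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
            shutil.copytree(REPORT_DIR, ARCHIVE_DIR / stamp)

    def run_pytbull(args):
        """Archive old results, then start pytbull with the given arguments."""
        archive_previous_results()
        # Hypothetical invocation; consult the pytbull documentation
        # for the actual command line on your system.
        subprocess.run(["sudo", "./pytbull", *args], check=True)

    if __name__ == "__main__":
        run_pytbull(sys.argv[1:])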

Figure 4.7: Screenshots of pytbull: (a) Overview, (b) Diagrams, (c) Results, (d) Alert.

4.4.5.3 Snorby installation and usage

The keyword for this installation was "dependencies" between different packages. Snorby has a nice look and easily understandable menus. Sometimes, however, logging in fails, which appears to be a bug in the program. When you delete your discovered sensors you need to reboot the computer, which is a bit annoying. Snorby has some predefined categories of signatures, but it fails to classify the alert signatures that Snort has found.

Chapter 5

Best practice

In this chapter we first cover measurable IDS characteristics and the challenges of IDS testing. Next we list the appropriate tools to use and suggest test procedures. The chapter is intended as advice for others who intend to test an IDS.

5.1 Quantitatively measurable IDS characteristics

In this section we list a partial set of measurements that can be made on IDSs, following Mell et al. [41]. The focus is specifically on those measurements that are quantitative and that relate to detection accuracy:

• Coverage: This measurement determines which attacks an IDS can detect under ideal conditions. For signature-based systems, this would simply consist of counting the number of signatures and mapping them to a standard naming scheme (a counting sketch follows this list). For non-signature-based systems, one would need to determine which attacks, out of the set of all known attacks, could be detected by a particular methodology.

• Probability of False Alarms: This measurement determines the rate of false positives produced by an IDS in a given environment during a particular time frame. A false positive, or false alarm, is an alert caused by normal, non-malicious background traffic. (Formal definitions of this rate and the detection rate follow this list.)

• Probability of Detection: This measurement determines the rate of attacks detected correctly by an IDS in a given environment during a particular time frame. The difficulty in measuring the detection rate is that the success of an IDS is largely dependent upon the set of attacks used during the test.

• Resistance to Attacks Directed at the IDS: This measurement demonstrates how resistant an IDS is to an attacker's attempt to disrupt the correct operation of the IDS. Attacks against an IDS may take the form of:

1. Sending a large amount of non-attack traffic with volume exceeding the IDS's processing capability. With too much traffic to process, an IDS may drop packets and be unable to detect attacks.

2. Sending to the IDS non-attack packets that are specially crafted to trigger many signatures within the IDS, thereby overwhelming the IDS's human operator with false positives or crashing alert processing or display tools.

3. Sending to the IDS a large number of attack packets intended to distract the IDS's human operator while the attacker instigates a real attack hidden under the smokescreen created by the multitude of other attacks.

4. Sending to the IDS packets containing data that exploit a vulnerability within the IDS processing algorithms. Such attacks will only be successful if the IDS contains a known coding error that can be exploited by a clever attacker. Fortunately, very few IDSs have had known exploitable buffer overflows or other vulnerabilities.

• Ability to Handle High Bandwidth Traffic: This measurement demonstrates how well an IDS will function when presented with a large volume of traffic. Most network-based IDSs will begin to drop packets as the traffic volume increases, thereby causing the IDS to miss a percentage of the attacks.

• Ability to Correlate Events: This measurement demonstrates how well an IDS correlates attack events. These events may be gathered from IDSs, routers, firewalls, application logs, or a wide variety of other devices.

• Ability to Detect Never Before Seen Attacks: This measurement demonstrates how well an IDS can detect attacks that have not occurred before. For commercial systems, it is generally not useful to take this measurement, since their signature-based technology can only detect attacks that have occurred previously.

• Ability to Identify an Attack: This measurement demonstrates how well an IDS can identify the attack that it has detected, by labeling each attack with a common name or vulnerability name or by assigning the attack to a category.

• Ability to Determine Attack Success: This measurement demonstrates whether the IDS can determine the success of attacks from remote sites that give the attacker higher-level privileges on the attacked system. In current network environments, many remote privilege-gaining attacks (or probes) fail and do not damage the system attacked.

• Capacity Verification for NIDS: The NIDS demands higher-level protocol awareness than other network devices such as switches and routers; it must be able to inspect network packets at a deeper level.

• Other Measurements: There are other measurements, such as ease of use, ease of maintenance, deployment issues, resource requirements, and availability and quality of support. These measurements are not directly related to IDS performance but may be more significant in many commercial situations.
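The false alarm and detection rates above can be stated more precisely. One common formalization, counting true/false positives and negatives over a test run (our notation, not taken from Mell et al. [41]), is

    \[
      P_D = \frac{TP}{TP + FN}, \qquad P_{FA} = \frac{FP}{FP + TN}
    \]

where TP is the number of attacks correctly detected, FN the number of attacks missed, FP the number of alerts raised on benign traffic, and TN the number of benign events that correctly raised no alert.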
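As a rough illustration of the coverage measurement for a signature-based IDS such as Snort, counting signatures and mapping them to a standard naming scheme can be approximated by scanning the rule files for CVE references. The sketch below assumes Snort-style rules containing "reference:cve,...;" options and a rules directory of /etc/snort/rules; both are assumptions about a typical installation, and multi-line rules are ignored for simplicity.

    import re
    from collections import Counter
    from pathlib import Path

    # Assumed location of the Snort rule files; adjust to your installation.
    RULES_DIR = Path("/etc/snort/rules")

    # Snort rules reference CVE entries as "reference:cve,YYYY-NNNN;".
    CVE_RE = re.compile(r"reference:cve,(\d{4}-\d+)", re.IGNORECASE)

    def coverage_by_cve(rules_dir=RULES_DIR):
        """Count rules and the CVE identifiers they map to."""
        rule_count = 0
        cves = Counter()
        for rule_file in rules_dir.glob("*.rules"):
            for line in rule_file.read_text(errors="replace").splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # skip comments and blank lines
                rule_count += 1
                cves.update(CVE_RE.findall(line))
        return rule_count, cves

    if __name__ == "__main__":
        rules, cves = coverage_by_cve()
        print(f"{rules} rules, {len(cves)} distinct CVEs referenced")

Counting distinct CVE identifiers gives only a crude lower bound on coverage under a standard naming scheme, since rules without a CVE reference are not captured by this count.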
