For this assignment you will be required to review the attached reading assignment. 1) Based on the white paper provided, what are the four goals of effective metrics as defined in the paper? In your own words, explain your understanding of each metric and where and how it can be beneficial. (25 pts per goal clarified)
snort_ids.pdf
web_application_attack_analysis_using_bro_ids___web_application_attack_analysis.pdf
How Can You Build and Leverage SNORT IDS Metrics to Reduce Risk?
GIAC (GCIA) Gold Certification
Author: Tim Proffitt, tim@timproffitt.com
Advisor: David Shinberg
Accepted: August 19th, 2013
Abstract
Many organizations have deployed Snort sensors at their ingress points. Some may have deployed them between segmented internal networks. Others may have IDS sensors littered throughout the organization. Regardless of how the sensor is placed, the IDS can provide a significant view into traffic crossing the network. With this data already being generated, how many organizations create metrics for further analysis? What metrics are valuable to security teams and how are they used? What insights can one gain from good metrics, and how can those insights be used to reduce risk to the organization? This paper will cover current technologies and techniques that can be used to create valuable metrics that aid security teams in making informed decisions.
1. Introduction
Metrics are used in many facets of a person's life and can be quite beneficial to the decision-making process. Is a car still getting the miles per gallon it should? Are invested stocks increasing in value, and at a desirable rate? What should the thermostat be set to this summer to minimize the amount of energy consumed? How much weight can one lose each week to get ready for trips to the community pool? Can a family afford to lease a car over the next three years on a variable income? In the same vein, technologists should be asking questions about IDS activities that can be answered by metrics. Is the sensor operating as intended? Is the sensor alarming on the correct events? Is the IDS seeing an increase or decrease in events, and should this matter? What events should an analyst find interesting? Is a sensor being managed in a manner that offers the best protection for the organization? Is the organization being attacked and not aware of it?
The Snort IDS, created in 1998, has seen very wide deployment over its long history. The open source community has richly supported this tool and offers additional GUI tools that generate graphical views and metrics. Several tools, such as Sguil and Snorby, have emerged to provide a solid platform for analysis by a security team.
Regardless of the technology utilized to generate the sensor alarms, security teams can create processes that will generate valuable data. Utilizing statistical techniques against collected data will allow security teams to build metrics. These metrics can then be used in decision making by management to reduce risk to the organization.
2. Creating Metrics
“On any given network, on any given day, Snort can fire thousands of alerts. Your
task as an intrusion analyst is to sift through the data, extract events of interest, and
separate the false positives from the actual attacks.” (Beale, Baker, et al, 2006)
The term “metrics” describes a broad range of tools and techniques used to
evaluate data (Greitzer 2005). The evaluation of that data is then used as a measurement
compared to one or more reference points to produce a result. A simple technology
security example would be to collect incomplete TCP three-way handshake packets to a destination over a period of time, with the intent to show a trend. This extremely simple example is one of many situations where technology metrics can help a manager make informed decisions about their security infrastructure. A good security team would be concerned if the above IDS metric were trending upward by a factor of two every month. What if the same metric trended downward by half every month? In the first case, one could have an IDS showing that a resource is under a prolonged attack. In the second case, the IDS could have a rule misconfiguration allowing conversations to be conducted but not monitored. Either way, this would be valuable data for a decision maker, or at least a situation that would need attention from a member of the team responsible for the IDS.
The technology-auditing-focused organization ISACA defines information security as the protection of information assets against the risk of loss, operational discontinuity, misuse, unauthorized disclosure, inaccessibility, or damage (Brotby 2009). Technology security is concerned with the potential for legal liability that entities may face as a result of information inaccuracy, loss, or the neglect of care in its protection. A more current definition from CSO management circles describes information security as the triad of confidentiality, integrity and availability. This definition can cause an issue for security teams. How do security teams go about measuring confidentiality or integrity? One can measure availability as it pertains to network outages and system uptimes, but how can metrics be applied to availability as it pertains to technology security? These are very difficult questions to answer. The above simple metric of the sensor recording the TCP three-way handshake does not answer these questions, at least not standing on its own. A metrics program needs to develop sound metrics to answer these questions and others that executive management will need for steering an organization.
2.1. What makes a good metric?
Bad metrics can be found most everywhere. Vendor dashboards are littered with them, presentations contain them, and security teams expect management to make decisions based on them. Take a traditional, out-of-the-package IDS metric that shows the number of signatures being seen by the sensor. This can be valuable data, especially for the IDS team, the intrusion response team, or the individuals responsible for hardening infrastructure. Knowing that a SYN flood is being executed against a critical web server
is important, but the metric says little about the overall security of the organization. Are the intrusion sensors in their current configuration protecting the organization? Is the protection the security team provides now better or worse than last year? Could the budget allocated to managing the IDS be utilized better in a different control? Smaller, technical metrics should be rolled up into a more comprehensive security picture if security teams are going to be successful in creating good metrics and getting the point across to the upper management of the organization. As a good starting point, metrics, measurements and monitoring information can be summarized as being manageable, meaningful, actionable, unambiguous, reliable, accurate, timely and predictive (Brotby, 2009).
To create quality metrics, security teams should strive to:
1. Develop a set of metrics that are repeatable and automated where applicable
2. Create baselines or timelines from the repeatable metrics
3. Have metrics that are actionable enough to make decisions
4. Be meaningful for management decisions
Teams should constantly be asking what needs to be measured and why. If there is not a good answer to "why," the team should consider whether this would make a good metric. Could the metric be used with other metrics to produce an aggregate picture of an overall security control? Many organizations have multiple technologies to combat malware, often at the endpoint, mail gateway, firewall and server. Each of these technologies can produce metrics that can be grouped or aggregated to produce a metric that offers insight into the organization's ability to combat malware.
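The kind of rollup described above can be sketched in a few lines of Python. The snippet below is purely illustrative; the control names and monthly detection counts are hypothetical, not taken from the paper.

# A minimal sketch of rolling several per-technology malware counts into one
# aggregate metric. The control names and counts are hypothetical examples.
monthly_malware_detections = {
    "endpoint_antivirus": 412,
    "mail_gateway": 187,
    "firewall": 95,
    "server_antimalware": 33,
}

total_detections = sum(monthly_malware_detections.values())
print(f"Total malware detections this month: {total_detections}")

# Each control's share of the whole hints at where the organization's
# anti-malware coverage is doing the most work.
for control, count in sorted(monthly_malware_detections.items(),
                             key=lambda item: item[1], reverse=True):
    print(f"  {control}: {count} ({count / total_detections:.1%})")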
2.2. Statistical Techniques
There are several commonly used techniques for analyzing data that can be applied to create IDS metrics. Mean, median, aggregation, standard deviation, grouping, cross-sectional analysis, time series, correlation matrices, quartile analysis and statistical process control can each be leveraged to build meaningful security metrics offering visibility into large data sets. Many of these techniques can be used in conjunction with one another to build more complex and often more insightful metrics.
The mean, or average as it is commonly known, is a standard aggregation metric. The average is the easiest of these techniques to compute: add the elements in the data set and divide by the number of elements in the set. It should be pointed out that averages can be a poor choice for highly variegated data sets, as they can obscure hidden spikes that might be interesting. A data set containing the number of thousands of SYN connections per hour {10,10,10,10,10,10,10,10,10,10} has the same average as the data set {1,1,1,1,90,1,1,1,2,1}. The second data set has a significant deviation (90) that could show interesting activity that might otherwise have been missed if the averages technique were used to present this data set's activity.
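A quick Python illustration (not part of the paper's tooling) shows how the mean hides the spike in the second data set: both sets average 10, yet only the second contains an hour worth investigating.

from statistics import mean

# Hourly SYN-connection counts (in thousands), mirroring the two sets above.
steady = [10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
spiky = [1, 1, 1, 1, 90, 1, 1, 1, 2, 1]

print(mean(steady), mean(spiky))  # both print 10
print(max(spiky))                 # 90, the spike the average obscures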
The median of a data set is the number that separates the top half of the set from the bottom half; half the elements are above the median and half are below. Medians can help particularly with measuring performance. A median metric can aid IDS management in understanding performance or relevance. When a particular signature can be counted by number of instances fired, an analyst can rate his response based on whether the signature is above or below the calculated median.
Figure 1: Median Statistical Example
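A short, hedged sketch of that rating idea follows; the signature IDs and alert counts are invented for illustration and are not from the paper.

from statistics import median

# Hypothetical alert counts per Snort signature over a reporting period.
alerts_per_signature = {
    "1:2001219": 4,
    "1:2010935": 12,
    "1:2013504": 7,
    "1:2402000": 150,
    "1:2522039": 2,
}

mid = median(alerts_per_signature.values())
for sig, count in alerts_per_signature.items():
    rank = "above" if count > mid else "at or below"
    print(f"signature {sig}: {count} alerts ({rank} the median of {mid})")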
Aggregation is a popular technique for consolidating records into some type of summary data. Common aggregation statistics are sum, standard deviation, highest, lowest and count. In most cases aggregation involves averaging numeric values and counting nonnumeric values such as signature descriptions or severity. Highest and lowest aggregation values allow analysis of the most-seen and least-seen data elements. Aggregation is heavily used in technology metrics. Top 20 alerts, total
number of high-ranked signatures, and number of denied signatures are often generated for intrusion sensor dashboards.
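As one way to picture that kind of dashboard aggregation, the sketch below counts a handful of hypothetical alert records by signature and by severity; the signature names and severities are made up for the example.

from collections import Counter

# Hypothetical alert records as they might be exported from a sensor console.
alerts = [
    {"sig": "ET SCAN Nmap", "severity": "medium"},
    {"sig": "ET SCAN Nmap", "severity": "medium"},
    {"sig": "SQL injection attempt", "severity": "high"},
    {"sig": "ICMP PING", "severity": "low"},
    {"sig": "SQL injection attempt", "severity": "high"},
]

# "Top N" alerts and per-severity counts, two common aggregation metrics.
top_alerts = Counter(a["sig"] for a in alerts).most_common(3)
by_severity = Counter(a["severity"] for a in alerts)

print("Top alerts:", top_alerts)
print("Counts by severity:", by_severity)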
Standard deviation measures the dispersion of a data set from the mean. This analysis technique can show whether the data set is tightly clustered or widely dispersed. The smaller the standard deviation, the more uniform the data set; a higher standard deviation indicates an irregular pattern. One can calculate the standard deviation by first calculating the mean of the set. Then, for each element, square the difference between the element and the mean. Add up the squares and divide by the number of elements in the set to produce the variance. The variance provides a measure of dispersion, and the square root of the variance produces the standard deviation (readers interested in further discussion of standard deviation can visit http://www.mathsisfun.com/data/standard-deviation.html). This type of statistical analysis could be used to show the types of TCP socket connection attempts to an organization's internet-accessible assets and whether they could be considered normal.
Figure 2: Example Standard Deviation chart
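The calculation walked through above can be sketched directly; the hourly connection counts below are hypothetical, and a real data set would of course be much larger.

from math import sqrt

# Hypothetical hourly counts of inbound TCP connection attempts.
hourly_connections = [120, 118, 130, 125, 119, 310, 122]

mean_value = sum(hourly_connections) / len(hourly_connections)
variance = sum((x - mean_value) ** 2 for x in hourly_connections) / len(hourly_connections)
std_dev = sqrt(variance)

print(f"mean={mean_value:.1f} variance={variance:.1f} std_dev={std_dev:.1f}")
# A sample sitting far from the mean (310 here) is a candidate for
# "not normal" and worth an analyst's attention.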
Time series analysis is the technique of understanding how a data set has developed over time. The technique is a series of recordings, made at regular intervals, of a data set over a period of time. After grouping and aggregating the data set within the desired interval, the metric is typically sorted in ascending order. A time series technique can be a powerful tool in determining the current state of a technology versus how it has operated in the past.
“Time series analysis is an essential tool in the security analyst’s bag of tricks. It
provides the foundation for other types of analysis. When combined with cross-sectional analysis and quartile analysis, it provides the basis for benchmarking”
(Jaquith 2007).
The time series technique will generate metrics for sensor behavior over a specified
reporting time window. Have the sensors alarmed on more events today than what was
seen last year? Has the number of incidents investigated decreased in the last 6 months?
Figure 3: Example Time Series chart
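A minimal sketch of such a rollup is shown below; the alert timestamps are invented, and in practice the intervals and window would be chosen to fit the reporting need.

from collections import Counter
from datetime import datetime

# Hypothetical alert timestamps exported from a sensor.
alert_times = [
    "2013-08-01 03:15:00", "2013-08-01 14:02:00",
    "2013-08-02 09:30:00", "2013-08-03 11:45:00", "2013-08-03 23:59:00",
]

# Group the alerts into daily buckets so counts can be compared across the window.
per_day = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M:%S").date()
                  for t in alert_times)

for day in sorted(per_day):
    print(day, per_day[day])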
Cross-sectional analysis is a technique that shows how an attribute in the data set varies over a cross section of comparable data. The technique takes a point in time and compares "apples to apples." For example, analysts may want to measure current high-, medium- and low-ranked signatures in a data set. One could draw a sample of 1,000 alarms randomly from that population (also known as a cross section of that data set), document their attack type profile, and calculate what percentage of that data set is categorized as attack signatures. Twenty percent of the sample could be categorized as attacking the organization. This cross-sectional sample provides a snapshot of that population at that one point in time. Note that an analyst does not know, based on one cross-sectional sample, whether the attack alarms are increasing or decreasing; it can only describe the current proportion and what it could mean to the organization.
Figure 4: Cross Sectional Analysis Example
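The sampling step described above can be sketched as follows; the population labels are synthetic and the attack share is simulated rather than measured.

import random

random.seed(42)

# A simulated population of 1,000 alarm records, each labeled by category.
population = (["attack"] * 200) + (["benign"] * 800)

# Draw a random cross section of 100 alarms and measure the attack proportion.
sample = random.sample(population, 100)
attack_share = sample.count("attack") / len(sample)

print(f"{attack_share:.0%} of the sampled alarms are categorized as attacks")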
Quartile analysis shares several traits with cross-sectional analysis. Each requires a collection of attributes to examine, and the analyst identifies a grouping and aggregation technique. In quartile analysis the aggregate is broken into quarters: first, second, third and fourth. The first quarter represents the top (or best) 25% of the aggregation; the fourth is the bottom 25%. By ranking each attribute into quartiles, the analyst gains an understanding of which section each item falls into. This type of analysis can be used to determine how well sensors are being managed and false positive acceptance rates, and to aid in determining outliers (i.e., items in the first and fourth quartiles).
Figure 5: Quartile Analysis Example
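One possible sketch of quartile ranking uses Python's statistics.quantiles; the per-sensor false positive rates below are invented for illustration.

from statistics import quantiles

# Hypothetical false positive rates, one per managed sensor (lower is better).
false_positive_rates = [0.02, 0.05, 0.08, 0.11, 0.15, 0.21, 0.30, 0.42]
q1, q2, q3 = quantiles(false_positive_rates, n=4)  # quartile cut points

for rate in false_positive_rates:
    if rate <= q1:
        quartile = "first (best 25%)"
    elif rate <= q2:
        quartile = "second"
    elif rate <= q3:
        quartile = "third"
    else:
        quartile = "fourth (worst 25%)"
    print(f"{rate:.2f} -> {quartile}")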
Statistical process control is a technique used to determine scenarios outside normal operating patterns, establishing the concept of a baseline. As an example, suppose the security team is recording events from a tuned, managed sensor. A series of line graphs or histograms can be drawn to represent the data as a statistical distribution, giving a picture of the variation in the measurement being recorded. If the process is deemed "stable," then the sensor-generated alarms are in statistical control. If the distribution changes unpredictably over time, the process is said to be out of control. The variation may be large or small, but it is always present. Statistical process control can guide a team to the type of action that is appropriate for trying to improve the functioning of a process being monitored. When the charted data set rises above the statistical upper control limit or falls below the lower control limit, a security team can investigate what caused the change and implement changes to remediate what changed the sensor's statistics.
Figure 6: Statistical Process Control Chart Example
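A simplified, Shewhart-style version of that check is sketched below; the daily alarm counts are hypothetical, and a production control chart would also apply run rules rather than a single three-sigma test.

from statistics import mean, pstdev

# Daily alarm counts from a tuned sensor during a period considered stable.
baseline = [210, 198, 205, 220, 215, 202, 208, 212]
# New observations to test against the baseline's control limits.
recent = {"day 9": 204, "day 10": 480, "day 11": 51}

center = mean(baseline)
sigma = pstdev(baseline)
upper, lower = center + 3 * sigma, center - 3 * sigma

for day, count in recent.items():
    status = "in control" if lower <= count <= upper else "OUT OF CONTROL"
    print(f"{day}: {count} alarms ({status}; limits {lower:.0f}-{upper:.0f})")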
For organizations with extremely large data sets and complex reporting requirements, advanced tool sets from Informatica, Siebel Analytics, Microsoft Business Intelligence or Information Builders may be warranted. For most organizations, the common Excel or OpenOffice spreadsheet can provide plenty of processes for carving through data sets to produce metrics. Fortunately for security teams, there are common statistical analysis tools that can make creating the analytics easier. For example, Microsoft Excel can calculate:
• Standard deviation: =STDEV.P()
• Absolute deviations: =AVEDEV()
• Mean: =AVERAGE()
• Median: =MEDIAN()
• Quartile analysis: =QUARTILE.EXC()
By utilizing spreadsheets, analysts can automatically generate line graphs, scatter
charts, bar graphs and most visual graphics needed to generate metrics.
2.3. How can metrics identify an incident?
An intrusion will typically start off with a series of unsuccessful attempts to compromise a host. Due to the current complexity of authentication systems, clandestine intrusion attempts generally take considerable time before the system is compromised or damaging change is effected on the system, giving administrators a window of opportunity to proactively detect and prevent intrusion (Pillai). Therefore, monitoring IDS patterns can be an effective way of identifying possible attacks. However, an IDS can show an attack attempt but often has no way to validate that it was successful. A host's logs may show a new administrative user being added but give no way to determine whether this was done maliciously. Yet the sequence of alarms, followed almost immediately by the creation of an admin account, is an event that shouts 'successful attack' quite clearly.
Cross-technology correlation between a host event and a monitored event can be a straightforward piece of evidence. Attacks against a host known to have a particular service or vulnerability present can be correlated into metrics. Does the organization have a system that must run in a vulnerable state? Any alarms against this system should be interesting, but when the alarms are coming from several sources and are multiplying, an analyst should be notified. In the case of low and slow reconnaissance scans, many organizations will miss the activity. IDS sensors are typically not configured to escalate on "low and slow" single-packet probes or complex bounce or idle scans. If the signature event is not ranked critical or high and the number of packets is only a few an hour, many times this will not stand out from the potentially millions of events generated that day. A reconnaissance scan of 20 sessions a day may not meet the threshold for an analyst's
attention, but after 90 days the metric can show 1,800 sessions to a resource, which may be interesting. Data collected over time can generate metrics that will show reconnaissance attacks by source and/or destination when the metrics are built.
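The arithmetic behind that observation is trivial but worth making explicit; the sketch below assumes a hypothetical per-day alerting threshold of 100 sessions.

# "Low and slow" probing: roughly 20 sessions a day against one resource.
sessions_per_day = [20] * 90
daily_threshold = 100  # assumed per-day alerting threshold, for illustration

flagged_daily = any(count >= daily_threshold for count in sessions_per_day)
total_sessions = sum(sessions_per_day)

print("Flagged by a per-day threshold:", flagged_daily)   # False
print("Sessions over 90 days:", total_sessions)           # 1800, worth a look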
Baselines produce a powerful advantage from existing metrics. A baseline can be defined as a normal condition. A data set can then be measured, typically against the baseline, to show deviations. Most baselines are established at a point in time and serve to continue tracking measurements against the reference point. By utilizing bas …