
International Journal of Engineering Trends and Technology (IJETT) Volume 4 Issue 8- August 2013

ISSN: 2231-5381 http://www.ijettjournal.org Page 3531



Protection of Different Agents with Improved IDS Detection
L.Gomathi
Associate Professor
Muthayammal College of Arts & Science
K.Priya
Muthayammal College of Arts & Science

Abstract
To protect network infrastructure from malicious
traffic, such as DDoS attacks and scanning, source filtering is
widely used in the network. There are different ways to store
the filters, e.g., as a blacklist of source addresses. Among them,
TCAM-based filtering is the de facto standard because of its
wire-speed performance. Unfortunately, TCAM is a scarce
resource, limited by small capacity, high power consumption
and high cost. Separately, a data distributor may give sensitive
data to a set of trusted agents, which can be regarded as third
parties. There is a chance that some of the data is leaked and
found in an unauthorized place. This situation is called data
leakage. In existing work, watermarking is used to identify the
leakage, or realistic-looking fake data is injected into the data
set. I propose data allocation strategies that improve the
probability of identifying leakages. As an enhancement, I also
investigate agent guilt models that capture leakage scenarios.
KEYWORDS: DDoS, Filtering, IP, Traffic Analysis,
Clustering, Classification, Internet Security
1. Introduction
As the Internet grows, malicious users continue to
find intelligent and insidious ways to attack it. Many types of
attacks happen every day, but one particular kind, the denial-of-
service (DoS) attack, remains the most common, accounting
for more than a third of all malicious behavior on the Internet
in 2011 [1]. The main goal of these attacks is literally to deny
some or all legitimate users access to a particular Internet
service, harming the service as a whole. In the extreme case,
when the attack is aimed at the core Internet infrastructure
(e.g., attacks on the root DNS servers [2]), the whole Internet
could be jeopardized. There is a clear need for comprehensive,
cheap, and easily deployable DoS protection mechanisms.
Attackers may have different motivations (extortion,
vengeance, or simple malice) and the goal of a DoS attack
could be achieved in many ways. Thus, there is a wide variety
of attack methods available [3] and a growing number of
proposed defense mechanisms to stop or mitigate them. Many
of the proposed DoS defenses are both clever and potentially
effective [4]. However, the most common question with DoS
defenses is how to deploy them. Some defenses require
deployment in core routers [5], but the tier 1 ASes that own
these routers have little incentive to do so. The economic
model of all transit providers, including tier 1 providers,
consists of charging for the amounts of forwarded traffic.
Thus, such providers are extremely cautious with any kind of
filtering, as they risk the loss of money or even customers. In
addition, unless fully deployed by every major ISP, core
defenses generally provide very limited protection.
A major threat to the reliability of Internet services is
the growth in stealthy and coordinated attacks, such as scans,
worms and distributed denial-of-service (DDoS) attacks.
While intrusion detection systems (IDSs) provide the ability to
detect a wide variety of attacks, traditional IDSs focus on

monitoring a single subnetwork. This limits their ability to
detect coordinated attacks in a scalable and accurate manner,
since they lack the ability to correlate evidence from multiple
subnetworks. An important challenge for intrusion detection
research is how to efficiently correlate evidence from multiple
subnetworks. Collaborative intrusion detection systems
(CIDSs) aim to address this research challenge. A CIDS
consists of a set of individual IDSs from different
network administrative domains or organizations, which
cooperate to detect coordinated attacks. Each IDS reports any
alerts of suspicious behaviour that it has collected from its
local monitored network, then the CIDS correlates these alerts
to identify coordinated attacks that affect multiple
subnetworks. A key component of a CIDS is the alert
correlation algorithm, which clusters similar incidents
observed by different IDSs, prioritises these incidents, and
identifies false alerts generated by individual IDSs. The
problem of alert correlation (also known as event correlation)
is an active area of research. A key issue is how to improve
the scalability of alert correlation while still maintaining the
expressiveness of the patterns that can be found.
Single-dimensional correlation schemes have been widely
studied due to their simplicity, but they lack the
expressiveness to characterize many types of attack behaviors.
For example, such schemes can correlate alerts pertaining to
the same source addresses, but cannot discriminate between
different types of behaviour. More sophisticated schemes use
multi-dimensional correlation to identify patterns in events.
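The distinction between single- and multi-dimensional correlation can be illustrated with a minimal Python sketch (an illustrative example, not the correlation algorithm of any particular CIDS; the alert fields `src_ip`, `attack_type` and `sensor` are assumed names):

```python
from collections import defaultdict

def correlate_alerts(alerts, keys=("src_ip",)):
    """Group IDS alerts by the chosen correlation dimensions.

    Each alert is a dict. A single-dimensional scheme correlates on
    one field, e.g. keys=("src_ip",); a multi-dimensional scheme adds
    further fields, e.g. keys=("src_ip", "attack_type").
    """
    clusters = defaultdict(list)
    for alert in alerts:
        clusters[tuple(alert[k] for k in keys)].append(alert)
    return clusters

alerts = [
    {"src_ip": "10.0.0.1", "attack_type": "scan", "sensor": "A"},
    {"src_ip": "10.0.0.1", "attack_type": "scan", "sensor": "B"},
    {"src_ip": "10.0.0.1", "attack_type": "flood", "sensor": "C"},
]

# Correlating on source address alone merges all three alerts into one
# cluster; adding the attack-type dimension keeps the scan activity
# separate from the flood, preserving the behavioural distinction.
one_d = correlate_alerts(alerts, keys=("src_ip",))
two_d = correlate_alerts(alerts, keys=("src_ip", "attack_type"))
```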
2. RELATED WORK
The new DoS attack, called Ad Hoc Flooding
Attack (AHFA), can result in denial of service when used
against on-demand routing protocols for mobile ad hoc
networks, such as AODV and DSR. Wei-Shen Lai et al. [3] have
proposed a scheme to monitor the traffic pattern in order to
alleviate distributed denial of service attacks. Shabana
Mehfuz et al. [4] have proposed a new secure power-aware
ant routing algorithm (SPA-ARA) for mobile ad hoc networks
that is inspired by ant colony optimization (ACO)
algorithms, a swarm intelligence technique. Giriraj
Chauhan and Sukumar Nandi [5] proposed a QoS aware on
demand routing protocol that uses signal stability as the
routing criteria along with other QoS metrics. Xiapu Luo et
al. [6] have presented the important problem of detecting pulsing
denial of service (PDoS) attacks, which send a sequence of
attack pulses to reduce TCP throughput. Xiaoxin Wu et al. [7]
proposed a DoS mitigation technique that uses digital
signatures to verify legitimate packets and drops packets that
do not pass the verification. S.A. Arunmozhi and
Y. Venkataramani [8] proposed a defense scheme for DDoS
attacks in which they use MAC-layer information such as the
frequency of RTS/CTS packets, sensing of a busy channel and
the number of RTS/DATA retransmissions. Jae-Hyun Jun, Hyunju
Oh, and Sung-Ho Kim proposed a scheme in which they use an
entropy-based detection mechanism against DDoS attacks in
order to guarantee the transmission of normal traffic and
prevent the flood of abnormal traffic. Qi Chen, Wenmin Lin,
Wanchun Dou, Shui Yu [10] proposed a Confidence-Based
Filtering method (CBF) to detect DDoS attacks in a cloud
computing environment, in which anomaly detection is used: a
normal profile of the network is formed during non-attack
periods, and CBF is used to detect the attacker during attack periods.
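The entropy-based detection idea mentioned above can be sketched in a few lines of Python (the traffic values below are illustrative assumptions, not data from the cited work): under normal operation the source-address distribution is diverse and its entropy is high and stable, while a flood dominated by few sources pulls the entropy away from the profiled baseline.

```python
import math
from collections import Counter

def source_entropy(packet_sources):
    """Shannon entropy (in bits) of the source-address distribution
    observed in a traffic window."""
    counts = Counter(packet_sources)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())

# 64 distinct, equally active sources: entropy = log2(64) = 6 bits.
normal = ["10.0.%d.%d" % (i % 8, i) for i in range(64)]
# The same background traffic plus one source flooding 1000 packets:
# the distribution is heavily skewed, so the entropy collapses.
attack = normal + ["6.6.6.6"] * 1000

# A detector would profile the baseline entropy during non-attack
# periods and raise an alarm when the observed value deviates from it
# by more than a tuned threshold.
```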
3. Methods
To train and evaluate our detection system, overall
we used about 10 months of data collected through the ISC
Security Information Exchange from June 2010 to March
2011. We used about four months of data (from June 2010 to
September 2010) to build a labeled dataset, which we will
refer to as LDS. We used LDS for two purposes: (1) for
estimating the accuracy of the Classifier module through 10-
fold cross-validation; and (2) to train FluxBuster's Classifier
module before deployment. After training, we used
approximately one additional month of data for a preliminary
validation of the system and parameter tuning, and finally we
deployed and evaluated it.
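The 10-fold cross-validation used to estimate classifier accuracy can be sketched as follows (a generic illustration with a toy majority-class model, not the actual FluxBuster classifier or dataset):

```python
def k_fold_accuracy(samples, labels, train_fn, predict_fn, k=10):
    """Estimate classifier accuracy by k-fold cross-validation: split
    the labeled dataset into k folds, train on k-1 folds, test on the
    held-out fold, and average the per-fold accuracies."""
    n = len(samples)
    fold_size = n // k
    accuracies = []
    for i in range(k):
        lo = i * fold_size
        hi = (i + 1) * fold_size if i < k - 1 else n
        train_x = samples[:lo] + samples[hi:]
        train_y = labels[:lo] + labels[hi:]
        model = train_fn(train_x, train_y)
        correct = sum(predict_fn(model, x) == y
                      for x, y in zip(samples[lo:hi], labels[lo:hi]))
        accuracies.append(correct / (hi - lo))
    return sum(accuracies) / k

# Toy model: always predict the majority label seen during training.
train = lambda xs, ys: max(set(ys), key=ys.count)
predict = lambda model, x: model

data = list(range(20))
labs = ["benign"] * 15 + ["malicious"] * 5
acc = k_fold_accuracy(data, labs, train, predict, k=10)
```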

A practical deployment scenario is that of a single
network under the same administrative authority, such as an
ISP or a campus network. The operator can use our algorithms
to install filters at a single edge router or at several routers, in
order to optimize the use of its resources and to defend against
an attack in a cost-efficient way. Our distributed algorithm
may also be useful, not only for routers within the same ISP,
but also, in the future, when different ISPs start cooperating
against common enemies. ACLs vs. firewall rules. Our
algorithms may also be applicable in a different context: to
configure firewall rules to protect public-access networks,
such as university campus networks or web-hosting networks.
Unlike routers where TCAM puts a hard limit on the number
of ACLs, there is no hard limit on the number of firewall
rules in software; however, there is still an incentive to
minimize their number and thus any associated
performance penalty [22]. There is a body of work on firewall
rule management and (mis)configuration [23], which aims at
detecting anomalies such as the existence of multiple firewall
rules that match the same packet, or the existence of a rule
that will never match packets flowing through a specific
firewall. In contrast, we focus on resource allocation: given a
blacklist and a whitelist as input to the problem, our goal is to
optimally select which prefixes to filter so as to optimize an
appropriate objective subject to the constraints.
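As a concrete illustration of this resource-allocation view (a greedy sketch under assumed inputs, not the optimization algorithm of this paper): each candidate prefix covers some blacklisted traffic that a filter would block, but may also cover whitelisted traffic as collateral damage, and only a fixed number of TCAM filters is available.

```python
def select_filters(candidates, budget, collateral_weight=10.0):
    """Greedy filter selection sketch.

    Each candidate is a (prefix, bad_traffic, good_traffic) tuple:
    installing a filter for the prefix blocks bad_traffic units from
    blacklisted sources, but also good_traffic units from whitelisted
    sources it covers. With a hard TCAM budget, keep the prefixes with
    the best net benefit (collateral damage weighted more heavily)."""
    def benefit(c):
        _, bad, good = c
        return bad - collateral_weight * good

    ranked = sorted(candidates, key=benefit, reverse=True)
    return [prefix for prefix, bad, good in ranked[:budget]
            if benefit((prefix, bad, good)) > 0]

candidates = [
    ("198.51.100.0/24", 900, 0),   # pure attack prefix
    ("203.0.113.0/24",  500, 60),  # mixed prefix: heavy collateral damage
    ("192.0.2.7/32",    300, 0),   # single attacker
]
# With only two filters, the mixed prefix is rejected: its collateral
# damage outweighs the attack traffic it would block.
chosen = select_filters(candidates, budget=2)
```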
4. Result and Analysis
The proposed two-stage alert correlation scheme
equipped with the probabilistic threshold estimation achieves
significant advantage in detection rate over a naive threshold
selection scheme for stealthy attack scenarios. The 98%
confidence-interval scheme achieves a high detection rate
without significant increase in the number of messages
exchanged. Our results demonstrate that by using this
probabilistic confidence limit to estimate the local support
threshold in our two-stage architecture, we are able to capture
most of the variation between different subnetworks during a
stealthy scan.
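A minimal sketch of such a probabilistic threshold estimate (an illustrative normal-approximation version; the z value and sample counts are assumptions, not this paper's exact procedure): instead of a single fixed threshold for every subnetwork, the local support threshold is set from a confidence limit on the observed per-subnetwork alert counts.

```python
import math

def local_support_threshold(counts, z=2.33):
    """Estimate a local support threshold as the lower confidence
    limit on per-subnetwork alert counts (normal approximation).
    z = 2.33 corresponds roughly to a 98% two-sided interval; a naive
    scheme would use one fixed threshold for all subnetworks."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / (n - 1)  # sample variance
    stderr = math.sqrt(var / n)
    return max(0.0, mean - z * stderr)

# Alert counts observed by six cooperating subnetworks during a window:
counts = [12, 9, 11, 10, 8, 13]
threshold = local_support_threshold(counts)
# The threshold sits below the mean by z standard errors, so moderate
# variation between subnetworks does not suppress a genuine pattern.
```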

Fig. 3: CIDS percentage of load subscriptions.
5. Conclusion
In a perfect world, there would be no need to hand
over sensitive data to agents that may unknowingly or
maliciously leak it. And even if sensitive data had to be
handed over, in a perfect world the distributor could
watermark each object so that its origins could be traced with
absolute certainty. However, in many cases the distributor
must indeed work with agents that may not be 100 percent
trusted, and may not be certain whether a leaked object came
from an agent or from some other source, since certain data
cannot admit watermarks. In spite of these difficulties, I have
shown that it is possible to assess the likelihood that an agent
is responsible for a leak, based on the overlap of his data with
the leaked data and the data of other agents, and based on the
probability that objects can be guessed by other means. This
model is relatively simple, but I believe that it captures the
essential trade-offs. The algorithms I have presented
implement a variety of data distribution strategies that can
improve the distributor's chances of identifying a leaker. I
have shown that distributing objects judiciously can make a
significant difference in identifying guilty agents, especially
in cases where there is large overlap in the data that agents
must receive.
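One simple instantiation of such a guilt model can be sketched as follows (an illustrative version: the guess probability `p_guess` and the assumption that each leaked object was equally likely to come from any agent holding it are modelling choices, not results from this paper):

```python
def guilt_probability(agent, leaked, allocations, p_guess=0.2):
    """Estimate the probability that `agent` leaked at least one
    object. Each leaked object was either guessed from other sources
    (probability p_guess) or leaked by one of the agents that received
    it, assumed equally likely among those holders."""
    prob_innocent = 1.0
    for obj in leaked:
        holders = [a for a, objs in allocations.items() if obj in objs]
        if agent in holders:
            # Probability this particular object was NOT leaked by `agent`.
            prob_innocent *= 1.0 - (1.0 - p_guess) / len(holders)
    return 1.0 - prob_innocent

allocations = {
    "agent1": {"t1", "t2"},
    "agent2": {"t2", "t3"},
}
leaked = {"t1", "t2"}

g1 = guilt_probability("agent1", leaked, allocations)
g2 = guilt_probability("agent2", leaked, allocations)
# agent1 is the sole holder of the leaked object t1, so its guilt
# estimate exceeds agent2's; overlap in allocations dilutes evidence,
# which is why judicious data distribution helps identify leakers.
```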
REFERENCES

[1] Trustwave SpiderLabs, The Web hacking incident
database: Semiannual report, July to December 2010, 2011.
[2] R. Naraine, Massive DDoS attack hit DNS root servers,
InternetNews.com, October 2002,
http://www.esecurityplanet.com/trends/article.php/1486981/
Massive-DDoS-Attack-Hit-DNS-Root-Servers.htm.
[3] J. Mirkovic and P. Reiher, A taxonomy of DDoS attack
and DDoS defense mechanisms, ACM SIGCOMM Computer
Communication Review, vol. 34, no. 2, pp. 39–53, 2004.
[4] T. Peng, C. Leckie, and K. Ramamohanarao, Survey of
network-based defense mechanisms countering the DoS and
DDoS problems, ACM Computing Surveys (CSUR), vol. 39,
no. 1, 2007.
[5] E. Kline, M. Beaumont-Gay, J. Mirkovic, and P. Reiher,
RAD: Reflector attack defense using message authentication
codes, in Proceedings of Annual Computer Security
Applications Conference (ACSAC), 2009, pp. 269–278.
[6] P. Ferguson and D. Senie, Network ingress filtering:
Defeating denial of service attacks which employ IP source
address spoofing, RFC 2827, May 2000.
[7] R. Beverly and S. Bauer, The spoofer project: Inferring
the extent of source address filtering on the internet, in
Proceedings of USENIX SRUTI, 2005, pp. 53–59.
[8] Y. Rekhter, T. Li, and S. Hares, A Border Gateway
Protocol 4 (BGP-4), RFC 4271, January 2006.
[9] C. Partridge, T. Mendez, and W. Milliken, Host
Anycasting Service, RFC 1546, November 1993.
[10] H. Ballani and P. Francis, Towards a global IP anycast
service, in Proceedings of SIGCOMM, vol. 35, no. 4, August
2005, pp. 301–312.
[11] D. Farinacci, T. Li, S. Hanks, D. Meyer, and P. Traina,
Generic Routing Encapsulation (GRE), RFC 2784, March
2000.
[12] J. Mirkovic and E. Kissel, Comparative evaluation of
spoofing defenses, IEEE Transactions on Dependable and
Secure Computing, pp. 218–232, 2009.
[13] J. Postel, Internet Protocol, RFC 791, September 1981.
[14] R. Govindan and H. Tangmunarunkit, Heuristics for
Internet map discovery, in Proceedings of INFOCOM, vol. 3,
2000, pp. 1371–1380.
L.Gomathi received her BCA degree from
Amman Arts & Science College and her MCA degree from
Bharathidasan University. She completed her M.Phil at
Periyar University. She has 7 years of experience in
collegiate teaching and is Head of the Department of
Computer Applications at Muthayammal College of Arts and
Science, affiliated to Periyar University.
K.Priya received her UG and PG degrees from Trinity
College for Women. Her area of interest is Data Mining.
