Security
PDF generated using the open source mwlib toolkit. See http://code.pediapress.com/ for more information. PDF generated at: Sun, 15 Aug 2010 17:58:06 UTC
Contents

Articles

General security
Security
Security risk
ISO/IEC 27000
ISO/IEC 27000-series
Threat
Risk

1.0 Personal
Authentication
Authorization
Social engineering (security)
Security engineering

2.0 Physical
Physical security

5.0 Networks
Communications security
Network security

5.1 Internet
Book:Internet security
Firewall (computing)
Denial-of-service attack
Spam (electronic)
Phishing

6.0 Information
Data security
Information security
Encryption
Cryptography
Bruce Schneier

7.0 Application
Application security
Application software
Software cracking

References
Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License
General security
Security
Security is the degree of protection against danger, damage, loss, and criminal activity. Security as a form of protection comprises structures and processes that provide or improve security as a condition. The Institute for Security and Open Methodologies (ISECOM) in the OSSTMM 3 defines security as "a form of protection where a separation is created between the assets and the threat. This includes but is not limited to the elimination of either the asset or the threat." Security as a national condition was defined in a United Nations study (1986): a condition in which countries can develop and progress freely.

Security has to be compared with related concepts: safety, continuity, reliability. The key difference between security and reliability is that security must take into account the actions of people attempting to cause destruction.

Different scenarios also give rise to the context in which security is maintained:

With respect to classified matter, security is the condition that prevents unauthorized persons from having access to official information that is safeguarded in the interests of national security.

Measures taken by a military unit, an activity or installation to protect itself against all acts designed to, or which may, impair its effectiveness.
X-ray machines and metal detectors are used to control what is allowed to pass through an airport security perimeter.
Security theater is a critical term for the deployment of measures aimed primarily at raising subjective security in a population, without a genuine or commensurate concern for the effects of those measures on objective security, which they may even decrease. For example, some consider the screening of airline passengers based on static databases to have been security theater, and the Computer Assisted Passenger Prescreening System to have created a decrease in objective security.

Security checkpoint at the entrance to the Delta Air Lines corporate headquarters in Atlanta.

Perception of security can also increase objective security when it affects or deters malicious behavior, as with visible signs of security protections, such as video surveillance, alarm systems in a home, or an anti-theft system in a car such as a LoJack, and the signs advertising them. Since some intruders will decide not to attempt to break into such areas or vehicles, there can actually be less damage to windows, in addition to protection of the valuable objects inside. Without such advertisement, a car thief might, for example, approach a car, break the window, and then flee in response to the alarm being triggered. Either way, the car itself and the objects inside may not be stolen, but with perceived security even the windows of the car have a lower chance of being damaged, increasing the financial security of its owner(s). However, the non-profit security research group ISECOM has determined that such signs may actually increase the violence, daring, and desperation of an intruder.[1] This suggests that perceived security works mostly for the provider and is not security at all.[2] It is important, however, for signs advertising security not to give clues as to how to subvert that security, for example in the case where a home burglar might be more likely to break into a certain home if he or she is able to learn beforehand which company makes its security system.
Categorising security
There is an immense literature on the analysis and categorisation of security. Part of the reason for this is that, in most security systems, the "weakest link in the chain" is the most important. The situation is asymmetric since the defender must cover all points of attack while the attacker need only identify a single weak point upon which to concentrate.
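The weakest-link asymmetry described above can be sketched in a few lines. The control names and strength scores below are hypothetical illustrations, not measurements of any real system:

```python
# Sketch of the weakest-link asymmetry: the defender's effective security
# is bounded by the weakest control, while an attacker need only pick the
# single weakest point. Names and 0-1 scores are hypothetical assumptions.
defenses = {
    "firewall": 0.90,
    "patching": 0.95,
    "passwords": 0.40,   # the weak link
    "physical": 0.80,
}

defender_strength = min(defenses.values())         # bounded by the weakest control
attacker_target = min(defenses, key=defenses.get)  # the one point worth attacking
```

The defender's position is only as strong as the 0.40 control, however strong the others are, which is why categorising and auditing every link matters.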
Types
IT realm:
Application security
Computing security
Data security
Information security
Network security

Physical realm:
Airport security
Port security/Supply chain security
Food security
Home security
Hospital security
Physical security
School security
Shopping centre security
Infrastructure security

Political:
Homeland security
Human security
International security
National security
Public security
Aviation security is a combination of measures and material and human resources intended to counter unlawful interference with aviation. Operations Security (OPSEC) is a complement to other "traditional" security measures that evaluates the organization from an adversarial perspective.[3]
Security concepts
Certain concepts recur throughout different fields of security:

Assurance - the level of guarantee that a security system will behave as expected
Countermeasure - a way to stop a threat from triggering a risk event
Defense in depth - never rely on one single security measure alone
Exploit - a vulnerability that has been triggered by a threat; a risk of 1.0 (100%)
Risk - a possible event which could cause a loss
Threat - a method of triggering a risk event that is dangerous
Vulnerability - a weakness in a target that can potentially be exploited by a threat
National security: Richard A. Clarke; David H. Holtzman
Physical security: James F. Pastor
See also
Concepts:
3D Security
Classified information
Insecurity
ISO 27000
ISO 28000
ISO 31000
Security breach
Security increase
Security Risk
Surveillance
Wireless sensor network

Branches:
Computer security
Cracking
Hacking
MySecureCyberspace
Phreaking
Communications security
Human security
Information security
CISSP
National security
Physical Security
Police
Public Security Bureau
Security guard
Security police
References
[1] http://wiki.answers.com/Q/Do_home_security_systems_prevent_burglaries
[2] http://www.isecom.org/hsm
[3] OSPA Website (http://www.opsecprofessionals.org/)
[4] Taming the Two-Headed Beast (http://www.csoonline.com/read/090402/beast.html), CSOonline, September 2002
[5] Security 2.0 (http://www.csoonline.com/read/041505/constellation.html), CSOonline, April 2005
[6] AESRM Website (http://www.aesrm.org/)
Security risk
Security risk applies the concept of risk within the security risk management paradigm in order to make particular determinations about security-oriented events.
Introduction
Security risk is the demarcation of risk into the security silo, separated from the broader enterprise risk management framework for the purposes of isolating and analysing unique events, outcomes and consequences.[1] Security risk is often represented, quantitatively, as any event that compromises the assets, operations and objectives of an organisation. In the security paradigm, 'events' comprise those undertaken intentionally by actors for purposes that adversely affect the organisation. The role of the 'actors' and the intentionality of the 'events' provide the differentiation of security risk from other risk management silos, particularly those of safety, environment, quality, operational and financial risk.
References
[1] Function of security risk assessments to ERM (http://www.optaresystems.com/index.php/optare/publication_detail/security_risk_assessment_enterprise_risk_management/)
[2] Keller, C., Siegrist, M., & Gutscher, H. (2006). The Role of the Affect and Availability Heuristics in Risk Communication. Risk Analysis, Vol. 26, No. 3.
[3] Heuristics and risk perception: risk assessment pitfalls (http://www.optaresystems.com/index.php/optare/publication_detail/heuristics_risk_perception_risk_assessment_pitfalls/)
ISO/IEC 27000
ISO/IEC 27000 is part of a growing family of ISO/IEC Information Security Management Systems (ISMS) standards, the 'ISO/IEC 27000 series'. ISO/IEC 27000 is an international standard entitled "Information technology - Security techniques - Information security management systems - Overview and vocabulary". The standard was developed by sub-committee 27 (SC27) of the first Joint Technical Committee (JTC1) of the International Organization for Standardization and the International Electrotechnical Commission.

ISO/IEC 27000 provides:
An overview of and introduction to the entire ISO/IEC 27000 family of Information Security Management Systems (ISMS) standards; and
A glossary or vocabulary of fundamental terms and definitions used throughout the ISO/IEC 27000 family.

Information security, like many technical subjects, has evolved a complex web of terminology. Relatively few authors take the trouble to define precisely what they mean, an approach which is unacceptable in the standards arena, as it potentially leads to confusion and devalues formal assessment and certification. As with ISO 9000 and ISO 14000, the base '000' standard is intended to address this.
Status
Current version: ISO/IEC 27000:2009, available from ISO/ITTF as a free download [1]
Target audience: users of the remaining ISO/IEC 27000-series information security management standards
See also
ISO/IEC 27000-series
ISO/IEC 27001
ISO/IEC 27002 (formerly ISO/IEC 17799)
References
[1] http://standards.iso.org/ittf/PubliclyAvailableStandards/c041933_ISO_IEC_27000_2009.zip
ISO/IEC 27000-series
The ISO/IEC 27000-series (also known as the 'ISMS Family of Standards' or 'ISO27k' for short) comprises information security standards published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The series provides best practice recommendations on information security management, risks and controls within the context of an overall Information Security Management System (ISMS), similar in design to management systems for quality assurance (the ISO 9000 series) and environmental protection (the ISO 14000 series).

The series is deliberately broad in scope, covering more than just privacy, confidentiality and IT or technical security issues. It is applicable to organizations of all shapes and sizes. All organizations are encouraged to assess their information security risks, then implement appropriate information security controls according to their needs, using the guidance and suggestions where relevant.

Given the dynamic nature of information security, the ISMS concept incorporates continuous feedback and improvement activities, summarized by Deming's "plan-do-check-act" approach, that seek to address changes in the threats, vulnerabilities or impacts of information security incidents.

The standards are the product of ISO/IEC JTC1 (Joint Technical Committee 1) SC27 (Sub Committee 27), an international body that meets in person twice a year. At present, nine of the standards in the series are publicly available while several more are under development.
Published standards
ISO/IEC 27000 - Information security management systems - Overview and vocabulary [1]
ISO/IEC 27001 - Information security management systems - Requirements
ISO/IEC 27002 - Code of practice for information security management
ISO/IEC 27003 - Information security management system implementation guidance
ISO/IEC 27004 - Information security management - Measurement
ISO/IEC 27005 - Information security risk management
ISO/IEC 27006 - Requirements for bodies providing audit and certification of information security management systems
ISO/IEC 27011 - Information security management guidelines for telecommunications organizations based on ISO/IEC 27002
ISO 27799 - Information security management in health using ISO/IEC 27002 [standard produced by the Health Informatics group within ISO, independently of ISO/IEC JTC1/SC27]
In preparation
ISO/IEC 27007 - Guidelines for information security management systems auditing (focused on the management system)
ISO/IEC 27008 - Guidance for auditors on ISMS controls (focused on the information security controls)
ISO/IEC 27013 - Guideline on the integrated implementation of ISO/IEC 20000-1 and ISO/IEC 27001
ISO/IEC 27014 - Information security governance framework
ISO/IEC 27015 - Information security management guidelines for the finance and insurance sectors
ISO/IEC 27031 - Guideline for ICT readiness for business continuity (essentially the ICT continuity component within business continuity management)
ISO/IEC 27032 - Guideline for cybersecurity (essentially, 'being a good neighbor' on the Internet)
ISO/IEC 27033 - IT network security, a multi-part standard based on ISO/IEC 18028:2006
ISO/IEC 27034 - Guideline for application security
ISO/IEC 27035 - Security incident management
ISO/IEC 27036 - Guidelines for security of outsourcing
ISO/IEC 27037 - Guidelines for identification, collection and/or acquisition and preservation of digital evidence
See also
BS 7799, the original British Standard from which ISO/IEC 17799, ISO/IEC 27002 and ISO/IEC 27001 were derived
Document management system
Sarbanes-Oxley Act
Standard of Good Practice published by the Information Security Forum
External links
The ISO 17799 Newsletter [1]
Open source software to support ISO 27000 processes [2]
References
[1] http://17799-news.the-hamster.com
[2] http://esis.sourceforge.net/
Threat
A threat is an act of coercion wherein an act is proposed to elicit a negative response. It is a communicated intent to inflict harm or loss on another person. It is a crime in many jurisdictions. Libertarians hold that a palpable, immediate, and direct threat of aggression, embodied in the initiation of an overt act, is equivalent to aggression itself, and that proportionate defensive force would be justified in response to such a threat, if a clear and present danger exists.[1]
Brazil
In Brazil, the crime of threatening someone, defined as a threat to cause unjust and grave harm, is punishable by a fine or three months to one year in prison, as described in the Brazilian Penal Code, article 147. Brazilian jurisprudence does not treat as a crime a threat proffered in a heated discussion.
Germany
The German Strafgesetzbuch (section 241) punishes the crime of threat with a prison term of up to one year or a fine.
United States
In the United States, federal law criminalizes certain true threats transmitted via the U.S. mail[2] or in interstate commerce. It also criminalizes threatening the government officials of the United States. Some U.S. states criminalize cyberbullying.
References
[1] http://mises.org/rothbard/Ethics/twelve.asp
[2] 18 U.S.C. 876 (http://www.law.cornell.edu/uscode/18/876.html)
Risk
Risk concerns the deviation of one or more results of one or more future events from their expected value. Technically, the value of those results may be positive or negative. However, general usage tends to focus only on potential harm that may arise from a future event, which may accrue either from incurring a cost ("downside risk [1]") or by failing to attain some benefit ("upside risk [1]").
Historical background
The term risk may be traced back to classical Greek rizikon (Greek ῥίζα, riza), meaning root, later used in Latin for "cliff". The term is used in Homer's Rhapsody M of the Odyssey ("Sirens, Scylla, Charybdis and the cattle of Helios"): Odysseus tried to save himself from Charybdis at the cliffs of Scylla, where his ship was destroyed by heavy seas sent by Zeus as a punishment for his crew's killing of the cattle of Helios (the god of the sun), by grabbing the roots of a wild fig tree.

For the sociologist Niklas Luhmann the term 'risk' is a neologism that appeared with the transition from traditional to modern society.[2] "In the Middle Ages the term risicum was used in highly specific contexts, above all sea trade and its ensuing legal problems of loss and damage."[2] [3] In the vernacular languages of the 16th century the words rischio and riezgo were used,[2] both derived from the Arabic word rizk, meaning 'to seek prosperity'. This was introduced to continental Europe through interaction with Middle Eastern and North African Arab traders. In the English language the term risk appeared only in the 17th century, and "seems to be imported from continental Europe."[2]

When the terminology of risk took ground, it replaced the older notion that thought "in terms of good and bad fortune."[2] Niklas Luhmann (1996) seeks to explain this transition: "Perhaps, this was simply a loss of plausibility of the old rhetorics of Fortuna as an allegorical figure of religious content and of prudentia as a (noble) virtue in the emerging commercial society."[4]

Scenario analysis matured during Cold War confrontations between major powers, notably the United States and the Soviet Union. It became widespread in insurance circles in the 1970s when major oil tanker disasters forced a more comprehensive foresight.
The scientific approach to risk entered finance in the 1960s with the advent of the capital asset pricing model and became increasingly important in the 1980s when financial derivatives proliferated. It reached general professions in the 1990s when the power of personal computing allowed for widespread data collection and number crunching. Governments are using it, for example, to set standards for environmental regulation, e.g. "pathway analysis" as practiced by the United States Environmental Protection Agency.
Definitions of risk
There are different definitions of risk for each of several applications. The widely inconsistent and ambiguous use of the word is one of several current criticisms of the methods to manage risk.[5] In one definition, "risks" are simply future issues that can be avoided or mitigated, rather than present problems that must be immediately addressed.[6]

In risk management, the term "hazard" is used to mean an event that could cause harm and the term "risk" is used to mean simply the probability of something happening. OHSAS (Occupational Health & Safety Advisory Services) defines risk as the product of the probability of a hazard resulting in an adverse event and the severity of the event.[7] Mathematically, risk is then often simply defined as:

Risk = (probability of the adverse event) × (severity of the event)

One of the first major uses of this concept was in the planning of the Delta Works in 1953, a flood protection program in the Netherlands, with the aid of the mathematician David van Dantzig.[8] The kind of risk analysis pioneered here has become common today in fields like nuclear power, aerospace and the chemical industry.

There are many formal methods used to assess or to "measure" risk, which many consider to be a critical factor in human decision making. Some of these quantitative definitions of risk are well-grounded in sound statistics theory. However, these measurements of risk rely on failure occurrence data, which may be sparse. This makes risk assessment difficult in hazardous industries such as nuclear energy, where failures are rare and the harmful consequences of failure are enormous. Such consequences often necessitate actions to reduce the probability of failure to infinitesimally small values, which are hard to measure and to corroborate with empirical evidence.
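The probability-times-severity definition can be sketched numerically. The two hazards and their figures below are hypothetical examples, not real data:

```python
# Minimal sketch of risk = probability x severity (OHSAS-style definition).
# The hazards and figures are hypothetical assumptions for illustration.
def risk(probability, severity):
    """Risk of an adverse event: its probability times its severity."""
    return probability * severity

# A frequent minor hazard versus a rare catastrophic one:
minor = risk(probability=0.10, severity=1_000)     # frequent, cheap
major = risk(probability=0.001, severity=500_000)  # rare, costly
# Despite its low probability, the rare event carries the larger risk,
# which is why hazardous industries drive failure probabilities toward zero.
```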
Often, the probability of a negative event is estimated by using the frequency of past similar events or by event-tree methods, but probabilities for rare failures may be difficult to estimate if an event tree cannot be formulated. Methods to calculate the cost of the loss of human life vary depending on the purpose of the calculation; specific methods include what people are willing to pay to insure against death.[9]

Financial risk is often defined as the unexpected variability or volatility of returns, and thus includes both potential worse-than-expected and better-than-expected returns. References to negative risk below should be read as applying to positive impacts or opportunity (e.g., for "loss" read "loss or gain") unless the context precludes this interpretation.

In statistics, risk is often mapped to the probability of some event seen as undesirable. Usually, the probability of that event and some assessment of its expected harm must be combined into a believable scenario (an outcome), which combines the set of risk, regret and reward probabilities into an expected value for that outcome. (See also Expected utility.) Thus, in statistical decision theory, the risk function of an estimator δ(x) for a parameter θ, calculated from some observables x, is defined as the expectation value of the loss function L:

R(θ, δ) = E_θ[L(θ, δ(x))]
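The decision-theoretic risk function R(θ, δ) = E_θ[L(θ, δ(x))] can be approximated by Monte Carlo simulation. The choices below (sample-mean estimator, squared-error loss, normal data model with known parameters) are illustrative assumptions:

```python
# Sketch: risk of an estimator as expected loss, R(theta, d) = E[L(theta, d(x))].
# Estimator = sample mean, loss = squared error, data model = normal; all of
# these are illustrative assumptions, not prescribed by the text.
import random

random.seed(0)
theta, sigma, n, trials = 5.0, 2.0, 10, 20_000

def estimator(xs):
    """d(x): the sample mean of the observations."""
    return sum(xs) / len(xs)

losses = []
for _ in range(trials):
    xs = [random.gauss(theta, sigma) for _ in range(n)]
    losses.append((estimator(xs) - theta) ** 2)   # squared-error loss

risk_estimate = sum(losses) / trials
# For this setup theory gives R = sigma^2 / n = 0.4, so the Monte Carlo
# estimate should land close to that value.
```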
In information security, a risk is written as an asset, the threats to the asset and the vulnerability that can be exploited by the threats to impact the asset. An example: our desktop computers (asset) can be compromised by malware (threat) entering the environment as an email attachment (vulnerability). The risk is then assessed as a function of three variables:

1. the probability that there is a threat
2. the probability that there are any vulnerabilities
3. the potential impact to the business.

The two probabilities are sometimes combined and are also known as the likelihood. If any of these variables approaches zero, the overall risk approaches zero.
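The three-variable assessment can be sketched directly. All probabilities and the impact figure below are hypothetical assumptions, not measurements:

```python
# Sketch of the three-variable information security risk assessment above.
# Probabilities and the impact figure are hypothetical assumptions.
def infosec_risk(p_threat, p_vulnerability, impact):
    """Risk of an asset: likelihood (threat probability x vulnerability
    probability) times the potential business impact."""
    likelihood = p_threat * p_vulnerability
    return likelihood * impact

# Desktop computers (asset) compromised by malware (threat) arriving as an
# email attachment (vulnerability):
r = infosec_risk(p_threat=0.8, p_vulnerability=0.3, impact=50_000)

# If any variable approaches zero, the overall risk approaches zero:
no_vuln = infosec_risk(p_threat=0.8, p_vulnerability=0.0, impact=50_000)
```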
Frank Knight, in Risk, Uncertainty and Profit (1921), famously established the distinction between the two notions:

... Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term "risk," as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different. ... The essential fact is that "risk" means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. ... It will appear that a measurable uncertainty, or "risk" proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We ... accordingly restrict the term "uncertainty" to cases of the non-quantitive type.
Thus, Knightian uncertainty is immeasurable, not possible to calculate, while in the Knightian sense risk is measurable.

Another distinction between risk and uncertainty is proposed in How to Measure Anything: Finding the Value of Intangibles in Business and The Failure of Risk Management: Why It's Broken and How to Fix It by Doug Hubbard:[10] [11]

Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years."
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs."

In this sense, Hubbard uses the terms so that one may have uncertainty without risk but not risk without uncertainty. We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If we bet money on the outcome of the contest, then we have a risk. In both cases there is more than one outcome. The measure of uncertainty refers only to the probabilities assigned to outcomes, while the measure of risk requires both probabilities for outcomes and losses quantified for outcomes.
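Hubbard's distinction can be sketched with the oil-well example: measuring uncertainty assigns probabilities to possibilities, while measuring risk also quantifies the losses. The 60% producing branch with zero loss is an assumed complement added for illustration:

```python
# Sketch of Hubbard's distinction. Figures follow the text's oil-well
# example; the 60% "produces" branch with zero loss is an assumption.
outcomes = [
    {"state": "well is dry",   "probability": 0.40, "loss": 12_000_000},
    {"state": "well produces", "probability": 0.60, "loss": 0},
]

# A measurement of uncertainty: the probabilities alone (summing to 1).
total_probability = sum(o["probability"] for o in outcomes)

# A measurement of risk: probabilities weighted by quantified losses.
expected_loss = sum(o["probability"] * o["loss"] for o in outcomes)
```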
Economic risk
Economic risks can be manifested in lower incomes or higher expenditures than expected. The causes can be many, for instance, the hike in the price for raw materials, the lapsing of deadlines for construction of a new operating facility, disruptions in a production process, emergence of a serious competitor on the market, the loss of key personnel, the change of a political regime, or natural disasters.[12]
In business
Means of assessing risk vary widely between professions. Indeed, they may define these professions; for example, a doctor manages medical risk, while a civil engineer manages risk of structural failure. A professional code of ethics is usually focused on risk assessment and mitigation (by the professional on behalf of client, public, society or life in general). In the workplace, incidental and inherent risks exist. Incidental risks are those that occur naturally in the business but are not part of the core of the business. Inherent risks have a negative effect on the operating profit of the business.
Risk-sensitive industries
Some industries manage risk in a highly quantified and numerate way. These include the nuclear power and aircraft industries, where the possible failure of a complex series of engineered systems could result in highly undesirable outcomes. The usual measure of risk for a class of events is then

R = P × C,

where P is the probability of the event and C its consequence. The total risk is then the sum of the individual class-risks. In the nuclear industry, consequence is often measured in terms of off-site radiological release, and this is often banded into five or six decade-wide bands. The risks are evaluated using fault tree/event tree techniques (see safety engineering). Where these risks are low, they are normally considered to be "Broadly Acceptable". A higher level of risk (typically up to 10 to 100 times what is considered Broadly Acceptable) has to be justified against the costs of reducing it further and the possible benefits that make it tolerable; these risks are described as "Tolerable if ALARP". Risks beyond this level are classified as
"Intolerable". The level of risk deemed Broadly Acceptable has been considered by regulatory bodies in various countries; an early attempt by UK government regulator and academic F. R. Farmer used the example of hill-walking and similar activities, which have definable risks that people appear to find acceptable. This resulted in the so-called Farmer Curve of acceptable probability of an event versus its consequence. The technique as a whole is usually referred to as Probabilistic Risk Assessment (PRA) (or Probabilistic Safety Assessment, PSA). See WASH-1400 for an example of this approach.

In finance

In finance, risk is the probability that an investment's actual return will be different than expected. This includes the possibility of losing some or all of the original investment. Some regard a calculation of the standard deviation of the historical returns or average returns of a specific investment as providing some historical measure of risk; see modern portfolio theory. Financial risk may be market-dependent, determined by numerous market factors, or operational, resulting from fraudulent behavior (e.g. Bernard Madoff). Recent studies suggest that testosterone level plays a major role in risk taking during financial decisions.[13] [14]

In finance, risk has no one definition, but some theorists, notably Ron Dembo, have defined quite general methods to assess risk as an expected after-the-fact level of regret. Such methods have been uniquely successful in limiting interest rate risk in financial markets. Financial markets are considered to be a proving ground for general methods of risk assessment. However, these methods are also hard to understand. The mathematical difficulties interfere with other social goods such as disclosure, valuation and transparency.
In particular, it is not always obvious whether such financial instruments are "hedging" (purchasing/selling a financial instrument specifically to reduce or cancel out the risk in another investment) or "speculation" (increasing measurable risk and exposing the investor to catastrophic loss in pursuit of very high windfalls that increase expected value). As regret measures rarely reflect actual human risk-aversion, it is difficult to determine if the outcomes of such transactions will be satisfactory. Risk seeking describes an individual whose utility function's second derivative is positive. Such an individual would willingly (indeed, would pay a premium to) assume all risk in the economy and is hence not likely to exist.

In financial markets, one may need to measure credit risk, information timing and source risk, probability model risk, and legal risk if there are regulatory or civil actions taken as a result of some "investor's regret". Knowing one's risk appetite in conjunction with one's financial well-being is most crucial.

A fundamental idea in finance is the relationship between risk and return (see modern portfolio theory). The greater the potential return one might seek, the greater the risk that one generally assumes. A free market reflects this principle in the pricing of an instrument: strong demand for a safer instrument drives its price higher (and its return proportionately lower), while weak demand for a riskier instrument drives its price lower (and its potential return thereby higher). "For example, a US Treasury bond is considered to be one of the safest investments and, when compared to a corporate bond, provides a lower rate of return. The reason for this is that a corporation is much more likely to go bankrupt than the U.S. government. Because the risk of investing in a corporate bond is higher, investors are offered a higher rate of return."

The most popular, and lately also the most vilified, risk measure is Value-at-Risk (VaR). There are different types of VaR: Long Term VaR, Marginal VaR, Factor VaR and Shock VaR.[15] The latter is used to measure risk during extreme market stress conditions.
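A common way to compute VaR, which the text does not spell out, is historical simulation: take the empirical distribution of past returns and read off a loss quantile. The sketch below uses made-up daily returns, not market data:

```python
# Sketch of historical-simulation Value-at-Risk (VaR): the loss level that
# is exceeded only with probability (1 - confidence). The daily returns
# below are made-up illustrative figures, not market data.
def historical_var(returns, confidence=0.95):
    losses = sorted(-r for r in returns)   # losses as positive numbers
    index = int(confidence * len(losses))  # quantile cut-off
    return losses[min(index, len(losses) - 1)]

daily_returns = [0.01, -0.02, 0.004, -0.035, 0.007, -0.01,
                 0.012, -0.005, 0.02, -0.015]
var_95 = historical_var(daily_returns, confidence=0.95)
# var_95 is the 95th-percentile loss of this sample: 0.035, i.e. a 3.5%
# one-day loss that historical data suggests is exceeded 5% of the time.
```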
In public works
In a peer reviewed study of risk in public works projects located in twenty nations on five continents, Flyvbjerg, Holm, and Buhl (2002, 2005) documented high risks for such ventures for both costs[16] and demand.[17] Actual costs of projects were typically higher than estimated costs; cost overruns of 50% were common, overruns above 100% not uncommon. Actual demand was often lower than estimated; demand shortfalls of 25% were common, of 50% not uncommon. Due to such cost and demand risks, cost-benefit analyses of public works projects have proved to be highly uncertain. The main causes of cost and demand risks were found to be optimism bias and strategic misrepresentation. Measures identified to mitigate this type of risk are better governance through incentive alignment and the use of reference class forecasting.[18]
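The overrun and shortfall percentages above are simple ratios against the estimate; the project figures in this sketch are hypothetical:

```python
# Sketch of the overrun/shortfall arithmetic used in the study's findings.
# Project figures are hypothetical illustrations.
def overrun(estimated, actual):
    """Deviation of actual from estimated, as a fraction of the estimate."""
    return (actual - estimated) / estimated

# A project estimated at 200 (million) delivered at 300 shows the 50%
# cost overrun the study found to be common:
cost_overrun = overrun(estimated=200, actual=300)

# A forecast of 40,000 daily riders with 30,000 actual is the common
# 25% demand shortfall:
demand_shortfall = overrun(estimated=40_000, actual=30_000)
```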
In human services
Huge ethical and political issues arise when human beings themselves are seen or treated as 'risks', or when the risk decision making of people who use human services might have an impact on that service. The experience of many people who rely on human services for support is that 'risk' is often used as a reason to prevent them from gaining further independence or fully accessing the community, and that these services are often unnecessarily risk averse.[19]
Risk in psychology
Regret
In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk aversion (preferring the status quo in case one becomes worse off).
Framing
Framing[20] is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality (our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because their probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents caused by drunk driving, partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident. Conversely, an extremely disturbing event (an attack by hijacking, or moral hazards) may be ignored in analysis despite the fact that it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies toward error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science. All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias: no group of people assessing risk is immune to "groupthink", the acceptance of obviously wrong answers simply because it is socially painful to disagree, especially where there are conflicts of interest. One effective way to mitigate framing problems in risk assessment or measurement (although some argue that risk cannot be measured, only assessed) is to raise others' fears or personal ideals by way of completeness.
Neurobiology of framing
Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been shown to take a more global perspective,[21] while greater left prefrontal activity relates to local or focal processing.[22] Based on the theory of leaky modules,[23] McElroy and Seta proposed that they could predictably alter the framing effect by selectively manipulating regional prefrontal activity with finger tapping or monaural listening.[24] The result was as expected: rightward tapping or listening had the effect of narrowing attention such that the frame was ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially because directed tapping or listening is easily done.
Risk in auditing
The audit risk model expresses the risk that an auditor provides an inappropriate opinion on a commercial entity's financial statements. It can be expressed analytically as:
AR = IR × CR × DR
where AR is audit risk, IR is inherent risk, CR is control risk and DR is detection risk.
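Since the model is a plain product of component risks, it can be sketched in a few lines. The component values below are illustrative placeholders, not figures from the text.

```python
# Audit risk model: AR = IR x CR x DR, each component a probability in [0, 1].
def audit_risk(inherent: float, control: float, detection: float) -> float:
    for r in (inherent, control, detection):
        if not 0.0 <= r <= 1.0:
            raise ValueError("each component risk must lie in [0, 1]")
    return inherent * control * detection

# Illustrative values: risky client (IR=0.8), moderate controls (CR=0.5),
# and substantive testing that misses 10% of misstatements (DR=0.10).
ar = audit_risk(inherent=0.8, control=0.5, detection=0.10)
```

In practice auditors often work the formula backwards: given an acceptable AR and assessed IR and CR, they solve DR = AR / (IR × CR) to decide how much substantive testing is needed.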
See also
Applied information economics
Adventure
Ambiguity
Ambiguity aversion
Benefit shortfall
Cindynics
Civil defense
Cost overrun
Credit risk
Crisis
Cultural Theory of risk
Early case assessment
Emergency
Ergonomics
Event chain methodology
Financial risk
Fuel price risk management
Hazard
Hazard prevention
Identity resolution
Inherent risk
Insurance industry
Interest rate risk
International Risk Governance Council
Investment risk
ISO 31000
ISO 28000
Legal risk
Life-critical system
Liquidity risk
List of books about risk
Loss aversion
Market risk
Megaprojects and risk
Operational risk
Optimism bias
Political risk
Preventive maintenance
Preventive medicine
Probabilistic risk assessment
Reference class forecasting
Reinvestment risk
Reputational risk
Risk analysis
Risk aversion
Riskbase
Risk factor (finance)
Risk homeostasis
Risk management
Risk register
Sampling risk
Security risk
Systemic risk
Uncertainty
Value at risk
Bibliography
Referred literature
Bent Flyvbjerg, 2006: "From Nobel Prize to Project Management: Getting Risks Right." Project Management Journal, vol. 37, no. 3, August, pp. 5-15. Available at the homepage of the author [26]
James Franklin, 2001: The Science of Conjecture: Evidence and Probability Before Pascal, Baltimore: Johns Hopkins University Press.
Niklas Luhmann, 1996: Modern Society Shocked by its Risks (= University of Hong Kong, Department of Sociology Occasional Papers 17), Hong Kong, available via HKU Scholars HUB [27]
Books
Historian David A. Moss's book When All Else Fails [28] explains the U.S. government's historical role as risk manager of last resort.
Peter L. Bernstein. Against the Gods. ISBN 0-471-29563-9. Risk explained, and its appreciation by man traced from earliest times through all the major figures of their ages in mathematical circles.
Rescher, Nicholas (1983). A Philosophical Introduction to the Theory of Risk Evaluation and Measurement. University Press of America.
Porteous, Bruce T.; Pradip Tapadar (December 2005). Economic Capital and Financial Risk Management for Financial Services Firms and Conglomerates. Palgrave Macmillan. ISBN 1-4039-3608-0.
Tom Kendrick (2003). Identifying and Managing Project Risk: Essential Tools for Failure-Proofing Your Project. AMACOM/American Management Association. ISBN 978-0814407615.
Flyvbjerg, Bent, Nils Bruzelius, and Werner Rothengatter, 2003. Megaprojects and Risk: An Anatomy of Ambition (Cambridge: Cambridge University Press). [29]
David Hillson (2007). Practical Project Risk Management: The Atom Methodology. Management Concepts. ISBN 978-1567262025.
Kim Heldman (2005). Project Manager's Spotlight on Risk Management. Jossey-Bass. ISBN 978-0782144116.
Dirk Proske (2008). Catalogue of Risks: Natural, Technical, Social and Health Risks. Springer. ISBN 978-3540795544.
Gardner, Dan, Risk: The Science and Politics of Fear [30], Random House, Inc., 2008. ISBN 0771032994.
Articles and papers
Clark, L., Manes, F., Antoun, N., Sahakian, B. J., & Robbins, T. W. (2003). "The contributions of lesion laterality and lesion volume to decision-making impairment following frontal lobe damage." Neuropsychologia, 41, 1474-1483.
Drake, R. A. (1985). "Decision making and risk taking: Neurological manipulation with a proposed consistency mediation." Contemporary Social Psychology, 11, 149-152.
Drake, R. A. (1985). "Lateral asymmetry of risky recommendations." Personality and Social Psychology Bulletin, 11, 409-417.
Gregory, Kent J., Bibbo, Giovanni and Pattison, John E. (2005). "A Standard Approach to Measurement Uncertainties for Scientists and Engineers in Medicine." Australasian Physical and Engineering Sciences in Medicine, 28(2):131-139.
Hansson, Sven Ove (2007). "Risk" [31], The Stanford Encyclopedia of Philosophy (Summer 2007 Edition), Edward N. Zalta (ed.), forthcoming [32].
Holton, Glyn A. (2004). "Defining Risk" [33], Financial Analysts Journal, 60(6), 19-25. A paper exploring the foundations of risk. (PDF file)
Knight, F. H. (1921). Risk, Uncertainty and Profit, Chicago: Houghton Mifflin Company. (Cited at: [34], I.I.26.)
Kruger, Daniel J., Wang, X. T., & Wilke, Andreas (2007). "Towards the development of an evolutionarily valid domain-specific risk-taking scale" [35], Evolutionary Psychology. (PDF file)
Metzner-Szigeth, A. (2009). "Contradictory Approaches? On Realism and Constructivism in the Social Sciences Research on Risk, Technology and the Environment." Futures, Vol. 41, No. 2, March 2009, pp. 156-170. (fulltext journal: [36]) (free preprint: [37])
Miller, L. (1985). "Cognitive risk taking after frontal or temporal lobectomy I. The synthesis of fragmented visual information." Neuropsychologia, 23, 359-369.
Miller, L., & Milner, B. (1985). "Cognitive risk taking after frontal or temporal lobectomy II. The synthesis of phonemic and semantic information." Neuropsychologia, 23, 371-379.
Neill, M., Allen, J., Woodhead, N., Reid, S., Irwin, L., Sanderson, H. (2008). "A Positive Approach to Risk Requires Person Centred Thinking." London, CSIP Personalisation Network, Department of Health. Available from: http://networks.csip.org.uk/Personalisation/Topics/Browse/Risk/ [Accessed 21 July 2008]
External links
The Wiktionary definition of risk
Risk [31] - the entry in the Stanford Encyclopedia of Philosophy
Risk Management magazine [38], a publication of the Risk and Insurance Management Society
Risk and Insurance [39]
StrategicRISK, a risk management journal [40]
"Risk preference and religiosity" [41], an article from the Institute for the Biocultural Study of Religion [42]
References
[1] http://pages.stern.nyu.edu/~adamodar/pdfiles/invphil/ch2.pdf
[2] Luhmann 1996:3
[3] James Franklin, 2001: The Science of Conjecture: Evidence and Probability Before Pascal, Baltimore: Johns Hopkins University Press, 274
[4] Luhmann 1996:4
[5] Douglas Hubbard, The Failure of Risk Management: Why It's Broken and How to Fix It, John Wiley & Sons, 2009
[6] E.g. "Risk is the unwanted subset of a set of uncertain outcomes." (Cornelius Keating)
[7] "Risk is a combination of the likelihood of an occurrence of a hazardous event or exposure(s) and the severity of injury or ill health that can be caused by the event or exposure(s)" (OHSAS 18001:2007).
[8] Wired Magazine, "Before the levees break" (http://www.wired.com/science/planetearth/magazine/17-01/ff_dutch_delta?currentPage=3), page 3
[9] Landsburg, Steven (2003-03-03). "Is your life worth $10 million?" (http://www.slate.com/id/2079475/). Everyday Economics (Slate). Retrieved 2008-03-17.
[10] Douglas Hubbard, How to Measure Anything: Finding the Value of Intangibles in Business, pg. 46, John Wiley & Sons, 2007
[11] Douglas Hubbard, The Failure of Risk Management: Why It's Broken and How to Fix It, John Wiley & Sons, 2009
[12] http://ssrn.com/abstract=1012812
[13] Sapienza P., Zingales L. and Maestripieri D. 2009. Gender differences in financial risk aversion and career choices are affected by testosterone. Proceedings of the National Academy of Sciences.
[14] Apicella C. L. et al. Testosterone and financial risk preferences. Evolution and Human Behavior, Vol. 29, Issue 6, 384-390. abstract (http://www.ehbonline.org/article/S1090-5138(08)00067-6/abstract)
[15] Value at risk
[16] http://flyvbjerg.plan.aau.dk/JAPAASPUBLISHED.pdf
[17] http://flyvbjerg.plan.aau.dk/Traffic91PRINTJAPA.pdf
[18] http://flyvbjerg.plan.aau.dk/0406DfT-UK%20OptBiasASPUBL.pdf
[19] "A person centred approach to risk" - Risk - Advice on Personalisation - CSIP Networks (http://networks.csip.org.uk/Personalisation/Topics/Browse/Risk/?parent=3151&child=3681)
[20] Amos Tversky / Daniel Kahneman, 1981. "The Framing of Decisions and the Psychology of Choice."
[21] Schatz, J., Craft, S., Koby, M., & DeBaun, M. R. (2004). Asymmetries in visual-spatial processing following childhood stroke. Neuropsychology, 18, 340-352.
[22] Volberg, G., & Hubner, R. (2004). On the role of response conflicts and stimulus position for hemispheric differences in global/local processing: An ERP study. Neuropsychologia, 42, 1805-1813.
[23] Drake, R. A. (2004). Selective potentiation of proximal processes: Neurobiological mechanisms for spread of activation. Medical Science Monitor, 10, 231-234.
[24] McElroy, T., & Seta, J. J. (2004). On the other hand am I rational? Hemisphere activation and the framing effect. Brain and Cognition, 55, 572-580.
[25] Flyvbjerg 2006
[26] http://flyvbjerg.plan.aau.dk/Publications2006/Nobel-PMJ2006.pdf
[27] http://hub.hku.hk/handle/123456789/38822
[28] http://www.hup.harvard.edu/catalog/MOSWHE.html
[29] http://books.google.com/books?vid=ISBN0521009464&id=RAV5P-50UjEC&printsec=frontcover
[30] http://books.google.com/books?id=5j_8xF8vUlAC&printsec=frontcover
[31] http://plato.stanford.edu/entries/risk/
[32] http://plato.stanford.edu/archives/sum2007/entries/risk/
[33] http://www.riskexpertise.com/papers/risk.pdf
[34] http://www.econlib.org/library/Knight/knRUP1.html
[35] http://www.epjournal.net/filestore/ep05555568.pdf
[36] http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V65-4TGS7JY-1&_user=10&_coverDate=04%2F30%2F2009&_rdoc=1&_fmt=high&_orig=search&_sort=d&_docanchor=&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=054fec1f03e9ec784596add85197d2a8
[37] http://egora.uni-muenster.de/ifs/personen/bindata/metznerszigeth_contradictory_approaches_preprint.PDF
[38] http://www.rmmag.com/
[39] http://www.riskandinsurance.com/
[40] http://www.strategicrisk.co.uk/
[41] http://ibcsr.org/index.php?option=com_content&view=article&id=149:risk-preference-and-religiosity&catid=25:research-news&Itemid=59
[42] http://ibcsr.org/index.php
1.0 Personnal
Authentication
Authentication (from Greek authentikos, "real, genuine", from authentes, "author") is the act of establishing or confirming something (or someone) as authentic, that is, that claims made by or about the subject are true ("authentification" is a French-language variant of this word). This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labeling claim to be, or assuring that a computer program is a trusted one.
Authentication methods
In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain famous person, or was produced in a certain place or period of history. There are two types of techniques for doing this.
The first is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist might use carbon dating to verify the age of an artifact, do a chemical analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos.
Attribute comparison may be vulnerable to forgery. In general, it relies on the fact that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, or that the amount of effort required to do so is considerably greater than the amount of money that can be gained by selling the forgery. In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art forger, forged the work of his father and provided a certificate for its provenance as well; see the article Jacques van Meegeren. Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.
The second type relies on documentation or other external affirmations. For example, the rules of evidence in criminal courts often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. External records have their own problems of forgery and perjury, and are also vulnerable to being separated from the artifact and lost.
Currency and other financial instruments commonly use the first type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for receivers to verify.
Consumer goods such as pharmaceuticals, perfume, and fashion clothing can use either type of authentication method to prevent counterfeit goods from taking advantage of a popular brand's reputation (damaging the brand owner's sales and reputation). A trademark is a legally protected marking or other identifying feature which aids consumers in the identification of genuine brand-name goods.
Product authentication
Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods such as electronics, music, apparel, and counterfeit medications have been sold as being legitimate. Efforts to control the supply chain and educate consumers to evaluate the packaging and labeling help ensure that authentic products are sold and used.
Information content
The authentication of information can pose special problems (especially man-in-the-middle attacks), and is often wrapped up with authenticating identity.
[Image: A security hologram label on an electronics box, used for authentication]
Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging - anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication.
Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors like:
A difficult-to-reproduce physical artifact, such as a seal, signature, watermark, special stationery, or fingerprint.
A shared secret, such as a passphrase, in the content of the message.
An electronic signature; public key infrastructure is often used to cryptographically guarantee that a message has been signed by the holder of a particular private key.
The opposite problem is the detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.
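The shared-secret factor mentioned above is, in modern systems, typically realized with a message authentication code rather than a passphrase embedded in the text. A minimal sketch using Python's standard-library HMAC support; the secret and messages are illustrative placeholders.

```python
# Shared-secret message authentication with HMAC-SHA256.
# Both parties hold `secret`; the tag proves the message came from
# (or was relayed by) someone who knows it, and that it was not altered.
import hashlib
import hmac

secret = b"shared-secret-established-out-of-band"  # placeholder value

def sign(message: bytes) -> str:
    """Compute the authentication tag for a message."""
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare; compare_digest resists timing attacks."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"wire transfer #1234")
assert verify(b"wire transfer #1234", tag)       # genuine message accepted
assert not verify(b"wire transfer #9999", tag)   # altered message rejected
```

Note that, unlike the public-key signature also mentioned above, an HMAC tag can be produced by anyone who holds the shared secret, so it authenticates membership in the secret-holding group rather than a single author.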
Factual verification
Determining the truth or factual accuracy of information in a message is generally considered a separate problem from authentication. A wide range of techniques, from detective work to fact checking in journalism, to scientific experiment might be employed.
Video authentication
With closed-circuit television (CCTV) cameras in place in many public places, it has become necessary to conduct video authentication to establish credibility when CCTV recordings are used to identify crime. CCTV is a visual assessment tool: visual assessment means having proper identifiable or descriptive information during or after an incident. These systems should not be used independently of other security measures, and their video recordings must be authenticated in order to be proven genuine when identifying an accident or crime.[1]
Two-factor authentication
When elements representing two factors are required for identification, the term two-factor authentication is applied; e.g., a bankcard (something the user has) and a PIN (something the user knows). Business networks may require users to provide a password (knowledge factor) and a random number from a security token (ownership factor). Access to a very high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence-factor elements) plus a PIN and a day code (knowledge-factor elements), but this is still a two-factor authentication.
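The password-plus-security-token case above can be sketched as follows, with an RFC 4226 HMAC-based one-time password (HOTP) standing in for the hardware token. All account names, secrets, and iteration counts here are illustrative assumptions, not a specific product's scheme.

```python
# Two-factor check: knowledge factor (salted password hash) plus
# ownership factor (one-time code from a counter-based token).
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password (the token's code)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF)
    return str(code % 10 ** digits).zfill(digits)

def two_factor_ok(password: str, stored_hash: bytes, salt: bytes,
                  submitted_code: str, token_secret: bytes, counter: int) -> bool:
    """Both factors must verify; constant-time compares throughout."""
    knows = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash)
    has = hmac.compare_digest(submitted_code, hotp(token_secret, counter))
    return knows and has

# Illustrative enrollment data (all values made up)
salt = b"per-user-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
secret = b"12345678901234567890"

assert two_factor_ok("correct horse", stored, salt, hotp(secret, 0), secret, 0)
assert not two_factor_ok("wrong pass", stored, salt, hotp(secret, 0), secret, 0)
```

A real deployment would also track and advance the token counter server-side and rate-limit failures, which this sketch omits.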
Strong authentication
The U.S. Government's National Information Assurance Glossary defines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information.
In litigation, electronic evidence such as computers, digital audio recordings, and analog and digital video recordings must be authenticated before being accepted into evidence, as tampering has become a problem. All electronic evidence should be proven genuine before being used in any legal proceeding.[4]
Access control
One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, then granting those privileges that may be authorized to that identity. Common examples of access control involving authentication include:
A captcha, a means of asserting that a user is a human being and not a computer program.
A computer program using a blind credential to authenticate to another program.
Entering a country with a passport.
Logging in to a computer.
Using a confirmation e-mail to verify ownership of an e-mail address.
Using an Internet banking system.
Withdrawing cash from an ATM.
In some cases, ease of access is balanced against the strictness of access checks. For example, the credit card network does not require a personal identification number for authentication of the claimed identity; and a small transaction usually does not even require a signature of the authenticated person for proof of authorization of the transaction. The security of the system is maintained by limiting distribution of credit card numbers, and by the threat of punishment for fraud.
Security experts argue that it is impossible to prove the identity of a computer user with absolute certainty. It is only possible to apply one or more tests which, if passed, have been previously declared to be sufficient to proceed. The problem is to determine which tests are sufficient, and many are inadequate. Any given test can be spoofed one way or another, with varying degrees of difficulty.
See also
Classification of Authentication
Athens access and identity management
Atomic Authorization
Authentication OSID
Authorization
Basic access authentication
Biometrics
CAPTCHA
Chip Authentication Program
Closed-loop authentication
Diameter (protocol)
Digital Identity
Encrypted key exchange (EKE)
EAP
Fingerprint Verification Competition
Geo-location
Global Trust Center
HMAC
Identification (information)
Identity Assurance Framework
Java Authentication and Authorization Service
Kerberos
Multi-factor authentication
Needham-Schroeder protocol
OpenID, an authentication method for the web
Point of Access for Providers of Information - the PAPI protocol
Public key cryptography
RADIUS
Recognition of human individuals
Secret sharing
Secure remote password protocol (SRP)
Secure Shell
Security printing
Tamper-evident
TCP Wrapper
Time-based authentication
Two-factor authentication
External links
"Fourth-Factor Authentication: Somebody You Know" [6], and the related episode 94 of Security Now [7]
General Information on Enterprise Authentication [8]
Password Management Best Practices [9]
Password Policy Guidelines [10]
References
[1] http://www.videoproductionprimeau.com/index.php?q=content/video-authentication
[2] Federal Financial Institutions Examination Council (2008). "Authentication in an Internet Banking Environment" (http://www.ffiec.gov/pdf/authentication_guidance.pdf). Retrieved 2009-12-31.
[3] The Register, UK; Dan Goodin; 30/3/08; "Get your German Interior Minister's fingerprint, here". Compared to other solutions, "It's basically like leaving the password to your computer everywhere you go, without you being able to control it anymore," one of the hackers comments. (http://www.theregister.co.uk/2008/03/30/german_interior_minister_fingerprint_appropriated)
[4] http://expertpages.com/news/CCTV_Video_Problems_and_Solutions.htm
[5] A mechanism for identity delegation at authentication level, N. Ahmed, C. Jensen - Identity and Privacy in the Internet Age - Springer 2009
[6] http://www.rsa.com/rsalabs/node.asp?id=3156
[7] http://www.grc.com/securitynow.htm
[8] http://www.authenticationworld.com/
[9] http://psynch.com/docs/password-management-best-practices.html
[10] http://psynch.com/docs/password-policy-guidelines.html
Authorization
Authorization
Authorization (also spelt Authorisation) is the function of specifying access rights to resources, which is related to information security and computer security in general and to access control in particular. More formally, "to authorize" is to define access policy. For example, human resources staff are normally authorized to access employee records, and this policy is usually formalized as access control rules in a computer system. During operation, the system uses the access control rules to decide whether access requests from (authenticated) consumers shall be granted or rejected. Resources include individual files' or items' data, computer programs, computer devices and functionality provided by computer applications. Examples of consumers are computer users, computer programs and other devices on the computer.
Overview
Access control in computer systems and networks relies on access policies. The access control process can be divided into two phases: 1) a policy definition phase, and 2) a policy enforcement phase. Authorization is the function of the policy definition phase, which precedes the policy enforcement phase in which access requests are granted or rejected based on the previously defined authorizations.
Most modern, multi-user operating systems include access control and thereby rely on authorization. Access control also makes use of authentication to verify the identity of consumers: when a consumer tries to access a resource, the access control process checks that the consumer has been authorized to use that resource. Authorization is the responsibility of an authority, such as a department manager, within the application domain, but is often delegated to a custodian such as a system administrator. Authorizations are expressed as access policies in some type of "policy definition application", e.g. in the form of an access control list or a capability, on the basis of the "principle of least privilege": consumers should only be authorized to access whatever they need to do their jobs. Older and single-user operating systems often had weak or nonexistent authentication and access control systems.
"Anonymous consumers" or "guests" are consumers that have not been required to authenticate. They often have limited authorization. On a distributed system, it is often desirable to grant access without requiring a unique identity. Familiar examples of access tokens include keys and tickets: they grant access without proving identity.
Trusted consumers that have been authenticated are often authorized unrestricted access to resources, while "partially trusted" consumers and guests will often have restricted authorization in order to protect resources against improper access and usage. The access policy in some operating systems grants all consumers full access to all resources by default; others do the opposite, insisting that the administrator explicitly authorize a consumer to use each resource.
Even when access is controlled through a combination of authentication and access control lists, the problem of maintaining the authorization data is not trivial, and it often represents as much administrative burden as managing authentication credentials. It is often necessary to change or remove a user's authorization: this is done by changing or deleting the corresponding access rules on the system. Using atomic authorization is an alternative to per-system authorization management, where a trusted third party securely distributes authorization information.
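The two phases described above can be sketched as a minimal access control list. The consumer and resource names are illustrative (echoing the human-resources example earlier in the article), and a real system would of course authenticate the consumer before enforcement.

```python
# Policy definition vs. policy enforcement with a minimal ACL.
acl: dict[str, set[str]] = {}  # resource -> set of authorized consumers

def authorize(consumer: str, resource: str) -> None:
    """Policy definition phase: an authority records an access rule."""
    acl.setdefault(resource, set()).add(consumer)

def revoke(consumer: str, resource: str) -> None:
    """Remove a consumer's authorization by deleting the rule."""
    acl.get(resource, set()).discard(consumer)

def request_access(consumer: str, resource: str) -> bool:
    """Policy enforcement phase: grant or reject an access request.
    Default-deny: unknown resources and consumers are rejected."""
    return consumer in acl.get(resource, set())

authorize("hr_staff", "employee_records")
assert request_access("hr_staff", "employee_records")       # granted
assert not request_access("guest", "employee_records")      # rejected
revoke("hr_staff", "employee_records")
assert not request_access("hr_staff", "employee_records")   # now rejected
```

The default-deny lookup corresponds to the stricter of the two default policies described above; a default-allow system would instead return True for resources with no recorded rules.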
Confusion
The term authorization is often used incorrectly in the sense of the policy enforcement phase function. This confusing interpretation can be traced back to the introduction of Cisco's AAA server; examples of it can be seen in RFC 2904 [1] and Cisco AAA [2]. However, the correct and fundamental meaning of authorization is not compatible with this usage of the term. For example, the fundamental security services of confidentiality, integrity and availability are defined in terms of authorization.[3] Confidentiality, for instance, is defined by the International Organization for Standardization (ISO) as "ensuring that information is accessible only to those authorized to have access", where authorization is a function of the policy definition phase. It would be absurd to interpret confidentiality as "ensuring that information is accessible only to those who are granted access when requested", because people who access systems, e.g. with stolen passwords, would then be "authorized".
It is common for logon screens to provide warnings like "Only authorized users may access this system", e.g. [4]. Incorrect usage of the term authorization would invalidate such warnings, because attackers with stolen passwords could claim that they were authorized. The confusion around authorization is so widespread that both interpretations (i.e. authorization both as policy definition and as policy enforcement) often appear within the same document, e.g. [5]. Examples of correct usage of the authorization concept include [6] and [7].
Related Interpretations
Public policy
In public policy, authorization is a feature of trusted systems used for security or social control.
Banking
In banking, an authorization is a hold placed on a customer's account when a purchase is made using a debit card or credit card.
Publishing
In publishing, public lectures and other freely available texts are sometimes published without the consent of the author. These are called unauthorized texts. An example is the 2002 book The Theory of Everything: The Origin and Fate of the Universe, which was collected from Stephen Hawking's lectures and published without his permission.
See also
Security engineering
Computer security
Authentication
Access control
Kerberos (protocol)
Operating system
Authorization OSID
Authorization hold
XACML
References
[1] J. Vollbrecht et al. AAA Authorization Framework. IETF, 2000. txt (http://www.ietf.org/rfc/rfc2904.txt).
[2] B. J. Caroll. Cisco Access Control Security: AAA Administration Services. Cisco Press, 2004.
[3] ISO 7498-2: Information Processing Systems - Open Systems Interconnection - Basic Reference Model - Part 2: Security Architecture. ISO/IEC, 1989.
[4] Access Warning Statements, University of California, Berkeley (http://technology.berkeley.edu/policy/warnings.html)
[5] Understanding SOA Security Design and Implementation. IBM Redbook, 2007. PDF (http://www.redbooks.ibm.com/redbooks/pdfs/sg247310.pdf)
[6] A. H. Karp. Authorization-Based Access Control for the Services Oriented Architecture. Proceedings of the Fourth International Conference on Creating, Connecting, and Collaborating through Computing (C5), 26-27 January 2006, Berkeley, CA, USA. PDF (http://www.hpl.hp.com/techreports/2006/HPL-2006-3.pdf)
[7] A. Jøsang, D. Gollmann, R. Au. A Method for Access Authorisation Through Delegation Networks. Proceedings of the Australasian Information Security Workshop (AISW'06), Hobart, January 2006. PDF (http://persons.unik.no/josang/papers/JGA2006-AISW.pdf)
Social engineering (security)
Social engineering is the act of manipulating people into performing actions or divulging confidential information, rather than by breaking in or using technical cracking techniques; essentially a fancier, more technical way of lying.[1] While similar to a confidence trick or simple fraud, the term typically applies to trickery or deception for the purpose of information gathering, fraud, or computer system access; in most cases the attacker never comes face-to-face with the victim. "Social engineering" as an act of psychological manipulation was popularized by hacker-turned-consultant Kevin Mitnick. The term had previously been associated with the social sciences, but its usage has caught on among computer professionals and is now a recognized term of art.
Pretexting
Pretexting is the act of creating and using an invented scenario (the pretext) to engage a targeted victim in a manner that increases the chance the victim will divulge information or perform actions that would be unlikely in ordinary circumstances. It is more than a simple lie, as it most often involves some prior research or setup and the use of prior information for impersonation (e.g., date of birth, Social Security number, last bill amount) to establish legitimacy in the mind of the target.[3] This technique can be used to trick a business into disclosing customer information, and is also used by private investigators to obtain telephone records, utility records, banking records and other information directly from junior company service representatives. The information can then be used to establish even greater legitimacy under tougher questioning with a manager, e.g., to make account changes, get specific balances, etc. Pretexting has also been observed as a law enforcement technique: a law officer may leverage the threat of an alleged infraction to detain a suspect for questioning and conduct a close inspection of a vehicle or premises.
Pretexting can also be used to impersonate co-workers, police, banks, tax authorities, insurance investigators, or any other individual who could have perceived authority or right-to-know in the mind of the targeted victim. The pretexter must simply prepare answers to questions that might be asked by the victim. In some cases all that is needed is a voice that sounds authoritative, an earnest tone, and an ability to think on one's feet.
Diversion theft
Diversion theft, also known as the "Corner Game"[4] or "Round the Corner Game", originated in the East End of London. In summary, diversion theft is a "con" exercised by professional thieves, normally against a transport or courier company. The objective is to persuade the persons responsible for a legitimate delivery that the consignment is requested elsewhere, hence "round the corner". With a load or consignment redirected, the thieves persuade the driver to unload it near to, or away from, the consignee's address, on the pretense that it is "going straight out" or "urgently required somewhere else". The "con" or deception has many different facets, which include social engineering techniques to persuade legitimate administrative or traffic personnel of a transport or courier company to issue instructions to the driver to redirect the consignment or load. Another variation of diversion theft is stationing a security van outside a bank on a Friday evening. Smartly dressed guards use the line "Night safe's out of order, Sir", and shopkeepers and others are gulled into depositing their takings into the van. They do of course obtain a receipt, but later this turns out to be worthless. A similar technique was used many years ago to steal a Steinway grand piano from a radio studio in London; "Come to overhaul the piano, guv" was the chat line. Nowadays ID would probably be asked for, but even that can be faked. The social engineering skills of these thieves are well rehearsed and extremely effective. Most companies do not prepare their staff for this type of deception.
Phishing
Phishing is a technique of fraudulently obtaining private information. Typically, the phisher sends an e-mail that appears to come from a legitimate business, such as a bank or credit card company, requesting "verification" of information and warning of some dire consequence if it is not provided. The e-mail usually contains a link to a fraudulent web page that seems legitimate, with company logos and content, and has a form requesting everything from a home address to an ATM card's PIN.
For example, 2003 saw the proliferation of a phishing scam in which users received e-mails supposedly from eBay claiming that the user's account was about to be suspended unless a link provided was clicked to update a credit card (information that the genuine eBay already had). Because it is relatively simple to make a web site resemble a legitimate organization's site by mimicking the HTML code, the scam counted on people being tricked into thinking they were being contacted by eBay and subsequently going to eBay's site to update their account information. By spamming large groups of people, the "phisher" counted on the e-mail being read by the percentage of recipients who already had credit card numbers legitimately on file with eBay and who might respond.
IVR or phone phishing
This technique uses a rogue interactive voice response (IVR) system to recreate a legitimate-sounding copy of a bank or other institution's IVR system. The victim is prompted (typically via a phishing e-mail) to call in to the "bank" via a (ideally toll-free) number provided in order to "verify" information. A typical system will reject log-ins continually, ensuring the victim enters PINs or passwords multiple times, often disclosing several different passwords. More advanced systems transfer the victim to the attacker posing as a customer service agent for further questioning.
One could even record the typical commands ("Press one to change your password, press two to speak to customer service"...) and play back the directions manually in real time, giving the appearance of being an IVR without the expense. Phone phishing is also called vishing.
Baiting
Baiting is like the real-world Trojan horse: it uses physical media and relies on the curiosity or greed of the victim.[5] In this attack, the attacker leaves a malware-infected floppy disk, CD-ROM, or USB flash drive in a location sure to be found (bathroom, elevator, sidewalk, parking lot), gives it a legitimate-looking and curiosity-piquing label, and simply waits for the victim to use the device. For example, an attacker might create a disk featuring a corporate logo, readily available from the target's web site, and write "Executive Salary Summary Q2 2010" on the front. The attacker would then leave the disk on the floor of an elevator or somewhere in the lobby of the targeted company. An unknowing employee might find it and insert the disk into a computer to satisfy their curiosity, or a Good Samaritan might find it and turn it in to the company. In either case, merely inserting the disk into a computer to see the contents would unknowingly install malware, likely giving an attacker unfettered access to the victim's PC and perhaps the targeted company's internal computer network. Unless computer controls block the infection, PCs set to "auto-run" inserted media may be compromised as soon as a rogue disk is inserted.
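The HTML mimicry behind e-mail phishing often leaves one detectable trace: a link whose visible text shows one URL while its underlying href points somewhere else. A rough sketch of such a check in Python, using only the standard library (the domains are invented for illustration):

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Flags <a> tags whose visible text looks like a URL but differs
    from the actual link target - a common phishing tell."""

    def __init__(self):
        super().__init__()
        self._href = None      # target of the <a> tag currently open
        self._text = []        # visible text collected inside it
        self.suspicious = []   # (displayed text, real target) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            # Flag links whose visible text claims to be a URL
            # but does not match where the link actually goes.
            if text.startswith("http") and text not in self._href:
                self.suspicious.append((text, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example/login">http://www.ebay.com/update</a>')
print(auditor.suspicious)
# [('http://www.ebay.com/update', 'http://evil.example/login')]
```

This catches only the crudest mismatch; real phishing filters also compare domains, punycode look-alikes, and redirect chains.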
Other types
Common confidence tricksters or fraudsters could also be considered "social engineers" in the wider sense, in that they deliberately deceive and manipulate people, exploiting human weaknesses to obtain personal benefit. They may, for example, use social engineering techniques as part of an IT fraud. A more recent type of social engineering involves spoofing or hacking the IDs of people who use popular e-mail services such as Yahoo!, Gmail, or Hotmail. Among the many motivations for deception are:
Phishing credit-card account numbers and their passwords.
Hacking private e-mails and chat histories, and manipulating them with common editing techniques before using them to extort money and create distrust among individuals.
Hacking the websites of companies or organizations to destroy their reputation.
Computer virus hoaxes.
Federal legislation
The Gramm-Leach-Bliley Act of 1999 (GLBA) is a U.S. federal law that specifically addresses pretexting of banking records as an illegal act punishable under federal statutes. When a business entity such as a private investigator, SIU insurance investigator, or adjuster conducts any type of deception, it falls under the authority of the Federal Trade Commission (FTC). This federal agency has the obligation and authority to ensure that consumers are not subjected to any unfair or deceptive business practices. Section 5 of the US Federal Trade Commission Act (FTCA) states, in part: "Whenever the Commission shall have reason to believe that any such person, partnership, or corporation has been or is using any unfair method of competition or unfair or deceptive act or practice in or affecting commerce, and if it shall appear to the Commission that a proceeding by it in respect thereof would be to the interest of the public, it shall issue and serve upon such person, partnership, or corporation a complaint stating its charges in that respect."
The statute states that when someone obtains any personal, non-public information from a financial institution or the consumer, their action is subject to the statute. It relates to the consumer's relationship with the financial institution. For example, a pretexter using false pretenses either to get a consumer's address from the consumer's bank, or to get a consumer to disclose the name of his or her bank, would be covered. The determining principle is that pretexting only occurs when information is obtained through false pretenses.
While the sale of cell telephone records has gained significant media attention, and telecommunications records are the focus of the two bills currently before the United States Senate, many other types of private records are being bought and sold in the public market. Alongside many advertisements for cell phone records, wireline records and the records associated with calling cards are advertised.
As individuals shift to VoIP telephones, it is safe to assume that those records will be offered for sale as well. Currently, it is legal to sell telephone records, but illegal to obtain them.[12]
Hewlett Packard
Patricia Dunn, former chairman of Hewlett-Packard, reported that the HP board hired a private investigation company to delve into who was responsible for leaks within the board. Dunn acknowledged that the company used the practice of pretexting to solicit the telephone records of board members and journalists. Chairman Dunn later apologized for this act and offered to step down from the board if it was desired by board members.[13] Unlike federal law, California law specifically forbids such pretexting. The four felony charges brought against Dunn were dismissed.[14]
In popular culture
In the film Hackers, the protagonist used pretexting when he asked a security guard for the telephone number to a TV station's modem while posing as an important executive.
In Jeffrey Deaver's book The Blue Nowhere, social engineering to obtain confidential information is one of the methods used by the killer, Phate, to get close to his victims.
In the movie Live Free or Die Hard, Justin Long's character pretexts that his father is dying of a heart attack to have a BMW Assist representative start what will become a stolen car.
In the movie Sneakers, one of the characters poses as a low-level security guard's superior in order to convince him that a security breach is just a false alarm.
In the movie The Thomas Crown Affair, one of the characters poses over the telephone as a museum guard's superior in order to move the guard away from his post.
In the James Bond movie Diamonds Are Forever, Bond gains entry to the Whyte laboratory, which has a then-state-of-the-art card-access lock system, by "tailgating": he simply waits for an employee to come to open the door, then, posing as a rookie at the lab, fakes inserting a non-existent card while the door is unlocked for him by the employee.
See also
Phishing
Confidence trick
Certified Social Engineering Prevention Specialist (CSEPS)
Media pranks, which often use similar tactics (though usually not for criminal purposes)
Physical information security
Vishing
SMiShing
Further reading
Boyington, Gregory (1990). Baa Baa Black Sheep. Bantam Books. ISBN 0-553-26350-1
Harley, David (1998). Re-Floating the Titanic: Dealing with Social Engineering Attacks [15]. EICAR Conference.
Laribee, Lena (June 2006). Development of methodical social engineering taxonomy project [16]. Master's thesis, Naval Postgraduate School.
Leyden, John (April 18, 2003). Office workers give away passwords for a cheap pen [17]. The Register. Retrieved 2004-09-09.
Long, Johnny (2008). No Tech Hacking: A Guide to Social Engineering, Dumpster Diving, and Shoulder Surfing. Syngress Publishing Inc. ISBN 978-1-59749-215-7
Mann, Ian (2008). Hacking the Human: Social Engineering Techniques and Security Countermeasures. Gower Publishing Ltd. ISBN 0-566-08773-1 or ISBN 978-0-566-08773-8
Mitnick, Kevin; Kasperavičius, Alexis (2004). CSEPS Course Workbook. Mitnick Security Publishing.
Mitnick, Kevin; Simon, William L.; Wozniak, Steve (2002). The Art of Deception: Controlling the Human Element of Security. Wiley. ISBN 0-471-23712-4 or ISBN 0-7645-4280-X
External links
Socialware.ru [18] - A major Runet community devoted to social engineering.
Spylabs on vimeo [19] - Video channel devoted to social engineering.
Social Engineering Fundamentals [20] - Securityfocus.com. Retrieved on August 3, 2009.
Social Engineering, the USB Way [21] - DarkReading.com. Retrieved on July 7, 2006.
Should Social Engineering be a part of Penetration Testing? [22] - Darknet.org.uk. Retrieved on August 3, 2009.
"Protecting Consumers' Phone Records" [23] - US Committee on Commerce, Science, and Transportation. Retrieved on February 8, 2006.
Plotkin, Hal. Memo to the Press: Pretexting is Already Illegal [24]. Retrieved on September 9, 2006.
Striptease for passwords [25] - MSNBC.MSN.com. Retrieved on November 1, 2007.
Social-Engineer.org [26] - social-engineer.org. Retrieved on September 16, 2009.
Social Engineering: Manipulating Caller-ID [27]
References
[1] Goodchild, Joan (January 11, 2010). "Social Engineering: The Basics" (http://www.csoonline.com/article/514063/Social_Engineering_The_Basics). csoonline. Retrieved 14 January 2010.
[2] Mitnick, K: "CSEPS Course Workbook" (2004), unit 3, Mitnick Security Publishing.
[3] "Pretexting: Your Personal Information Revealed" (http://www.ftc.gov/bcp/edu/pubs/consumer/credit/cre10.shtm), Federal Trade Commission
[4] http://trainforlife.co.uk/onlinecourses.php
[5] http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1
[6] Office workers give away passwords (http://www.theregister.co.uk/content/55/30324.html)
[7] Passwords revealed by sweet deal (http://news.bbc.co.uk/1/hi/technology/3639679.stm)
[8] Mitnick, K: "CSEPS Course Workbook" (2004), p. 4, Mitnick Security Publishing.
[9] http://www.wired.com/wired/archive/12.02/phreaks.html
[10] Restatement 2d of Torts 652C.
[11] Congress outlaws pretexting (http://arstechnica.com/news.ars/post/20061211-8395.html), Eric Bangeman, 12/11/2006, Ars Technica
[12] Mitnick, K (2002): "The Art of Deception", p. 103. Wiley Publishing Ltd: Indianapolis, Indiana; United States of America. ISBN 0-471-23712-4
[13] HP chairman: Use of pretexting 'embarrassing' (http://news.com.com/HP+chairman+Use+of+pretexting+embarrassing/2100-1014_3-6113715.html?tag=nefd.lede), Stephen Shankland, 2006-09-08, CNET News.com
[14] Calif. court drops charges against Dunn (http://news.cnet.com/Calif.-court-drops-charges-against-Dunn/2100-1014_3-6167187.html)
[15] http://smallbluegreenblog.files.wordpress.com/2010/04/eicar98.pdf
[16] http://faculty.nps.edu/ncrowe/oldstudents/laribeethesis.htm
[17] http://www.theregister.co.uk/2003/04/18/office_workers_give_away_passwords/
[18] http://www.socialware.ru/
[19] http://vimeo.com/spylabs/
[20] http://www.securityfocus.com/infocus/1527
[21] http://www.darkreading.com/document.asp?doc_id=95556&WT.svl=column1_1
[22] http://www.darknet.org.uk/2006/03/should-social-engineering-a-part-of-penetration-testing/
[23] http://www.epic.org/privacy/iei/sencomtest2806.html
[24] http://www.plotkin.com/blog-archives/2006/09/memo_to_the_pre.html
[25] http://www.msnbc.msn.com/id/21566341/
[26] http://www.social-engineer.org
[27] http://www.jocktoday.com/2010/02/08/social-engineering-manipulating-caller-id/
Security engineering
Security engineering is a specialized field of engineering that deals with the development of detailed engineering plans and designs for security features, controls and systems. It is similar to other systems engineering activities in that its primary motivation is to support the delivery of engineering solutions that satisfy pre-defined functional and user requirements, but with the added dimension of preventing misuse and malicious behavior. These constraints and restrictions are often asserted as a security policy.
In one form or another, security engineering has existed as an informal field of study for several centuries. For example, the fields of locksmithing and security printing have been around for many years. Due to recent catastrophic events, most notably 9/11, security engineering has become a rapidly growing field; a report completed in 2006 estimated that the global security industry was valued at US$150 billion.[1]
Security engineering involves aspects of social science, psychology (such as designing a system to 'fail well' instead of trying to eliminate all sources of error) and economics, as well as physics, chemistry, mathematics, architecture and landscaping.[2] Some of the techniques used, such as fault tree analysis, are derived from safety engineering. Other techniques, such as cryptography, were previously restricted to military applications. One of the pioneers of security engineering as a formal field of study is Ross Anderson.
Qualifications
Typical qualifications for a security engineer include:
Security+ (entry level)
Professional Engineer, Chartered Engineer, or Chartered Professional Engineer
CPP
PSP
CISSP
However, multiple qualifications, or several qualified persons working together, may provide a more complete solution.[3]
Security Stance
The two possible default positions on security matters are:
1. Default deny: "Everything not explicitly permitted is forbidden." This improves security at a cost in functionality, and is a good approach if you face many security threats. See secure computing for a discussion of computer security using this approach.
2. Default permit: "Everything not explicitly forbidden is permitted." This allows greater functionality by sacrificing security, and is only a good approach in an environment where security threats are non-existent or negligible. See computer insecurity for an example of the failure of this approach in the real world.
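The contrast between the two stances shows up most clearly for something that appears on neither list. A minimal sketch in Python (the service names are purely illustrative):

```python
# Explicit lists for each stance; the names are invented examples.
ALLOWED_SERVICES = {"https", "ssh"}    # permit list for default deny
FORBIDDEN_SERVICES = {"telnet"}        # deny list for default permit

def default_deny(service):
    # "Everything not explicitly permitted is forbidden."
    return service in ALLOWED_SERVICES

def default_permit(service):
    # "Everything not explicitly forbidden is permitted."
    return service not in FORBIDDEN_SERVICES

# An unknown service is the interesting case: the two stances disagree.
print(default_deny("ftp"))    # False: unlisted, so blocked
print(default_permit("ftp"))  # True: unlisted, so allowed
```

Under default deny, every new service requires an explicit decision before it works; under default permit, every new threat requires an explicit decision before it is blocked.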
Core Practices
Security Planning
Security Requirements Analysis
Security Architecture
Secure Coding
Security Testing
Security Operations and Maintenance
Economics of Security
Sub-fields
Physical security: deterring attackers from accessing a facility, resource, or information stored on physical media.
Information security: protecting data from unauthorized access, use, disclosure, destruction, modification, or disruption of access. See especially computer security.
Economics of security: the economic aspects of privacy and computer security.
Methodologies
Technological advances, principally in the field of computers, have now allowed the creation of far more complex systems, with new and complex security problems. Because modern systems cut across many areas of human endeavor, security engineers need to consider not only the mathematical and physical properties of systems but also attacks on the people who use and form parts of those systems, i.e. social engineering attacks. Secure systems have to resist not only technical attacks, but also coercion, fraud, and deception by confidence tricksters.
Web Applications
According to the Microsoft Developer Network, the patterns & practices of Security Engineering[4] consist of the following activities:
Security Objectives
Security Design Guidelines
Security Modeling
Security Architecture and Design Review
Security Code Review
Security Testing
Security Tuning
Security Deployment Review
These activities are designed to help meet security objectives in the software life cycle.
Physical
Understanding of a typical threat and the usual risks to people and property.
Understanding the incentives created both by the threat and the countermeasures.
Understanding risk and threat analysis methodology and the benefits of an empirical study of the physical security of a facility.
Understanding how to apply the methodology to buildings, critical infrastructure, ports, public transport and other facilities/compounds.
Overview of common physical and technological methods of protection and understanding their roles in deterrence, detection and mitigation.
[Image: Canadian Embassy in Washington, D.C., showing planters being used as vehicle barriers, and barriers and gates along the vehicle entrance]
Determining and prioritizing security needs and aligning them with the perceived threats and the available budget.
Target Hardening
Whatever the target, there are multiple ways of preventing penetration by unwanted or unauthorised persons. Methods include placing Jersey barriers, stairs or other sturdy obstacles outside tall or politically sensitive buildings to prevent car and truck bombings. Improved visitor management and some new electronic locks take advantage of technologies such as fingerprint scanning, iris or retinal scanning, and voiceprint identification to authenticate users.
Criticisms
Some criticize this field as not being a bona fide field of engineering, because its methodologies are less formal or excessively ad hoc compared to other fields, and many practitioners of security engineering have no engineering degree. Part of the problem lies in the fact that while conforming to positive requirements is well understood, conforming to negative requirements requires complex and indirect posturing to reach a closed-form solution. Some rigorous methods do exist to address these difficulties, but they are seldom used, partly because they are viewed as too old or too complex by many practitioners. As a result, many ad hoc approaches simply do not succeed.
See also
Computer related:
Authentication
Cloud engineering
Cryptography
Cryptanalysis
Computer insecurity
Data remanence
Defensive programming (secure coding)
Earthquake engineering
Electronic underground community
Explosion protection
Hacking
Information Systems Security Engineering
Password policy
Software cracking
Software Security Assurance
Secure computing
Security Patterns
Systems engineering
Trusted system
Economics of Security
Physical:
Access control
Access control vestibule
Authorization
Critical Infrastructure Protection
Environmental design (esp. CPTED)
Locksmithing
Physical Security
Secrecy
Security
Secure cryptoprocessor
Security through obscurity
Technical surveillance counter-measures
Misc. topics:
Deception
Fraud
Full disclosure
Security awareness
Security community
Steganography
Social engineering
Kerckhoffs' principle
Further reading
Ross Anderson (2001). Security Engineering [6]. Wiley. ISBN 0-471-38922-6.
Ross Anderson (2008). Security Engineering: A Guide to Building Dependable Distributed Systems. Wiley. ISBN 0-470-06852-3.
Ross Anderson (2001). "Why Information Security is Hard: An Economic Perspective" [7]
Bruce Schneier (1995). Applied Cryptography (2nd ed.). Wiley. ISBN 0-471-11709-9.
Bruce Schneier (2000). Secrets and Lies: Digital Security in a Networked World. Wiley. ISBN 0-471-25311-1.
David A. Wheeler (2003). "Secure Programming for Linux and Unix HOWTO" [8]. Linux Documentation Project. Retrieved 2005-12-19.
References
[1] "Data analytics, networked video lead trends for 2008" (http://www.sptnews.ca/index.php?option=com_content&task=view&id=798&Itemid=4). SP&T News (CLB MEDIA INC). Retrieved 2008-01-05.
[2] http://findarticles.com/p/articles/mi_m1216/is_n5_v181/ai_6730246
[3] http://www.asla.org/safespaces/pdf/design_brochure.pdf
[4] http://msdn2.microsoft.com/en-us/library/ms998404.aspx
[5] http://careers.state.gov/specialist/opportunities/seceng.html
[6] http://www.cl.cam.ac.uk/~rja14/book.html
[7] http://www.acsa-admin.org/2001/papers/110.pdf
[8] http://www.dwheeler.com/secure-programs
[9] http://channel9.msdn.com/wiki/default.aspx/SecurityWiki.SecurityEngineering
[10] http://msdn.com/SecurityEngineering
[11] http://msdn.microsoft.com/library/en-us/dnpag2/html/SecEngExplained.asp
[12] http://www.capitalprograms.sa.edu.au/a8_publish/modules/publish/content.asp?id=23343&navgrp=2557
2.0 Physical
Physical security
Physical security describes both measures that prevent or deter attackers from accessing a facility, resource, or information stored on physical media, and guidance on how to design structures to resist various hostile acts.[1] It can be as simple as a locked door or as elaborate as multiple layers of armed security guards and guardhouse placement. Physical security is not a modern phenomenon; it exists in order to deter persons from entering a physical facility. Historical examples of physical security include city walls, moats, etc. The key factor is that the technology used for physical security has changed over time. While past eras had no passive infrared (PIR) based technology, electronic access control systems, or video surveillance system (VSS) cameras, the essential methodology of physical security has not altered over time.
The goal is to convince potential attackers that the likely costs of attack exceed the value of making the attack. The initial layer of security for a campus, building, office, or physical space uses Crime Prevention Through Environmental Design to deter threats. Some of the most common examples are also the most basic - barbed wire, warning signs and fencing, concrete bollards, metal barriers, vehicle height-restrictors, site lighting and trenches.
Physical security
The next layer is mechanical and includes gates, doors, and locks. Key control of the locks becomes a problem with large user populations and any user turnover; keys quickly become unmanageable, forcing the adoption of electronic access control. Electronic access control easily manages large user populations, controlling for user lifecycles, times, dates, and individual access points. For example, a user's access rights could allow access from 0700 to 1900, Monday through Friday, and expire in 90 days.
Another form of access control (procedural) includes the use of policies, processes and procedures to manage ingress into the restricted area. An example of this is the deployment of security personnel conducting checks for authorized entry at predetermined points of entry. This form of access control is usually supplemented by the earlier forms of access control (i.e. mechanical and electronic access control), or simple devices such as physical passes. An additional sub-layer of mechanical/electronic access control protection is reached by integrating a key management system to manage the possession and usage of mechanical keys to locks or property within a building or campus.
The third layer is intrusion detection systems or alarms. Intrusion detection monitors for attacks. It is less a preventative measure and more of a response measure, although some would argue that it is a deterrent. Intrusion detection has a high incidence of false alarms. In many jurisdictions, law enforcement will not respond to alarms from intrusion detection systems.
The last layer is video monitoring systems. Security cameras can be a deterrent in many cases, but their real power comes from incident verification and historical analysis. For example, if alarms are being generated and there is a camera in place, the camera could be viewed to verify the alarms.
In instances when an attack has already occurred and a camera is in place at the point of attack, the recorded video can be reviewed. Although the term closed-circuit television (CCTV) is common, it is quickly becoming outdated as more video systems lose the closed circuit for signal transmission and are instead transmitting on computer networks. Advances in information technology are transforming video monitoring into video analysis. For instance, once an image is digitized it becomes data that sophisticated algorithms can act upon. As the speed and accuracy of automated analysis increase, the video system could move from a monitoring system to an intrusion detection system or access control system. It is not a stretch to imagine a video camera inputting data to a processor that outputs to a door lock: instead of using some kind of key, whether mechanical or electrical, a person's visage is the key. When actual design and implementation are considered, there are numerous types of security cameras that can be used for many different applications. One must analyze one's needs and choose accordingly.[3]
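The time- and date-bounded access rights described earlier (access from 0700 to 1900, Monday through Friday, expiring in 90 days) reduce to a simple rule that an electronic access control system evaluates on every badge read. A sketch in Python, with illustrative dates:

```python
from datetime import datetime, timedelta

# Hypothetical access rule mirroring the example in the text:
# valid 07:00-19:00, Monday through Friday, expiring 90 days after issue.
def access_granted(when, issued):
    if when > issued + timedelta(days=90):   # credential has expired
        return False
    if when.weekday() > 4:                   # 5 = Saturday, 6 = Sunday
        return False
    return 7 <= when.hour < 19               # within the daily window

issued = datetime(2010, 8, 2)                # a Monday
print(access_granted(datetime(2010, 8, 3, 9, 30), issued))   # True: Tuesday morning
print(access_granted(datetime(2010, 8, 7, 9, 30), issued))   # False: Saturday
print(access_granted(datetime(2010, 12, 1, 9, 30), issued))  # False: expired
```

A real system would attach such a rule to each credential and access point rather than hard-coding it, but the per-read evaluation is the same.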
Intertwined in these four layers are people. Guards have a role in all layers: in the first as patrols and at checkpoints; in the second to administer electronic access control; in the third to respond to alarms (the response force must be able to arrive on site in less time than it is expected the attacker will require to breach the barriers); and in the fourth to monitor and analyze video. Users also have a role, by questioning and reporting suspicious people. Identification systems aid in distinguishing known from unknown people. Often photo ID badges are used, frequently coupled to the electronic access control system, and visitors are often required to wear a visitor badge.
Examples
Many installations, serving a myriad of different purposes, have physical obstacles in place to deter intrusion. These can be high walls, barbed wire, glass mounted on top of walls, etc. The presence of PIR-based motion detectors is common in many places, as a means of noting intrusion into a physical installation. Moreover, VSS/CCTV cameras are becoming increasingly common as a means of identifying persons who intrude into physical locations. Businesses use a variety of options for physical security, including security guards, electric security fencing, cameras, motion detectors, and light beams.
ATMs (cash dispensers) are protected, not by making them invulnerable, but by spoiling the money inside when they are attacked. Money tainted with a dye could act as a flag to the money's unlawful acquisition.
[Image: Canadian Embassy in Washington, D.C., showing planters being used as vehicle barriers, and barriers and gates along the vehicle entrance]
Safes are rated in terms of the time in minutes that a skilled, well-equipped safe-breaker is expected to require to open the safe. These ratings are developed by highly skilled safe-breakers employed by insurance agencies, such as Underwriters Laboratories. In a properly designed system, either the time between inspections by a patrolling guard should be less than that time, or an alarm response force should be able to reach it in less than that time. Hiding the resources, or hiding the fact that resources are valuable, is also often a good idea, as it will reduce the exposure to opponents and cause further delays during an attack; but it should not be relied upon as a principal means of ensuring security (see security through obscurity and inside job). Not all aspects of physical security need be high tech. Even something as simple as a thick or pointy bush can add a layer of physical security to some premises, especially in a residential setting.
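The coverage rule for safes (either the patrol interval or the alarm response time must beat the safe's rating) is simple enough to state as a one-line check. A sketch with invented numbers:

```python
# Design rule from the text: a safe is adequately covered when a skilled
# attacker's expected opening time (the rating, in minutes) exceeds either
# the guard patrol interval or the alarm response time.
def safe_adequately_covered(rating_min, patrol_interval_min, response_min):
    return patrol_interval_min < rating_min or response_min < rating_min

print(safe_adequately_covered(30, 20, 45))  # True: patrols pass every 20 min
print(safe_adequately_covered(30, 60, 45))  # False: both exceed the 30 min rating
```

The same delay-versus-response comparison underlies the layered-security design discussed earlier: each barrier only needs to buy enough time for a response to arrive.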
See also
Access badge
Access control
Alarm
Alarm management
Bank vault
Biometrics
Burglar alarm
Castle
Closed-circuit television
Common Access Card
Computer security
Credential
Door security
Electronic lock
Electric fence
Fence
Fortification
Guard tour patrol system
ID Card
IP video surveillance
Keycards
Locksmithing
Lock picking
Logical security
Magnetic stripe card
Mifare
Optical turnstile
Photo identification
Physical Security Professional
Physical key management
Prison
Proximity card
Razor wire
Safe
Safe-cracking
Security
Security engineering
Security lighting
Security Operations Center
Security policy
Smart card
Surveillance
Swipe card
Wiegand effect
4.0 Systems
Computer security
Computer security is a branch of computer technology known as information security as applied to computers and networks. The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users. The term computer system security means the collective processes and mechanisms by which sensitive and valuable information and services are protected from publication, tampering or collapse by unauthorized activities or untrustworthy individuals and unplanned events respectively. The strategies and methodologies of computer security often differ from most other computer technologies because of its somewhat elusive objective of preventing unwanted computer behavior instead of enabling wanted computer behavior.
Security by design
The technologies of computer security are based on logic. As security is not necessarily the primary goal of most computer applications, designing a program with security in mind often imposes restrictions on that program's behavior. There are four approaches to security in computing, and sometimes a combination of them is used:
1. Trust all the software to abide by a security policy, while the software is not trustworthy (this is computer insecurity).
2. Trust all the software to abide by a security policy, and the software is validated as trustworthy (by tedious branch and path analysis, for example).
3. Trust no software, but enforce a security policy with mechanisms that are not trustworthy (again, this is computer insecurity).
4. Trust no software, but enforce a security policy with trustworthy hardware mechanisms.
Computers consist of software executing atop hardware, and a "computer system" is, by frank definition, a combination of hardware and software (and, arguably, firmware, should one choose to categorize it separately) that provides specific functionality, including an explicitly expressed or (as is more often the case) implicitly carried along security policy. Indeed, in the Department of Defense Trusted Computer System Evaluation Criteria (the TCSEC, or Orange Book), archaic though that may be, the inclusion of specially designed hardware features, such as tagged architectures and (to particularly address the "stack smashing" attacks of recent notoriety) restriction of executable text to specific memory regions and/or register groups, was a sine qua non of the higher evaluation classes, namely B2 and above.

Many systems have unintentionally resulted in the first possibility. Since approach two is expensive and non-deterministic, its use is very limited. Approaches one and three lead to failure. Because approach four is often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches two and four are often used in a layered architecture, with thin layers of two and thick layers of four.

There are various strategies and techniques used to design security systems; however, there are few, if any, effective strategies to enhance security after design. One technique enforces the principle of least privilege, where an entity has only the privileges that are needed for its function. That way, even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest.
Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical and that property is also amenable to mathematical analysis. Unsurprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to making modules secure.

The design should use "defense in depth", where more than one subsystem must be violated to compromise the integrity of the system and the information it holds. Defense in depth works when the breaching of one security measure does not provide a platform to facilitate subverting another. The cascading principle acknowledges that several low hurdles do not make a high hurdle, so cascading several weak mechanisms does not provide the safety of a single stronger mechanism.

Subsystems should default to secure settings and, wherever possible, should be designed to "fail secure" rather than "fail insecure" (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable, and free decision on the part of legitimate authorities to make it insecure.

In addition, security should not be an all-or-nothing issue. The designers and operators of systems should assume that security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks.
Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
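The "fail secure" and audit-trail principles can be sketched together in a few lines. This is a minimal illustration; the in-memory policy table and the function names are hypothetical, standing in for a real policy store and audit pipeline:

```python
import logging

# "Fail secure": any error during the permission check results in denial,
# and every decision is appended to an audit log.
logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("audit")

# Hypothetical in-memory policy; a real system would query a policy store.
POLICY = {("alice", "payroll"): True}

def is_allowed(user, resource):
    try:
        return POLICY[(user, resource)]
    except Exception:
        # Unknown user/resource, corrupt policy, lookup failure: deny.
        return False

def access(user, resource):
    allowed = is_allowed(user, resource)
    # Recording denials as well as grants preserves the full audit trail.
    audit.info("user=%s resource=%s allowed=%s", user, resource, allowed)
    return allowed

print(access("alice", "payroll"))    # True
print(access("mallory", "payroll"))  # False: the check fails secure
```

Note that the default outcome on any error path is denial; an attacker who corrupts the policy lookup gains nothing but a logged refusal.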
Security architecture
Security architecture can be defined as "the design artifacts that describe how the security controls (security countermeasures) are positioned, and how they relate to the overall information technology architecture. These controls serve the purpose to maintain the system's quality attributes, among them confidentiality, integrity, availability, accountability and assurance."[1] A security architecture is the plan that shows where security measures need to be placed. If the plan describes a specific solution, then, prior to building such a plan, one would make a risk analysis. If the plan describes a generic high-level design (reference architecture), then the plan should be based on a threat analysis.
Secure operating systems

Operating systems designed with security as a primary goal can attain verifiable certainty comparable to specifications for size, weight, and power. Secure operating systems designed this way are used primarily to protect national security information, military secrets, and the data of international financial institutions. These are very powerful security tools, and very few secure operating systems have been certified at the highest level (Orange Book A-1) to operate over the range of "Top Secret" to "unclassified" (including Honeywell SCOMP, USAF SACDIN, NSA Blacker, and Boeing MLS LAN).

The assurance of security depends not only on the soundness of the design strategy but also on the assurance of correctness of the implementation, and therefore there are degrees of security strength defined for COMPUSEC. The Common Criteria quantifies the security strength of products in terms of two components, security functionality and assurance level (such as EAL levels); these are specified in a Protection Profile for requirements and in a Security Target for product descriptions. None of these ultra-high-assurance secure general-purpose operating systems has been produced for decades or certified under the Common Criteria.

In US parlance, the term High Assurance usually suggests the system has the right security functions, implemented robustly enough to protect DoD and DoE classified information. Medium assurance suggests it can protect less valuable information, such as income tax information. Secure operating systems designed to meet medium robustness levels of security functionality and assurance have seen wider use within both government and commercial markets. Medium-robust systems may provide the same security functions as high-assurance secure operating systems but do so at a lower assurance level (such as Common Criteria levels EAL4 or EAL5); lower levels mean we can be less certain that the security functions are implemented flawlessly, and they are therefore less dependable. These systems are found in use on web servers, guards, database servers, and management hosts, and are used not only to protect the data stored on these systems but also to provide a high level of protection for network connections and routing services.
Secure coding
If the operating environment is not based on a secure operating system capable of maintaining a domain for its own execution, capable of protecting application code from malicious subversion, and capable of protecting the system from subverted code, then high degrees of security are understandably not possible. While such secure operating systems are possible and have been implemented, most commercial systems fall into a 'low security' category because they rely on features not supported by secure operating systems (like portability, et al.). In low-security operating environments, applications must be relied on to participate in their own protection. There are 'best effort' secure coding practices that can be followed to make an application more resistant to malicious subversion.

In commercial environments, the majority of software subversion vulnerabilities result from a few known kinds of coding defects. Common software defects include buffer overflows, format string vulnerabilities, integer overflow, and code/command injection. All of these are specific instances of a general class of attacks that exploit situations in which putative "data" actually contains implicit or explicit executable instructions. Some common languages, such as C and C++, are vulnerable to all of these defects (see Seacord, "Secure Coding in C and C++" [2]). Other languages, such as Java, are more resistant to some of these defects but are still prone to code/command injection and other software defects which facilitate subversion.

Recently another bad coding practice has come under scrutiny: dangling pointers. The first known exploit for this particular problem was presented in July 2007.
Before this publication the problem was known but considered to be academic and not practically exploitable.[3]

Unfortunately, there is no theoretical model of "secure coding" practices, nor is one practically achievable, insofar as the variety of mechanisms is too wide and the manners in which they can be exploited are too variegated. It is interesting to note, however, that such vulnerabilities often arise from archaic philosophies in which computers were assumed to be narrowly disseminated entities used by a chosen few, all of whom were likely highly educated, solidly trained academics with naught but the goodness of mankind in mind. Thus, it was considered quite harmless if, for (fictitious) example, a FORMAT string in a FORTRAN program could contain the J format specifier to mean "shut down system after printing." After all, who would use such a feature but a well-intentioned system programmer? It was simply beyond conception that software could be deployed in a destructive fashion.

It is worth noting that, in some languages, the distinction between code (ideally, read-only) and data (generally read/write) is blurred. In LISP, particularly, there is no distinction whatsoever between code and data, both taking the same form: an S-expression can be code, or data, or both, and the "user" of a LISP program who manages to insert an executable LAMBDA segment into putative "data" can achieve arbitrarily general and dangerous functionality. Even something as "modern" as Perl offers the eval() function, which enables one to generate Perl code and submit it to the interpreter, disguised as string data.
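Python exhibits the same hazard: its built-in eval() executes whatever expression string it is handed, so routing untrusted "data" into it hands over control, exactly the data-carrying-instructions class of attack described above. A minimal illustration, with the standard library's ast.literal_eval as the data-only alternative (the attacker string here is illustrative):

```python
import ast

# eval() treats its argument as code: "data" routed into it becomes
# executable, the essence of code injection.
user_input = "__import__('os').getcwd()"  # attacker-chosen "data"
print(eval(user_input))  # runs arbitrary code; here it calls into os

# ast.literal_eval accepts only literal structures (numbers, strings,
# lists, dicts, ...) and never performs calls, so code is rejected.
try:
    ast.literal_eval(user_input)
except ValueError:
    print("rejected: input is code, not a literal")

print(ast.literal_eval("[1, 2, 3]"))  # [1, 2, 3] -- plain data parses fine
```

The design lesson is the one the surrounding text draws for LISP and Perl: keep the code/data boundary explicit, and parse untrusted input with a tool that cannot execute it.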
Applications
Computer security is critical in almost any technology-driven industry which operates on computer systems. Computer security can also be referred to as computer safety. The issues of computer-based systems and addressing their countless vulnerabilities are an integral part of maintaining an operational industry.[4]
In aviation
The aviation industry is especially important when analyzing computer security, because the risks involved include human life, expensive equipment, cargo, and transportation infrastructure. Security can be compromised by hardware and software malpractice, human error, and faulty operating environments. Threats that exploit computer vulnerabilities can stem from sabotage, espionage, industrial competition, terrorist attack, mechanical malfunction, and human error.[5]

The consequences of a successful deliberate or inadvertent misuse of a computer system in the aviation industry range from loss of confidentiality to loss of system integrity, which may lead to more serious concerns such as data theft or loss, and network and air traffic control outages, which in turn can lead to airport closures, loss of aircraft, and loss of passenger life. Military systems that control munitions can pose an even greater risk. A successful attack does not need to be very high tech or well funded; a power outage at an airport alone can cause repercussions worldwide.[6]

One of the easiest, and arguably the most difficult to trace, security vulnerabilities is achievable by transmitting unauthorized communications over specific radio frequencies. These transmissions may spoof air traffic controllers or simply disrupt communications altogether. These incidents are very common, having altered flight courses of commercial aircraft and caused panic and confusion in the past. Controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore; beyond the radar's range, controllers must rely on periodic radio communications with a third party. Lightning, power fluctuations, surges, brown-outs, blown fuses, and various other power failures instantly disable all computer systems, since they are dependent on an electrical source.
Other accidental and intentional faults have caused significant disruption of safety-critical systems throughout the last few decades, and dependence on reliable communication and electrical power only jeopardizes computer safety.

Notable system accidents

In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data, and furthermore were able to penetrate connected networks of the National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some defense contractors, and other private sector organizations, by posing as a trusted Rome center user.[7] Now, a technique called ethical hack testing is used to remediate these issues.

Electromagnetic interference is another threat to computer safety: in 1989, a United States Air Force F-16 jet accidentally dropped a 230 kg bomb in West Georgia after unspecified interference caused the jet's computers to release it.[8] A similar telecommunications accident happened in 1994, when two UH-60 Black Hawk helicopters were destroyed by F-15 aircraft in Iraq because the IFF system's encryption system malfunctioned.
entities that own infrastructures that are critical to national security interests (the bill quotes John Brennan, the Assistant to the President for Homeland Security and Counterterrorism: "our nation's security and economic prosperity depend on the security, stability, and integrity of communications and information infrastructure that are largely privately-owned and globally-operated", and talks about the country's response to a "cyber-Katrina"[11]), increase public awareness on cybersecurity issues, and foster and fund cybersecurity research. One of the most controversial parts of the bill is Paragraph 315, which grants the President the right to "order the limitation or shutdown of Internet traffic to and from any compromised Federal Government or United States critical infrastructure information system or network".[11] The Electronic Frontier Foundation, an international non-profit digital rights advocacy and legal organization based in the United States, characterized the bill as promoting a "potentially dangerous approach that favors the dramatic over the sober response".[12]

International Cybercrime Reporting and Cooperation Act

On March 25, 2010, Representative Yvette Clarke (D-NY) introduced the "International Cybercrime Reporting and Cooperation Act - H.R. 4962" (full text [13]) in the House of Representatives; the bill, co-sponsored by seven other representatives (among whom only one Republican), was referred to three House committees.[14] The bill seeks to make sure that the administration keeps Congress informed on information infrastructure, cybercrime, and end-user protection worldwide. It also "directs the President to give priority for assistance to improve legal, judicial, and enforcement capabilities with respect to cybercrime to countries with low information and communications technology levels of development or utilization in their critical infrastructure, telecommunications systems, and financial industries"[14], as well as to develop an action plan and an annual compliance assessment for countries of "cyber concern".[14]

Protecting Cyberspace as a National Asset Act of 2010 ("Kill switch bill")

On June 19, 2010, United States Senator Joe Lieberman (I-CT) introduced a bill called "Protecting Cyberspace as a National Asset Act of 2010 - S. 3480" (full text in PDF [15]), which he co-wrote with Senator Susan Collins (R-ME) and Senator Thomas Carper (D-DE). If signed into law, this controversial bill, which the American media dubbed the "Kill switch bill", would grant the President emergency powers over the Internet. However, all three co-authors of the bill issued a statement claiming that instead, the bill "[narrowed] existing broad Presidential authority to take over telecommunications networks".[16]
Terminology
The following terms used in engineering secure systems are explained below.

Authentication techniques can be used to ensure that communication end-points are who they say they are.

Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.

Capability and access control list techniques can be used to ensure privilege separation and mandatory access control.

Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.

Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.

Firewalls can provide some protection from online intrusion.

A microkernel is a carefully crafted, deliberately small corpus of software that underlies the operating system per se and is used solely to provide very low-level, very precisely defined primitives upon which an operating system can be developed. A simple example with considerable didactic value is the early-1990s GEMSOS (Gemini Computers), which provided extremely low-level primitives, such as "segment" management, atop which an operating system could be built. The theory (in the case of "segments") was that, rather than have the operating system itself worry about mandatory access separation by means of military-style labeling, it is safer if a low-level, independently scrutinized module can be charged solely with the management of individually labeled segments, be they memory "segments", file system "segments", or executable text "segments". If software below the visibility of the operating system is (as in this case) charged with labeling, there is no theoretically viable means for a clever hacker to subvert the labeling scheme, since the operating system per se does not provide mechanisms for interfering with labeling: the operating system is, essentially, a client (an "application", arguably) atop the microkernel and, as such, subject to its restrictions.

Endpoint security software helps networks to prevent data theft and virus infection through portable storage devices, such as USB drives.

Some of the following items may belong to the computer insecurity article:

Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer, such as through an interactive logon screen, or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, and, more recently, smart cards and biometric systems.

Anti-virus software consists of computer programs that attempt to identify, thwart, and eliminate computer viruses and other malicious software (malware).

Applications with known security flaws should not be run. Either leave them turned off until they can be patched or otherwise fixed, or delete them and replace them with some other application. Publicly known flaws are the main entry used by worms to automatically break into a system and then spread to other systems connected to it.
The security website Secunia provides a search tool for unpatched known flaws in popular products.

Backups are a way of securing information; they are another copy of all the important computer files kept in another location. These files are kept on hard disks, CD-Rs, CD-RWs, and tapes. Suggested locations for backups are a fireproof, waterproof, and heatproof safe, or a separate, offsite location from that in which the original files are contained. Some individuals and companies also keep their backups in safe deposit boxes inside bank vaults. There is also a fourth option, which involves using one of the file hosting services that back up files over the Internet for both businesses and individuals.

Backups are also important for reasons other than security. Natural disasters, such as earthquakes, hurricanes, or tornadoes, may strike the building where the computer is located; the building can catch fire, or an explosion may occur. There needs to be a recent backup at an alternate secure location in case of such a disaster, and it is recommended that the alternate location be placed where the same disaster would not affect both locations. Examples of alternate disaster recovery sites being compromised by the same disaster that affected the primary site include having had a primary site in World Trade Center 1 and the recovery site in 7 World Trade Center, both of which were destroyed in the 9/11 attack, and having one's primary site and recovery site in the same coastal region, which leaves both vulnerable to hurricane damage (for example, a primary site in New Orleans and a recovery site in Jefferson Parish, both of which were hit by Hurricane Katrina in 2005). The backup media should be moved between the geographic sites in a secure manner, in order to prevent them from being stolen.
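The password-based authentication mentioned above is normally implemented by storing salted, deliberately slow hashes rather than the passwords themselves, so that a stolen credential database does not directly reveal passwords. A minimal sketch using only Python's standard library (the function names and round count are illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, rounds=100_000):
    """Derive a salted PBKDF2 hash; store (salt, digest), never the password."""
    salt = salt or os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify_password(password, salt, digest, rounds=100_000):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))    # False
```

The salt defeats precomputed-table attacks, the iteration count slows brute force, and hmac.compare_digest avoids leaking information through comparison timing.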
Encryption is used to protect the message from the eyes of others. It can be done in several ways: by switching the characters around, replacing characters with others, and even removing characters from the message. These have to be used in combination to make the encryption secure enough, that is to say, sufficiently difficult to crack. Cryptographic techniques involve transforming information, scrambling it so it becomes unreadable during transmission; the intended recipient can unscramble the message, but eavesdroppers cannot. Public key encryption is a refined and practical way of doing encryption. It allows, for example, anyone to write a message for a list of recipients, and only those recipients will be able to read that message.

Firewalls are systems which help protect computers and computer networks from attack and subsequent intrusion by restricting the network traffic which can pass through them, based on a set of system-administrator-defined rules.

Honey pots are computers that are either intentionally or unintentionally left vulnerable to attack by crackers. They can be used to catch crackers or to fix vulnerabilities.

Intrusion-detection systems can scan a network for people who are on the network but who should not be there, or who are doing things that they should not be doing, for example trying a lot of passwords to gain access to the network.

Pinging: the ping application can be used by potential crackers to find if an IP address is reachable. If a cracker finds a computer, they can try a port scan to detect and attack services on that computer.

Social engineering awareness keeps employees aware of the dangers of social engineering; having a policy in place to prevent social engineering can reduce successful breaches of the network and servers.

File integrity monitors are tools used to detect changes in the integrity of systems and files.
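The "replacing characters with others" idea can be illustrated with a toy Caesar shift. This is for intuition only: it is trivially breakable, and real systems use vetted ciphers such as AES.

```python
# Toy substitution cipher: shift each uppercase letter by `key` positions,
# wrapping around the alphabet. A negative key undoes a positive one.
def shift(text, key):
    return "".join(
        chr((ord(c) - 65 + key) % 26 + 65) if c.isupper() else c
        for c in text)

ciphertext = shift("ATTACK AT DAWN", 3)
print(ciphertext)             # DWWDFN DW GDZQ
print(shift(ciphertext, -3))  # ATTACK AT DAWN
```

Only 25 keys exist, so the scheme falls to exhaustive search in an instant; this is precisely why the text says such transformations must be combined, and in practice replaced by modern ciphers, to be "sufficiently difficult to crack".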
See also
Attack tree
Authentication
Authorization
CAPTCHA
CERT
Cloud computing security
Computer security model
Cryptographic hash function
Cryptography
Cyber security standards
Dancing pigs
Data security
Differentiated security
Ethical hack
Fault tolerance
Firewalls
Formal methods
Human-computer interaction (security)
Identity management
Information Leak Prevention
Internet privacy
ISO/IEC 15408
Network Security Toolkit
Network security
OWASP
Penetration test
Physical information security
Physical security
Presumed security
Proactive Cyber Defence
Sandbox (computer security)
Security Architecture
Separation of protection and security
References
Ross J. Anderson: Security Engineering: A Guide to Building Dependable Distributed Systems [6], ISBN 0-471-38922-6
Morrie Gasser: Building a Secure Computer System [17], 1988, ISBN 0-442-23022-2
Stephen Haag, Maeve Cummings, Donald McCubbrey, Alain Pinsonneault, Richard Donovan: Management Information Systems for the Information Age, ISBN 0-07-091120-7
E. Stewart Lee: Essays about Computer Security [18], Cambridge, 1999
Peter G. Neumann: Principled Assuredly Trustworthy Composable Architectures [19], 2004
Paul A. Karger, Roger R. Schell: Thirty Years Later: Lessons from the Multics Security Evaluation [20], IBM white paper
Bruce Schneier: Secrets & Lies: Digital Security in a Networked World, ISBN 0-471-25311-1
Robert C. Seacord: Secure Coding in C and C++, Addison-Wesley, September 2005, ISBN 0-321-33572-4
Clifford Stoll: The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, Pocket Books, ISBN 0-7434-1146-3
Angus Wong and Alan Yeung: Network Infrastructure Security [21], Springer, 2009
External links
Security advisories links [22] from the Open Directory Project
Top 5 Security No Brainers for Businesses [23] from Network World
References
[1] Definitions: IT Security Architecture (http://opensecurityarchitecture.com). SecurityArchitecture.org, January 2008
[2] http://www.cert.org/books/secure-coding
[3] New hacking technique exploits common programming error (http://searchsecurity.techtarget.com/originalContent/0,289142,sid14_gci1265116,00.html). SearchSecurity.com, July 2007
[4] J. C. Willemssen, "FAA Computer Security". GAO/T-AIMD-00-330. Presented at Committee on Science, House of Representatives, 2000.
[5] P. G. Neumann, "Computer Security in Aviation," presented at International Conference on Aviation Safety and Security in the 21st Century, White House Commission on Safety and Security, 1997.
[6] J. Zellan, Aviation Security. Hauppauge, NY: Nova Science, 2003, pp. 65-70.
[7] Information Security (http://www.fas.org/irp/gao/aim96084.htm). United States Department of Defense, 1986
[8] Air Force Bombs Georgia (http://catless.ncl.ac.uk/Risks/8.72.html). The Risks Digest, vol. 8, no. 72, May 1989
[9] http://www.opencongress.org/bill/111-s773/text
[10] Cybersecurity bill passes first hurdle (http://www.computerworld.com/s/article/9174065/Cybersecurity_bill_passes_first_hurdle), Computer World, March 24, 2010. Retrieved on June 26, 2010.
[11] Cybersecurity Act of 2009 (http://www.opencongress.org/bill/111-s773/text), OpenCongress.org, April 1, 2009. Retrieved on June 26, 2010.
[12] Federal Authority Over the Internet? The Cybersecurity Act of 2009 (http://www.eff.org/deeplinks/2009/04/cybersecurity-act), eff.org, April 10, 2009. Retrieved on June 26, 2010.
[13] http://www.opencongress.org/bill/111-h4962/text
[14] H.R.4962 - International Cybercrime Reporting and Cooperation Act (http://www.opencongress.org/bill/111-h4962/show), OpenCongress.org. Retrieved on June 26, 2010.
[15] http://hsgac.senate.gov/public/index.cfm?FuseAction=Files.View&FileStore_id=4ee63497-ca5b-4a4b-9bba-04b7f4cb0123
[16] Senators Say Cybersecurity Bill Has No 'Kill Switch' (http://www.informationweek.com/news/government/security/showArticle.jhtml?articleID=225701368&subSection=News), informationweek.com, June 24, 2010. Retrieved on June 25, 2010.
[17] http://cs.unomaha.edu/~stanw/gasserbook.pdf
[18] http://www.cl.cam.ac.uk/~mgk25/lee-essays.pdf
[19] http://www.csl.sri.com/neumann/chats4.pdf
[20] http://www.acsac.org/2002/papers/classic-multics.pdf
[21] http://www.springer.com/computer/communications/book/978-1-4419-0165-1
[22] http://www.dmoz.org/Computers/Security/Advisories_and_Patches/
[23] http://www.networkworld.com/community/node/59971
Access control
Access control is a system which enables an authority to control access to areas and resources in a given physical facility or computer-based information system. Within the field of physical security, an access control system is generally seen as the second layer in the security of a physical structure. Access control is, in reality, an everyday phenomenon: a lock on a car door is essentially a form of access control, as is a PIN on an ATM system at a bank. A bouncer standing in front of a nightclub is perhaps a more primitive mode of access control, given the evident lack of information technology involved. Access control is of prime importance when persons seek to secure important, confidential, or sensitive information and equipment. Item control, or electronic key management, is an area within (and possibly integrated with) an access control system which concerns the managing of possession and location of small assets or physical (mechanical) keys.
Physical access
Physical access by a person may be allowed depending on payment, authorization, etc. There may also be one-way traffic of people. These controls can be enforced by personnel such as a border guard, a doorman, or a ticket checker, or with a device such as a turnstile. There may be fences to prevent circumvention of this access control. An alternative to access control in the strict sense (physically controlling access itself) is a system of checking authorized presence; see e.g. Ticket controller (transportation). A variant is exit control, e.g. of a shop (checkout) or a country.
[Figure: Underground entrance to the New York City Subway system]

In physical security, the term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human (a guard, bouncer, or receptionist), through mechanical means such as locks and keys, or through technological means such as access control systems like the access control vestibule. Within these environments, physical key management may also be employed as a means of further managing and monitoring access to mechanically keyed areas or access to certain small assets. Physical access control is a matter of who, where, and when. An access control system determines who is allowed to enter or exit, where they are allowed to exit or enter, and when they are allowed to enter or exit. Historically, this was partially accomplished through keys and locks: when a door is locked, only someone with a key can enter through the door, depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates, do not provide records of the key used on any specific door, and the keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed. Electronic access control uses computers to solve the limitations of mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys. The electronic access control system grants access based on the credential presented. When access is granted, the door is unlocked for a predetermined time and the transaction is recorded. When access is refused, the door remains locked and the attempted access is recorded. The system will also monitor the door and raise an alarm if the door is forced open or held open too long after being unlocked.
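The grant/deny sequence just described can be sketched in a few lines (a toy model with an invented credential table and timings, not any vendor's firmware):

```python
# Toy model of the electronic access control sequence described above.
# The credential table, unlock time, and log format are all invented.
AUTHORIZED = {"1001", "1002"}        # badge numbers allowed through this door
UNLOCK_SECONDS = 5                   # predetermined unlock time
transactions = []                    # every attempt is recorded, granted or not


def present_credential(badge, now):
    """Grant or deny access; return (granted, relock_time)."""
    granted = badge in AUTHORIZED
    transactions.append((badge, granted, now))
    # On a grant the door unlocks for a fixed window; on a deny it stays locked.
    return granted, (now + UNLOCK_SECONDS) if granted else now


def door_alarm(door_open, now, relock_time):
    """Alarm when the door is open past its window (forced or held open)."""
    return door_open and now > relock_time
```

Note that denied attempts are logged just like granted ones, matching the requirement that the system record every transaction.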
Credential
A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being, that enables an individual to access a given physical facility or computer-based information system. Typically, credentials can be something you know (such as a number or PIN), something you have (such as an access badge), something you are (such as a biometric feature), or some combination of these items. The typical credential is an access card, key fob, or other key. There are many card technologies, including magnetic stripe, bar code, Wiegand, 125 kHz proximity, 26-bit card-swipe, contact smart cards, and contactless smart cards. Also available are key fobs, which are more compact than ID cards and attach to a key ring. Typical biometric technologies include fingerprint, facial recognition, iris recognition, retinal scan, voice, and hand geometry.
When the button is pushed or the motion detector detects motion at the door, the door alarm is temporarily ignored while the door is opened. Exiting a door without having to electrically unlock the door is called mechanical free egress. This is an important safety feature. In cases where the lock must be electrically unlocked on exit, the request-to-exit device also unlocks the door.
Types of readers
Access control readers may be classified by the functions they are able to perform:

Basic (non-intelligent) readers: simply read a card number or PIN and forward it to a control panel. In the case of biometric identification, such readers output the ID number of a user. Typically the Wiegand protocol is used for transmitting data to the control panel, but other options such as RS-232, RS-485 and Clock/Data are not uncommon. This is the most popular type of access control reader. Examples of such readers are RF Tiny by RFLOGICS, ProxPoint by HID, and P300 by Farpointe Data.
Semi-intelligent readers: have all the inputs and outputs necessary to control door hardware (lock, door contact, exit button), but do not make any access decisions. When a user presents a card or enters a PIN, the reader sends the information to the main controller and waits for its response. If the connection to the main controller is interrupted, such readers stop working or function in a degraded mode. Usually semi-intelligent readers are connected to a control panel via an RS-485 bus. Examples of such readers are InfoProx Lite IPL200 by CEM Systems and AP-510 by Apollo. [Figure: Access control door wiring when using intelligent readers]

Intelligent readers: have all the inputs and outputs necessary to control door hardware, and also have the memory and processing power necessary to make access decisions independently. Like semi-intelligent readers, they are connected to a control panel via an RS-485 bus. The control panel sends configuration updates and retrieves events from the readers. Examples of such readers are InfoProx IPO200 by CEM Systems and AP-500 by Apollo. There is also a new generation of intelligent readers referred to as "IP readers". Systems with IP readers usually do not have traditional control panels, and readers communicate directly with a PC that acts as a host. Examples of such readers are PowerNet IP Reader by Isonas Security Systems, ID08 by Solus (which has a built-in web service to make it user-friendly), the Edge ER40 reader by HID Global, LogLock and UNiLOCK by ASPiSYS Ltd, and the BioEntry Plus reader by Suprema Inc.
Some readers may have additional features such as an LCD and function buttons for data collection purposes (e.g. clock-in/clock-out events for attendance reports), camera/speaker/microphone for intercom, and smart card read/write support. Access control readers may also be classified by the type of identification technology.
Advantages:
- The RS-485 standard allows long cable runs, up to 4000 feet (1200 m).
- Relatively short response time. The maximum number of devices on an RS-485 line is limited to 32, which means that the host can frequently request status updates from each device and display events almost in real time.
- High reliability and security, as the communication line is not shared with any other systems.

Disadvantages:
- RS-485 does not allow star-type wiring unless splitters are used.
- RS-485 is not well suited for transferring large amounts of data (i.e. configuration and users). The highest possible throughput is 115.2 kbit/s, but in most systems it is downgraded to 56.2 kbit/s or less to increase reliability.
- RS-485 does not allow the host PC to communicate with several controllers connected to the same port simultaneously. Therefore, in large systems, transfers of configuration and users to controllers may take a very long time and interfere with normal operations.
- Controllers cannot initiate communication in case of an alarm. The host PC acts as a master on the RS-485 communication line, and controllers have to wait until they are polled.
- Special serial switches are required in order to build a redundant host PC setup.
- Separate RS-485 lines have to be installed, instead of using an already existing network infrastructure. Cable that meets RS-485 standards is significantly more expensive than regular Category 5 UTP network cable.
- Operation of the system is highly dependent on the host PC. In case the host PC fails, events from controllers are not retrieved, and functions that require interaction between controllers (i.e. anti-passback) stop working.

2. Serial main and sub-controllers. All door hardware is connected to sub-controllers (a.k.a. door controllers or door interfaces). Sub-controllers usually do not make access decisions, and instead forward all requests to the main controllers. Main controllers usually support from 16 to 32 sub-controllers.
Advantages:
- The work load on the host PC is significantly reduced, because it only needs to communicate with a few main controllers.
- The overall cost of the system is lower, as sub-controllers are usually simple and inexpensive devices.
- All other advantages listed in the first paragraph apply.

Disadvantages:
- Operation of the system is highly dependent on main controllers. In case one of the main controllers fails, events from its sub-controllers are not retrieved, and functions that require interaction between sub-controllers (i.e. anti-passback) stop working.
- Some models of sub-controllers (usually lower cost) have no memory or processing power to make access decisions independently. If the main controller fails, sub-controllers change to degraded mode, in which doors are either completely locked or unlocked, and no events are recorded. Such sub-controllers should be avoided, or used only in areas that do not require high security.
- Main controllers tend to be expensive, therefore this topology is not very well suited for systems with multiple remote locations that have only a few doors.
- All other RS-485-related disadvantages listed in the first paragraph apply.

3. Serial main controllers & intelligent readers. All door hardware is connected directly to intelligent or semi-intelligent readers. Readers usually do not make access decisions, and forward all requests to the main controller. Only if the connection to the main controller is unavailable do the readers use their internal database to make access decisions and record events. Semi-intelligent readers that have no database and cannot function without the main controller should be used only in areas that do not require high security. Main controllers usually support from 16 to 64 readers. All advantages and disadvantages are the same as the ones listed in the second paragraph. [Figure: Access control system using serial main controller and intelligent readers]

4. Serial controllers with terminal servers. In spite of the rapid development and increasing use of computer networks, access control manufacturers remained conservative and did not rush to introduce network-enabled products.
When pressed for solutions with network connectivity, many chose the option requiring less effort: addition of a terminal server, a device that converts serial data for transmission via LAN or WAN. Terminal servers manufactured by Lantronix and Tibbo Technology are popular in the security industry.

Advantages:
- Allows utilizing existing network infrastructure for connecting separate segments of the system.
- Provides a convenient solution in cases where installation of an RS-485 line would be difficult or impossible.

Disadvantages:
- Increases complexity of the system.
- Creates additional work for installers: usually terminal servers have to be configured independently, not through the interface of the access control software.
- The serial communication link between the controller and the terminal server acts as a bottleneck: even though the data between the host PC and the terminal server travels at 10/100/1000 Mbit/s network speed, it then slows down to the serial speed of 115.2 kbit/s or less. There are also additional delays introduced in the process of conversion between serial and network data.
- All RS-485-related advantages and disadvantages also apply.

[Figure: Access control systems using serial controllers and terminal servers]
5. Network-enabled main controllers. The topology is nearly the same as described in the second and third paragraphs. The same advantages and disadvantages apply, but the on-board network interface offers a couple of valuable improvements. Transmission of configuration and users to the main controllers is faster and may be done in parallel. This makes the system more responsive and does not interrupt normal operations. No special hardware is required in order to achieve a redundant host PC setup: in case the primary host PC fails, the secondary host PC may start polling network controllers. The disadvantages introduced by terminal servers (listed in the fourth paragraph) are also eliminated.
6. IP controllers. Controllers are connected to a host PC via Ethernet LAN or WAN.

Advantages:
- An existing network infrastructure is fully utilized; there is no need to install new communication lines.
- There are no limitations regarding the number of controllers (as opposed to 32 per line in the case of RS-485).
- Special RS-485 installation, termination, grounding and troubleshooting knowledge is not required.
- Communication with controllers may be done at full network speed, which is important if transferring a lot of data (databases with thousands of users, possibly including biometric records).
- In case of an alarm, controllers may initiate connection to the host PC. This ability is important in large systems, because it reduces network traffic caused by unnecessary polling.
- Simplifies installation of systems consisting of multiple sites separated by large distances. A basic Internet link is sufficient to establish connections to remote locations.
- A wide selection of standard network equipment is available to provide connectivity in different situations (fiber, wireless, VPN, dual path, PoE).

Disadvantages:
- The system becomes susceptible to network-related problems, such as delays in case of heavy traffic and network equipment failures.
- Access controllers and workstations may become accessible to hackers if the network of the organization is not well protected. This threat may be eliminated by physically separating the access control network from the network of the organization. It should also be noted that most IP controllers utilize either the Linux platform or proprietary operating systems, which makes them more difficult to hack. Industry-standard data encryption is also used.
- The maximum distance from a hub or a switch to the controller is 100 meters (330 ft).
- Operation of the system is dependent on the host PC. In case the host PC fails, events from controllers are not retrieved, and functions that require interaction between controllers (i.e. anti-passback) stop working. Some controllers, however, have a peer-to-peer communication option in order to reduce dependency on the host PC.

[Figure: Access control system using IP controllers]
7. IP readers. Readers are connected to a host PC via Ethernet LAN or WAN.

Advantages:
- Most IP readers are PoE capable. This feature makes it very easy to provide battery-backed power to the entire system, including the locks and various types of detectors (if used).
- IP readers eliminate the need for controller enclosures.
- There is no wasted capacity when using IP readers (e.g. a 4-door controller would have 25% unused capacity if it were controlling only 3 doors).
- IP reader systems scale easily: there is no need to install new main or sub-controllers.
- Failure of one IP reader does not affect any other readers in the system.

Disadvantages:
- In order to be used in high-security areas, IP readers require special input/output modules to eliminate the possibility of intrusion by accessing lock and/or exit button wiring. Not all IP reader manufacturers have such modules available.
- Being more sophisticated than basic readers, IP readers are also more expensive and sensitive; therefore they should not be installed outdoors in areas with harsh weather conditions or a high possibility of vandalism.
- The variety of IP readers in terms of identification technologies and read range is much lower than that of basic readers.

The advantages and disadvantages of IP controllers apply to IP readers as well.
Security risks
The most common security risk of intrusion through an access control system is simply following a legitimate user through a door; often the legitimate user will hold the door for the intruder. This risk can be minimized through security awareness training of the user population, or through more active means such as turnstiles. In very high security applications this risk is minimized by using a sally port, sometimes called a security vestibule or mantrap, where operator intervention is required, presumably to assure valid identification. The second most common risk is levering a door open. This is surprisingly simple and effective on most doors; the lever could be as small as a screwdriver or as big as a crowbar. Fully implemented access control systems include forced-door monitoring alarms. These vary in effectiveness, usually failing because of high false-positive alarm rates, poor database configuration, or lack of active intrusion monitoring.
[Figure: Access control door wiring when using intelligent readers and an IO module]
Similar to levering is crashing through cheap partition walls: in shared tenant spaces, the demising wall is a vulnerability. Along the same lines is breaking sidelights. Spoofing locking hardware is fairly simple, and more elegant than levering. A strong magnet can operate the solenoid that controls the bolts in electric locking hardware. Motor locks, more prevalent in Europe than in the US, are also susceptible to this attack using a donut-shaped magnet. It is also possible to manipulate the power to the lock, either by removing or adding current. Access cards themselves have proven vulnerable to sophisticated attacks. Enterprising hackers have built portable readers that capture the card number from a user's proximity card. The hacker simply walks by the user, reads the card, and then presents the number to a reader securing the door. This is possible because card numbers are sent in
the clear, with no encryption being used. Finally, most electric locking hardware still has mechanical keys as a failover. Mechanical key locks are vulnerable to bumping.

The need-to-know principle

The need-to-know principle can be enforced with user access controls and authorization procedures; its objective is to ensure that only authorized individuals gain access to information or systems necessary to undertake their duties. See Principle of least privilege.
Computer security
In computer security, access control includes authentication, authorization and audit. It also includes measures such as physical devices, including biometric scans and metal locks, hidden paths, digital signatures, encryption, social barriers, and monitoring by humans and automated systems.

In any access control model, the entities that can perform actions in the system are called subjects, and the entities representing resources to which access may need to be controlled are called objects (see also Access Control Matrix). Subjects and objects should both be considered as software entities and as human users.[1] Although some systems equate subjects with user IDs, so that all processes started by a user by default have the same authority, this level of control is not fine-grained enough to satisfy the principle of least privilege, and is arguably responsible for the prevalence of malware in such systems (see computer insecurity). In some models, for example the object-capability model, any software entity can potentially act as both a subject and an object.

Access control models used by current systems tend to fall into one of two classes: those based on capabilities and those based on access control lists (ACLs). In a capability-based model, holding an unforgeable reference or capability to an object provides access to the object (roughly analogous to how possession of your house key grants you access to your house); access is conveyed to another party by transmitting such a capability over a secure channel. In an ACL-based model, a subject's access to an object depends on whether its identity is on a list associated with the object (roughly analogous to how a bouncer at a private party would check your ID to see if your name is on the guest list); access is conveyed by editing the list. (Different ACL systems have a variety of conventions regarding who or what is responsible for editing the list and how it is edited.)
Both capability-based and ACL-based models have mechanisms to allow access rights to be granted to all members of a group of subjects (often the group is itself modeled as a subject). Access control systems provide the essential services of identification and authentication (I&A), authorization, and accountability, where:
- identification and authentication determine who can log on to a system, and the association of users with the software subjects that they are able to control as a result of logging in;
- authorization determines what a subject can do;
- accountability identifies what a subject (or all subjects associated with a user) did.
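The two classes of model can be contrasted in a few lines of Python (a minimal illustration with invented object names and data, not a production design):

```python
from secrets import token_hex

# ACL-based model: each object carries a list of (subject, rights) entries.
acl = {"report.txt": {"alice": {"read", "write"}, "bob": {"read"}}}

def acl_allows(subject, obj, right):
    """Access depends on the subject's identity appearing on the object's list."""
    return right in acl.get(obj, {}).get(subject, set())

# Capability-based model: holding an unforgeable token IS the access;
# access is conveyed by handing the token to another party.
capabilities = {}

def mint_capability(obj, rights):
    token = token_hex(16)            # unguessable reference to the object
    capabilities[token] = (obj, frozenset(rights))
    return token

def cap_allows(token, right):
    entry = capabilities.get(token)
    return entry is not None and right in entry[1]
```

Granting access in the ACL model means editing the object's list; in the capability model it means minting or copying a token.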
entity asserts an identity together with an authenticator as a means for validation. The only requirement for the identifier is that it must be unique within its security domain. Authenticators are commonly based on at least one of the following four factors:
- Something you know, such as a password or a personal identification number (PIN). This assumes that only the owner of the account knows the password or PIN needed to access the account.
- Something you have, such as a smart card or security token. This assumes that only the owner of the account has the necessary smart card or token needed to unlock the account.
- Something you are, such as fingerprint, voice, retina, or iris characteristics.
- Where you are, for example inside or outside a company firewall, or proximity of the login location to a personal GPS device.
Authorization
Authorization applies to subjects. Authorization determines what a subject can do on the system. Most modern operating systems define sets of permissions that are variations or extensions of three basic types of access:
- Read (R): The subject can read file contents or list directory contents.
- Write (W): The subject can change the contents of a file or directory, with the following tasks: add, create, delete, rename.
- Execute (X): If the file is a program, the subject can cause the program to be run. (In Unix systems, the 'execute' permission doubles as a 'traverse directory' permission when granted for a directory.)
These rights and permissions are implemented differently in systems based on discretionary access control (DAC) and mandatory access control (MAC).
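On Unix systems the three basic rights surface as bits in a file's mode word; a small illustrative helper can decode the owner's triplet using the standard `stat` constants:

```python
import stat

def owner_rights(mode):
    """Decode the owner's read/write/execute bits of a Unix mode word."""
    return "".join(
        flag if mode & bit else "-"
        for flag, bit in (("r", stat.S_IRUSR),
                          ("w", stat.S_IWUSR),
                          ("x", stat.S_IXUSR)))

# For example, mode 0o750 gives the owner all three rights,
# the group read+execute, and others nothing.
```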
Accountability
Accountability uses such system components as audit trails (records) and logs to associate a subject with its actions. The information recorded should be sufficient to map the subject to a controlling user. Audit trails and logs are important for:
- detecting security violations;
- re-creating security incidents.
If no one is regularly reviewing your logs, and they are not maintained in a secure and consistent manner, they may not be admissible as evidence. Many systems can generate automated reports based on certain predefined criteria or thresholds, known as clipping levels. For example, a clipping level may be set to generate a report for the following:
- more than three failed logon attempts in a given period;
- any attempt to use a disabled user account.
These reports help a system administrator or security administrator to more easily identify possible break-in attempts.
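A clipping-level report of the kind described above reduces to counting events in an audit trail; a minimal sketch (the log format is invented):

```python
from collections import Counter

CLIPPING_LEVEL = 3   # report users with more than three failed logons

def clipping_report(events):
    """events: iterable of (user, outcome) pairs taken from an audit log."""
    failures = Counter(user for user, outcome in events if outcome == "fail")
    return sorted(user for user, count in failures.items()
                  if count > CLIPPING_LEVEL)
```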
Access control
63
upper-bound values for a pair of elements, such as a subject and an object. Few systems implement MAC; XTS-400 is an example of one that does. The computer system at the company in the movie Tron is an example of MAC in popular culture.

Role-based access control

Role-based access control (RBAC) is an access policy determined by the system, not the owner. RBAC is used in commercial applications and also in military systems, where multi-level security requirements may also exist. RBAC differs from DAC in that DAC allows users to control access to their resources, while in RBAC, access is controlled at the system level, outside of the user's control. Although RBAC is non-discretionary, it can be distinguished from MAC primarily in the way permissions are handled. MAC controls read and write permissions based on a user's clearance level and additional labels. RBAC controls collections of permissions that may include complex operations such as an e-commerce transaction, or may be as simple as read or write. A role in RBAC can be viewed as a set of permissions.

Three primary rules are defined for RBAC:
1. Role assignment: A subject can execute a transaction only if the subject has selected or been assigned a role.
2. Role authorization: A subject's active role must be authorized for the subject. With rule 1 above, this rule ensures that users can take on only roles for which they are authorized.
3. Transaction authorization: A subject can execute a transaction only if the transaction is authorized for the subject's active role. With rules 1 and 2, this rule ensures that users can execute only transactions for which they are authorized.

Additional constraints may be applied as well, and roles can be combined in a hierarchy where higher-level roles subsume permissions owned by sub-roles. Most IT vendors offer RBAC in one or more products.
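The three rules translate directly into code; a minimal sketch (the role and permission tables are invented for illustration):

```python
# Hypothetical role/permission tables.
ROLE_PERMS = {"teller": {"open_account", "deposit"},
              "auditor": {"read_ledger"}}
USER_ROLES = {"alice": {"teller", "auditor"}, "bob": {"teller"}}

def can_execute(user, active_role, transaction):
    # Rule 1 (role assignment): a role must be selected or assigned.
    if active_role is None:
        return False
    # Rule 2 (role authorization): the active role must be authorized
    # for this subject.
    if active_role not in USER_ROLES.get(user, set()):
        return False
    # Rule 3 (transaction authorization): the transaction must be
    # authorized for the active role.
    return transaction in ROLE_PERMS.get(active_role, set())
```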
Telecommunication
In telecommunication, the term access control is defined in U.S. Federal Standard 1037C[2] with the following meanings:
1. A service feature or technique used to permit or deny use of the components of a communication system.
2. A technique used to define or restrict the rights of individuals or application programs to obtain data from, or place data onto, a storage device.
3. The definition or restriction of the rights of individuals or application programs to obtain data from, or place data into, a storage device.
4. The process of limiting access to the resources of an AIS (automated information system) to authorized users, programs, processes, or other systems.
5. That function performed by the resource controller that allocates system resources to satisfy user requests.
Note that this definition depends on several other technical terms from Federal Standard 1037C.
Public policy
In public policy, access control to restrict access to systems ("authorization") or to track or monitor behavior within systems ("accountability") is an implementation feature of using trusted systems for security or social control.
See also
Access badge, Access control vestibule, Alarm, Alarm management, Bank vault, Biometrics, Burglar alarm, Card reader, Castle, Common Access Card, Computer security, Credential, Door security, Electronic lock, Fortification, Htaccess, ID card, IP controller, IP reader, Key cards, Key management, Locksmithing, Lock picking, Logical security, Magnetic stripe card, Optical turnstile, Photo identification, Physical key management, Physical Security Professional, Prison, Proximity card, Razor wire, Safe, Safe-cracking, Security, Security engineering, Security lighting, Security management, Security policy, Smart card, Swipe card, Wiegand effect, XACML
References
- U.S. Federal Standard 1037C
- U.S. MIL-STD-188
- U.S. National Information Systems Security Glossary
- Harris, Shon, All-in-One CISSP Exam Guide, Third Edition, McGraw-Hill Osborne, Emeryville, California, 2005.
[1] http://www.techexams.net/technotes/securityplus/mac_dac_rbac.shtml
[2] http://www.its.bldrdoc.gov/fs-1037/other/a.pdf
External links
- eXtensible Access Control Markup Language (http://xml.coverpages.org/xacml.html): an OASIS standard language/model for access control. Also XACML.
- Access Control Authentication article on AuthenticationWorld.com (http://www.authenticationworld.com/Access-Control-Authentication/)
- Entrance Technology Options (http://www.sptnews.ca/index.php/20070907739/Articles/Entrance-Technology-Options.html), article at SP&T News
- Novel chip-in access control technology used in an Austrian ski resort (http://www.sourcesecurity.com/news/articles/co-1040-ga-co-3879-ga.2311.html)
- Beyond Authentication, Authorization and Accounting (http://ism3.wordpress.com/2009/04/23/beyondaaa/)
Filesystem ACLs
A filesystem ACL is a data structure (usually a table) containing entries that specify individual user or group rights to specific system objects such as programs, processes, or files. These entries are known as access control entries (ACEs) in the Microsoft Windows NT, OpenVMS, Unix-like, and Mac OS X operating systems. Each accessible object contains an identifier to its ACL. The privileges or permissions determine specific access rights, such as whether a user can read from, write to, or execute an object. In some implementations an ACE can control whether or not a user, or group of users, may alter the ACL on an object. Most of the Unix and Unix-like operating systems (e.g. Linux,[1] BSD, or Solaris) support so-called POSIX.1e ACLs, based on an early POSIX draft that was abandoned. Many of them, for example AIX, Mac OS X beginning with version 10.4 ("Tiger"), or Solaris with the ZFS filesystem,[2] support NFSv4 ACLs, which are part of the NFSv4 standard. FreeBSD 9-CURRENT supports NFSv4 ACLs on both UFS and ZFS file systems; full support is expected to be backported to version 8.1.[3] There is an experimental implementation of NFSv4 ACLs for Linux.[4]
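The table-of-ACEs idea can be sketched as follows; the ordered, first-match evaluation with deny entries placed first is a simplification of how NT-style ACLs are commonly arranged, and the groups and rights are invented:

```python
# A toy ACL: an ordered table of access control entries (ACEs).
ACL = [
    ("deny",  "guests", "write"),
    ("allow", "staff",  "read"),
    ("allow", "staff",  "write"),
]

def check(groups, right):
    """First matching ACE wins; no match means an implicit deny."""
    for kind, group, r in ACL:
        if group in groups and r == right:
            return kind == "allow"
    return False
```

Because evaluation stops at the first match, a user who is in both "guests" and "staff" is still denied write access by the leading deny entry.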
Networking ACLs
On some types of proprietary computer hardware, an access control list refers to rules that are applied to port numbers or network daemon names that are available on a host or other layer-3 device, each with a list of hosts and/or networks permitted to use the service. Both individual servers and routers can have network ACLs. Access control lists can generally be configured to control both inbound and outbound traffic, and in this context they are similar to firewalls.
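A network ACL of this kind amounts to a first-match rule table; a minimal sketch (the rule table is invented, in the spirit of a router ACL permitting SSH only from an internal network and HTTP from anywhere):

```python
import ipaddress

# Invented first-match rule table: (action, source network, destination port).
RULES = [
    ("permit", "10.0.0.0/8", 22),
    ("deny",   "0.0.0.0/0",  22),
    ("permit", "0.0.0.0/0",  80),
]

def filter_packet(src_ip, dst_port):
    """Evaluate rules top-down; real routers typically end with an implicit deny."""
    addr = ipaddress.ip_address(src_ip)
    for action, network, port in RULES:
        if addr in ipaddress.ip_network(network) and port == dst_port:
            return action == "permit"
    return False
```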
See also
- Standard Access Control List, Cisco IOS configuration rules
- Role-based access control
- Confused deputy problem
- Capability-based security
- Cacls
External links
- FreeBSD Handbook: File System Access Control Lists [5]
- SELinux and grsecurity: A Case Study Comparing Linux Security Kernel Enhancements [6]
- Susan Hinrichs, "Operating System Security" [7]
- John Mitchell, "Access Control and Operating System Security" [8]
- Michael Clarkson, "Access Control" [9]
- Microsoft MSDN Library: Access Control Lists [10]
- Microsoft TechNet: How Permissions Work [11]
This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.
References
[1] Support for ACL and EA introduced in RHEL 3 in October 2003 (http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/release-notes/as-x86/) (the patch existed before, but has been officially in the kernel since 2.6, released in December 2003)
[2] "8. Using ACLs to Protect ZFS Files (Solaris ZFS Administration Guide) - Sun Microsystems" (http://docs.sun.com/app/docs/doc/819-5461/ftyxi?a=view). Docs.sun.com. 2009-10-01. Retrieved 2010-05-04.
[3] "NFSv4_ACLs - FreeBSD Wiki" (http://wiki.freebsd.org/NFSv4_ACLs). Wiki.freebsd.org. 2010-04-20. Retrieved 2010-05-04.
[4] "Native NFSv4 ACLs on Linux" (http://www.suse.de/~agruen/nfs4acl/). Suse.de. Retrieved 2010-05-04.
[5] http://www.freebsd.org/doc/en/books/handbook/fs-acl.html
[6] http://www.cs.virginia.edu/~jcg8f/GrsecuritySELinuxCaseStudy.pdf
[7] http://www.cs.uiuc.edu/class/fa05/cs498sh/seclab/slides/OSNotes.ppt
[8] http://crypto.stanford.edu/cs155old/cs155-spring03/lecture9.pdf
[9] http://www.cs.cornell.edu/courses/cs513/2007fa/NL.accessControl.html
[10] http://msdn.microsoft.com/en-us/library/aa374872(VS.85).aspx
[11] http://technet.microsoft.com/en-us/library/cc783530(WS.10).aspx
Password
A password is a secret word or string of characters that is used for authentication, to prove identity or gain access to a resource (example: an access code is a type of password). The password should be kept secret from those not allowed access. The use of passwords is known to be ancient. Sentries would challenge those wishing to enter an area or approaching it to supply a password or watchword, and would only allow a person or group to pass if they knew it. In modern times, user names and passwords are commonly used by people during a log-in process that controls access to protected computer operating systems, mobile phones, cable TV decoders, automated teller machines (ATMs), etc. A typical computer user may require passwords for many purposes: logging in to computer accounts, retrieving e-mail from servers, accessing programs, databases, networks, web sites, and even reading the morning newspaper online. Despite the name, there is no need for passwords to be actual words; indeed passwords which are not actual words may be harder to guess, a desirable property. Some passwords are formed from multiple words and may more accurately be called a passphrase. The term passcode is sometimes used when the secret information is purely numeric, such as the personal identification number (PIN) commonly used for ATM access. Passwords are generally short enough to be easily memorized and typed. For the purposes of more compellingly authenticating the identity of one computing device to another, passwords have significant disadvantages (they may be stolen, spoofed, forgotten, etc.) compared with authentication systems relying on cryptographic protocols, which are more difficult to circumvent.
A modified version of the DES algorithm was used for this purpose in early Unix systems. The Unix DES function was iterated to make the hash function deliberately slow, further frustrating automated guessing attacks, and used the password candidate as a key to encrypt a fixed value, thus blocking yet another attack on the password shrouding system. More recent Unix or Unix-like systems (e.g., Linux or the various BSD systems) use what most believe to be still more effective protective mechanisms based on MD5, SHA-1, Blowfish, Twofish, or any of several other algorithms to prevent or frustrate attacks on stored password files.[7]

If the hash function is well designed, it will be computationally infeasible to reverse it to directly find a plaintext password. However, many systems do not protect their hashed passwords adequately, and if an attacker can gain access to the hashed values he can use widely available tools which compare the hashed outcome of every word from some list, such as a dictionary (many are available on the Internet). Large lists of possible passwords in many languages are widely available on the Internet, as are software programs to try common variations. The existence of these dictionary attack tools constrains user password choices: to resist easy attack, a password must not appear on such lists. Use of a key-stretching hash such as PBKDF2 is designed to reduce this risk. A poorly designed hash function can make attacks feasible even if a strong password is chosen; see LM hash for a very widely deployed, and deplorably insecure, example.[8]
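The salted, iterated hashing described above can be sketched with Python's standard library. This is an illustrative use of PBKDF2, the key-stretching hash mentioned in the text, not the historical DES-based Unix implementation; the iteration count and salt length are arbitrary example values.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Salted, iterated hash: deliberately slow to frustrate guessing."""
    if salt is None:
        salt = os.urandom(16)  # random per-password salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, stored, iterations=100_000):
    """Re-derive the hash from a candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse")
print(verify("correct horse", salt, stored))  # True
print(verify("wrong guess", salt, stored))    # False
```

Because each guess costs 100,000 hash iterations, a dictionary attack against the stored digest is slowed by the same factor.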
A password in transit typically exists on multiple computers, certainly on the originating and receiving computers, most often in cleartext.

Transmission through encrypted channels

The risk of interception of passwords sent over the Internet can be reduced by, among other approaches, using cryptographic protection. The most widely used is the Transport Layer Security (TLS, previously called SSL) feature built into most current Internet browsers. Most browsers alert the user to a TLS/SSL-protected exchange with a server by displaying a closed-lock icon, or some other sign, when TLS is in use. There are several other techniques in use; see cryptography.

Hash-based challenge-response methods

Unfortunately, there is a conflict between stored hashed passwords and hash-based challenge-response authentication: the latter requires a client to prove to a server that it knows the shared secret (i.e., the password), and to do this, the server must be able to obtain the shared secret from its stored form. On many systems (including Unix-type systems) doing remote authentication, the shared secret thus becomes the hashed form, which has the serious limitation of exposing passwords to offline guessing attacks. In addition, when the hash is used as the shared secret, an attacker does not need the original password to authenticate remotely; the hash alone suffices.

Zero-knowledge password proofs

Rather than transmitting a password, or transmitting the hash of the password, password-authenticated key agreement systems can perform a zero-knowledge password proof, which proves knowledge of the password without exposing it. Moving a step further, augmented systems for password-authenticated key agreement (e.g., AMP, B-SPEKE, PAK-Z, SRP-6) avoid both the conflict and the limitation of hash-based methods. An augmented system allows a client to prove knowledge of the password to a server that holds only a (not exactly) hashed password, where the unhashed password is required to gain access.
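The conflict described above, where the stored hash itself becomes the shared secret, can be illustrated with a minimal HMAC challenge-response sketch. The names and choice of SHA-256 are illustrative assumptions, not taken from any particular protocol.

```python
import hashlib
import hmac
import os

# The server stores H(password); in challenge-response authentication,
# that stored value *itself* becomes the shared secret.
stored_secret = hashlib.sha256(b"hunter2").digest()

def challenge() -> bytes:
    """Server sends a fresh random nonce."""
    return os.urandom(16)

def respond(secret: bytes, nonce: bytes) -> bytes:
    """Prove knowledge of the secret without sending it."""
    return hmac.new(secret, nonce, hashlib.sha256).digest()

nonce = challenge()
client_response = respond(hashlib.sha256(b"hunter2").digest(), nonce)
assert hmac.compare_digest(client_response, respond(stored_secret, nonce))

# The limitation noted in the text: an attacker who steals stored_secret
# can answer any challenge without ever knowing the password "hunter2".
```

The response never reveals the password on the wire, but a stolen hash database is as good as the passwords themselves for this kind of login.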
Password longevity
"Password aging" is a feature of some operating systems which forces users to change passwords frequently (e.g., quarterly, monthly or even more often), with the intent that a stolen password will become unusable more or less quickly. Such policies usually provoke user protest and foot-dragging at best and hostility at worst, and users may develop simple variation patterns to keep their passwords memorable. In any case, the security benefits are limited at best, because attackers often exploit a password as soon as it is compromised, which will probably be some time before a change is required. In many cases, particularly with administrative or "root" accounts, once an attacker has gained access he can make alterations to the operating system that will allow him future access even after the initial password he used expires (see rootkit). Implementing such a policy requires careful consideration of the relevant human factors.
...the next time period. However, this is vulnerable to a form of denial-of-service attack: an attacker who deliberately exhausts the allowed attempts can lock the legitimate user out.

Introducing a delay between password submission attempts slows down automated password-guessing programs.

Some of the more stringent policy enforcement measures can pose a risk of alienating users, possibly decreasing security as a result.
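The delay countermeasure mentioned above can be sketched as a simple per-account backoff. This is an illustrative design, not from the article; the exponential base and 60-second cap are arbitrary choices.

```python
class LoginThrottle:
    """Grow the required wait after each failed attempt for an account."""

    def __init__(self):
        self.failures = {}  # user -> consecutive failure count

    def delay_for(self, user: str):
        """Seconds the caller must wait before the next attempt."""
        n = self.failures.get(user, 0)
        return 0.0 if n == 0 else min(2 ** (n - 1), 60)  # cap at 60 s

    def record_failure(self, user: str):
        self.failures[user] = self.failures.get(user, 0) + 1

    def record_success(self, user: str):
        self.failures.pop(user, None)  # reset on successful login

t = LoginThrottle()
for _ in range(5):
    t.record_failure("alice")
print(t.delay_for("alice"))  # 16 seconds after five failures
```

Note that because the delay is keyed per account, an attacker can still impose it on a legitimate user, which is the denial-of-service risk the text mentions.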
Password cracking
Attempting to crack passwords by trying as many possibilities as time and money permit is a brute force attack. A related method, rather more efficient in most cases, is a dictionary attack: all words in one or more dictionaries are tested, and lists of common passwords are also typically tested.

Password strength is the likelihood that a password cannot be guessed or discovered; it varies with the attack algorithm used. Passwords easily discovered are termed weak or vulnerable; passwords very difficult or impossible to discover are considered strong. There are several programs available for password attack (or even auditing and recovery by systems personnel) such as L0phtCrack, John the Ripper, and Cain, some of which exploit password design vulnerabilities (as found in the Microsoft LAN Manager system) to increase efficiency. These programs are sometimes used by system administrators to detect weak passwords proposed by users.

Studies of production computer systems have consistently shown that a large fraction of all user-chosen passwords are readily guessed automatically. For example, Columbia University found 22% of user passwords could be recovered with little effort.[10] According to Bruce Schneier, examining data from a 2006 phishing attack, 55% of MySpace passwords would be crackable in 8 hours using a commercially available Password Recovery Toolkit capable of testing 200,000 passwords per second in 2006.[11] He also reported that the single most common password was password1, confirming yet again the general lack of informed care in choosing passwords among users.
(He nevertheless maintained, based on these data, that the general quality of passwords has improved over the years; for example, average length was up to eight characters from under seven in previous surveys, and less than 4% were dictionary words.[12])

1998 incident

On July 16, 1998, CERT reported an incident[13] in which an intruder had collected 186,126 account names with their respective encrypted passwords. At the time of the discovery, the intruder had guessed 47,642 (25.6%) of these passwords using a password cracking tool. The passwords appeared to have been collected from several other sites; some were identified, but not all. This is still the largest reported incident to date.
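A dictionary attack of the kind described above can be sketched in a few lines. The word list, the variation rules, and the use of an unsalted fast hash (exactly the situation such attacks exploit) are illustrative assumptions.

```python
import hashlib

def crack(target_hash, wordlist):
    """Try each candidate word plus a few trivial variations."""
    for word in wordlist:
        for candidate in (word, word.capitalize(), word + "1"):
            if hashlib.sha256(candidate.encode()).digest() == target_hash:
                return candidate
    return None

# An unsalted, fast hash of the most common password from the MySpace data.
target = hashlib.sha256(b"password1").digest()
print(crack(target, ["letmein", "password", "qwerty"]))  # password1
```

Real tools such as John the Ripper apply thousands of such mangling rules per word, which is why passwords derivable from dictionary words fall quickly.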
...on the user's screen.

Access controls based on public key cryptography, e.g. ssh. The necessary keys are usually too large to memorize (but see the Passmaze proposal [14]) and must be stored on a local computer, security token or portable memory device, such as a USB flash drive or even a floppy disk.

Biometric methods promise authentication based on unalterable personal characteristics, but currently (2008) have high error rates and require additional hardware to scan, for example, fingerprints, irises, etc. They have proven easy to spoof in some famous incidents testing commercially available systems, for example the gummy fingerprint spoof demonstration,[15] and, because these characteristics are unalterable, they cannot be changed if compromised; this is a highly important consideration in access control, as a compromised access token is necessarily insecure.

Single sign-on technology is claimed to eliminate the need for multiple passwords. Such schemes do not relieve users and administrators from choosing reasonable single passwords, nor system designers or administrators from ensuring that the private access control information passed among systems to enable single sign-on is secure against attack. As yet, no satisfactory standard has been developed.

Envaulting technology is a password-free way to secure data on removable storage devices such as USB flash drives. Instead of user passwords, access control is based on the user's access to a network resource.
Non-text-based passwords, such as graphical passwords or mouse-movement-based passwords,[16] are another alternative. One system requires users to select a series of faces as a password, exploiting the human brain's ability to recall faces easily.[17] So far, these are promising but not widely used; studies have been made to determine their usability in the real world.

Graphical passwords are an alternative means of authentication intended to be used in place of conventional passwords; they use images, graphics or colours instead of letters, digits or special characters. In some implementations the user is required to pick from a series of images in the correct sequence in order to gain access.[18] While some believe that graphical passwords would be harder to crack, others suggest that people will be just as likely to pick common images or sequences as they are to pick common passwords.

2D Key (2-Dimensional Key)[19] is a 2D matrix-like key input method with the key styles of multiline passphrase, crossword, ASCII/Unicode art, and optional textual semantic noise, intended to create large passwords/keys beyond 128 bits, realizing MePKC (Memorizable Public-Key Cryptography) with a fully memorizable private key on top of current private-key management technologies such as encrypted private keys, split private keys, and roaming private keys.

Cognitive passwords use question-and-answer cue/response pairs to verify identity.
Some web sites take advantage of server-side scripting languages such as ASP or PHP to authenticate users on the server before delivering the source code to the browser. Popular systems such as Sentry Login [20] and Password Sentry [21] protect web pages by placing such scripting-language code snippets in front of the HTML code in the web page source, saved with the appropriate extension on the server, such as .asp or .php.
History of passwords
Passwords or watchwords have been used since ancient times. Polybius describes the system for distributing watchwords in the Roman military as follows:

The way in which they secure the passing round of the watchword for the night is as follows: from the tenth maniple of each class of infantry and cavalry, the maniple which is encamped at the lower end of the street, a man is chosen who is relieved from guard duty, and he attends every day at sunset at the tent of the tribune, and receiving from him the watchword - that is a wooden tablet with the word inscribed on it - takes his leave, and on returning to his quarters passes on the watchword and tablet before witnesses to the commander of the next maniple, who in turn passes it to the one next him. All do the same until it reaches the first maniples, those encamped near the tents of the tribunes. These latter are obliged to deliver the tablet to the tribunes before dark. So that if all those issued are returned, the tribune knows that the watchword has been given to all the maniples, and has passed through all on its way back to him. If any one of them is missing, he makes inquiry at once, as he knows by the marks from what quarter the tablet has not returned, and whoever is responsible for the stoppage meets with the punishment he merits.[22]

Passwords in military use evolved to include not just a password, but a password and a counterpassword; for example, in the opening days of the Battle of Normandy, paratroopers of the U.S. 101st Airborne Division used a password, "thunder", which was presented as a challenge and answered with the correct response, "flash". The challenge and response were changed periodically.
American paratroopers also famously used a device known as a "cricket" on D-Day in place of a password system as a temporarily unique method of identification; one metallic click given by the device in lieu of a password was to be met by two clicks in reply.[23]

Passwords have been used with computers since the earliest days of computing. MIT's CTSS, one of the first time-sharing systems, was introduced in 1961. It had a LOGIN command that requested a user password: "After typing PASSWORD, the system turns off the printing mechanism, if possible, so that the user may type in his password with privacy."[24] Robert Morris invented the idea of storing login passwords in a hashed form as part of the Unix operating system. His algorithm, known as crypt(3), used a 12-bit salt and invoked a modified form of the DES algorithm 25 times to reduce the risk of pre-computed dictionary attacks.
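The effect of Morris's 12-bit salt can be illustrated with a modern hash standing in for the original DES-based crypt: the same password hashed under different salts produces unrelated digests, so a precomputed dictionary must be rebuilt for every salt value.

```python
import hashlib

password = b"thunderflash"

# The same password under two different salts yields unrelated digests.
h1 = hashlib.sha256(b"salt-A" + password).hexdigest()
h2 = hashlib.sha256(b"salt-B" + password).hexdigest()
print(h1 != h2)  # True

# crypt(3)'s 12-bit salt means 2**12 variants per dictionary word,
# multiplying the cost of any precomputed table by that factor.
print(2 ** 12)   # 4096
```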
See also
Access Code
Authentication
CAPTCHA
Diceware
Keyfile
Passphrase
Password manager
Password policy
Password psychology
Password fatigue
Password-authenticated key agreement
Password notification e-mail
Password synchronization
Pre-shared key
Random password generator
Rainbow table
Self-service password reset
Shibboleth
External links
Large collection of statistics about passwords [25]
Graphical Passwords: A Survey [26]
PassClicks [27]
PassImages [28]
Links for password-based cryptography [29]
Procedural Advice for Organisations and Administrators [30]
Memorability and Security of Passwords [31] - Cambridge University Computer Laboratory study of password memorability vs. security.
References
[1] Vance, Ashlee (January 20, 2010). "If Your Password Is 123456, Just Make It HackMe" (http://www.nytimes.com/2010/01/21/technology/21password.html). The New York Times.
[2] Fred Cohen and Associates (http://all.net/journal/netsec/1997-09.html)
[3] The Memorability and Security of Passwords (http://homepages.cs.ncl.ac.uk/jeff.yan/jyan_ieee_pwd.pdf)
[4] Lyquix Blog: Do We Need to Hide Passwords? (http://www.lyquix.com/blog/92-do-we-need-to-hide-passwords)
[5] news.bbc.co.uk: Malaysia car thieves steal finger (http://news.bbc.co.uk/2/hi/asia-pacific/4396831.stm)
[6] Top ten passwords used in the United Kingdom (http://www.modernlifeisrubbish.co.uk/top-10-most-common-passwords.asp)
[7] Password Protection for Modern Operating Systems (http://www.usenix.org/publications/login/2004-06/pdfs/alexander.pdf)
[8] http://support.microsoft.com/default.aspx?scid=KB;EN-US;q299656
[9] "To Capitalize or Not to Capitalize?" (http://world.std.com/~reinhold/dicewarefaq.html#capitalize)
[10] Password (http://www1.cs.columbia.edu/~crf/howto/password-howto.html)
[11] Schneier, Real-World Passwords (http://www.schneier.com/blog/archives/2006/12/realworld_passw.html)
[12] MySpace Passwords Aren't So Dumb (http://www.wired.com/politics/security/commentary/securitymatters/2006/12/72300)
[13] "CERT IN-98.03" (http://www.cert.org/incident_notes/IN-98.03.html). Retrieved 2009-09-09.
[14] http://eprint.iacr.org/2005/434
[15] T. Matsumoto, H. Matsumoto, K. Yamada, and S. Hoshino, "Impact of Artificial 'Gummy' Fingers on Fingerprint Systems". Proc. SPIE, vol. 4677, Optical Security and Counterfeit Deterrence Techniques IV, p. 356.
[16] http://waelchatila.com/2005/09/18/1127075317148.html
[17] http://mcpmag.com/reviews/products/article.asp?EditorialsID=486
[18] http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci1001829,00.html
[19] http://www.xpreeli.com/doc/manual_2DKey_2.0.pdf
[20] http://www.sentrylogin.com
[21] http://www.monster-submit.com/sentry/
[22] Polybius on the Roman Military (http://ancienthistory.about.com/library/bl/bl_text_polybius6.htm)
[23] Bando, Mark. Screaming Eagles: Tales of the 101st Airborne Division in World War II.
[24] CTSS Programmers Guide, 2nd Ed., MIT Press, 1965.
[25] http://www.passwordresearch.com/stats/statindex.html
[26] http://www.acsac.org/2005/abstracts/89.html
[27] http://labs.mininova.org/passclicks/
[28] http://www.network-research-group.org/default.asp?page=publications
[29] http://www.jablon.org/passwordlinks.html
[30] http://www.emiic.net/services/guides/Passwords%20Guide.pdf
[31] http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/tr500.pdf
Hacker (computer security)

In common usage, a hacker is a person who breaks into computers and computer networks, whether for profit or for the challenge.[1] The subculture that has evolved around such hackers is often referred to as the computer underground, but it is now a fairly open community.[2] Other uses of the word hacker exist that are not related to computer security (the computer programmer and the home computer hobbyist), but these are rarely used by the mainstream media because of the stereotype reinforced by portrayals in TV and movies. Some would argue that the people now labelled hackers are not hackers at all: a hacker community existed before the media applied the word to people who break into computers. That earlier community had a strong interest in computer programming and often shared, without restriction, the source code for the software its members wrote. These people now refer to cyber-criminal hackers as "crackers".[3]
History
In today's society, understanding the term hacker is complicated because it has many different definitions. The term can be traced back to the Massachusetts Institute of Technology (MIT). MIT was the first institution to offer courses in computer programming and computer science, and it was there, in 1960, that a group of MIT students taking a lab on artificial intelligence first coined the word. These students called themselves hackers because they were able to take programs and make them perform actions not intended for those programs. The term grew out of a practical-joke culture and a feeling of excitement, because team members would hack away at the keyboard for hours at a time (Moore R., 2006).[4]

Hacking developed alongside "phone phreaking", a term referring to exploration of the phone network without authorization, and there has often been overlap in both technology and participants. The first recorded hack was accomplished by Joe Engressia, also known as The Whistler, who is regarded as the grandfather of phreaking; his technique was that he could whistle a perfect tone into a phone and make free calls.[5] Bruce Sterling traces part of the roots of the computer underground to the Yippies, a 1960s counterculture movement which published the Technological Assistance Program (TAP) newsletter.[6] Other sources of early-1970s hacker culture can be traced to more beneficial forms of hacking, including the MIT labs and the Homebrew Computer Club, which later gave rise to such things as early personal computers and the open source movement.
Hacker attitudes
Several subgroups of the computer underground with different attitudes and aims use different terms to demarcate themselves from each other, or try to exclude some specific group with which they do not agree. Eric S. Raymond (author of The New Hacker's Dictionary) advocates that members of the computer underground should be called crackers. Yet those people see themselves as hackers, and even try to include the views of Raymond in what they see as one wider hacker culture, a view harshly rejected by Raymond himself. Instead of a hacker/cracker dichotomy, they give more emphasis to a spectrum of different categories, such as white hat, grey hat, black hat and script kiddie. In contrast to Raymond, they usually reserve the term cracker for more malicious activity. According to Clifford (2006), a cracker or cracking is to "gain unauthorized access to a computer in order to commit another crime such as destroying information contained in that system".[9] These subgroups may also be defined by the legal status of their activities.[10]

According to Steven Levy, an American journalist who has written several books on computers, technology, cryptography, and cybersecurity, most hacker motives are reflected by the Hacker Ethic: "Access to computers and anything that might teach you something about the way the world works should be unlimited and total. Always yield to the Hands-On Imperative! All information should be free. Mistrust authority and promote decentralization. Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position. You can create art and beauty on a computer. Computers can change your life for the better."[5]
White hat
A white hat hacker breaks security for non-malicious reasons, for instance to test their own security system. This classification also includes individuals who perform penetration tests and vulnerability assessments within a contractual agreement. Often, this type of 'white hat' hacker is called an ethical hacker. The International Council of Electronic Commerce Consultants, also known as the EC-Council,[11] has developed certifications, courseware, classes, and online training covering the diverse arena of ethical hacking.[10]
Grey hat
A grey hat hacker is a combination of a black hat and a white hat hacker. A grey hat hacker may surf the Internet and hack into a computer system for the sole purpose of notifying the administrator that their system has been compromised, then offer to repair the system for a small fee.[4]
Blue Hat
A blue hat hacker is someone outside computer security consulting firms who bug-tests a system prior to its launch, looking for exploits so they can be closed. Microsoft also uses the term BlueHat[12] for a series of security briefing events.[13] [14] [15]
Black hat
A black hat hacker, sometimes called "cracker", is someone who breaks computer security without authorization or uses technology (usually a computer, phone system or network) for vandalism, credit card fraud, identity theft, piracy, or other types of illegal activity.[10] [16]
Script kiddie
A script kiddie is a non-expert who breaks into computer systems by using pre-packaged automated tools written by others, usually with little understanding of the underlying concept; hence the term script (i.e., a prearranged plan or set of activities) kiddie (i.e., kid, child; an individual lacking knowledge and experience, immature).[16]
Neophyte
A neophyte, or "newbie", is someone who is new to hacking or phreaking and has almost no knowledge or experience of how technology and hacking work.[4]
Hacktivism
A hacktivist is a hacker who utilizes technology to announce a social, ideological, religious, or political message. In general, most hacktivism involves website defacement or denial-of-service attacks. In more extreme cases, hacktivism is used as a tool for cyberterrorism.
Common methods
A typical approach in an attack on an Internet-connected system is:
1. Network enumeration: discovering information about the intended target.
2. Vulnerability analysis: identifying potential ways of attack.
3. Exploitation: attempting to compromise the system by employing the vulnerabilities found through the vulnerability analysis.[17]
In order to do so, there are several recurring tools of the trade and techniques used by computer criminals and security experts.
Security exploit
A security exploit is a prepared application that takes advantage of a known weakness. Common examples are SQL injection, cross-site scripting and cross-site request forgery, which abuse security holes that may result from substandard programming practice. Other exploits can be used through FTP, HTTP, PHP, SSH, Telnet and some web pages. These are very common in website/domain hacking.
Vulnerability scanner
A vulnerability scanner is a tool used to quickly check computers on a network for known weaknesses. Hackers also commonly use port scanners. These check to see which ports on a specified computer are "open" or available to access the computer, and sometimes will detect what program or service is listening on that port, and its version number. (Note that firewalls defend computers from intruders by limiting access to ports/machines both inbound and outbound, but can still be circumvented.)
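The port-scanning behaviour described above can be sketched as a minimal TCP connect scanner. This is an illustrative sketch (timeout and port list are arbitrary); scan only hosts you are authorized to test.

```python
import socket

def scan(host, ports):
    """Report which TCP ports on `host` accept a connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)                    # don't hang on closed ports
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Probe a few well-known ports on the local machine only.
print(scan("127.0.0.1", [22, 80, 443]))
```

Real scanners such as Nmap add service and version detection on top of this basic connect test, and use stealthier probe types (e.g. half-open SYN scans) that require raw sockets.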
Password cracking
Password cracking is the process of recovering passwords from data that has been stored in or transmitted by a computer system. A common approach is to repeatedly try guesses for the password.
Packet sniffer
A packet sniffer is an application that captures data packets, which can be used to capture passwords and other data in transit over the network.
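What a sniffer does with captured bytes can be illustrated by decoding an IPv4 header. The sample packet below is hand-built for the example, since capturing live traffic requires raw sockets and administrator privileges; the field layout follows the standard 20-byte IPv4 header.

```python
import ipaddress
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header a sniffer would capture."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": version_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": str(ipaddress.IPv4Address(src)),
        "dst": str(ipaddress.IPv4Address(dst)),
    }

# A hand-built sample header: version 4, TTL 64, protocol TCP.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([192, 168, 0, 2]), bytes([93, 184, 216, 34]))
print(parse_ipv4_header(sample))
```

A cleartext password travelling in the payload after such a header is directly readable by anyone positioned to capture the packet, which is why the attack works.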
Spoofing attack
A spoofing attack involves one program, system, or website successfully masquerading as another by falsifying data and thereby being treated as a trusted system by a user or another program. The purpose of this is usually to fool programs, systems, or users into revealing confidential information, such as user names and passwords, to the attacker.
Rootkit
A rootkit is designed to conceal the compromise of a computer's security, and can represent any of a set of programs which work to subvert control of an operating system from its legitimate operators. Usually, a rootkit will obscure its installation and attempt to prevent its removal through a subversion of standard system security. Rootkits may include replacements for system binaries so that it becomes impossible for the legitimate user to detect the presence of the intruder on the system by looking at process tables.
Social engineering
Social engineering is the art of getting people to reveal sensitive information about a system. This is usually done by impersonating someone, or by convincing people that you have permission to obtain such information.
Trojan horse
A Trojan horse is a program which seems to be doing one thing, but is actually doing another. A trojan horse can be used to set up a back door in a computer system such that the intruder can gain access later. (The name refers to the horse from the Trojan War, with conceptually similar function of deceiving defenders into bringing an intruder inside.)
Virus
A virus is a self-replicating program that spreads by inserting copies of itself into other executable code or documents. In this way a computer virus behaves similarly to a biological virus, which spreads by inserting itself into living cells. While some viruses are harmless or mere hoaxes, most are considered malicious.
Worm
Like a virus, a worm is also a self-replicating program. A worm differs from a virus in that it propagates through computer networks without user intervention. Unlike a virus, it does not need to attach itself to an existing program. Many people conflate the terms "virus" and "worm", using them both to describe any self-propagating program.
Key loggers
A keylogger is a tool designed to record ('log') every keystroke on an affected machine for later retrieval. Its purpose is usually to allow the user of this tool to gain access to confidential information typed on the affected machine, such as a user's password or other private data. Some key loggers use virus-, trojan-, and rootkit-like methods to remain active and hidden. However, some key loggers are used in legitimate ways, sometimes even to enhance computer security. For example, a business might have a key logger on a computer used at a point of sale, and data collected by the key logger could be used to catch employee fraud.
Eric Corley
Eric Corley (also known as Emmanuel Goldstein) is the long standing publisher of 2600: The Hacker Quarterly. He is also the founder of the H.O.P.E. conferences. He has been part of the hacker community since the late '70s.
Fyodor
Gordon Lyon, known by the handle Fyodor, authored the Nmap Security Scanner as well as many network security books and web sites. He is a founding member of the Honeynet Project and Vice President of Computer Professionals for Social Responsibility.
Solar Designer
Solar Designer is the pseudonym of the founder of the Openwall Project.
Michał Zalewski

Michał Zalewski (lcamtuf) is a prominent security researcher.
Gary McKinnon
Gary McKinnon is a British hacker facing extradition to the United States to face charges of perpetrating what has been described as the "biggest military computer hack of all time".[19]
Hackers in fiction
Hackers often show an interest in fictional cyberpunk and cyberculture literature and movies. Absorption of fictional pseudonyms, symbols, values, and metaphors from these works is very common.

Books portraying hackers:
The cyberpunk novels of William Gibson, especially the Sprawl trilogy, are very popular with hackers.[20]
Hackers (short stories)
Snow Crash
Helba from the .hack manga and anime series
Little Brother by Cory Doctorow
Rice Tea by Julien McArdle
Lisbeth Salander in Men Who Hate Women by Stieg Larsson
Non-fiction books
Hacking: The Art of Exploitation, Second Edition by Jon Erickson
The Hacker Crackdown
The Art of Intrusion by Kevin D. Mitnick
The Art of Deception by Kevin D. Mitnick
Takedown
The Hacker's Handbook
The Cuckoo's Egg by Clifford Stoll
Underground by Suelette Dreyfus
Fiction books
Ender's Game
Neuromancer
Evil Genius
See also
Wireless hacking
List of notable hackers
Cyber spying
Cyber Storm Exercise
References
Taylor, Paul A. (1999). Hackers. Routledge. ISBN 9780415180726.
[1] Sterling, Bruce (1993). "Part 2(d)". The Hacker Crackdown. McLean, Virginia: IndyPublish.com. p. 61. ISBN 1-4043-0641-2.
[2] Blomquist, Brian (May 29, 1999). "FBI's Web Site Socked as Hackers Target Feds" (http://archive.nypost.com/a/475198). New York Post. Retrieved on October 21, 2008.
[3] Raymond, Eric S. "Jargon File: Cracker" (http://catb.org/jargon/html/C/cracker.html). Retrieved 2010-05-08. "Coined ca. 1985 by hackers in defense against journalistic misuse of hacker."
[4] Moore, Robert (2006). Cybercrime: Investigating High-Technology Computer Crime. Cincinnati, Ohio: Anderson Publishing.
[5] Kizza, Joseph M. (2005). Computer Network Security. New York: Springer-Verlag.
[6] TAP Magazine Archive. http://servv89pn0aj.sn.sourcedns.com/~gbpprorg/2600/TAP/
[7] Tim Jordan, Paul A. Taylor (2004). Hacktivism and Cyberwars. Routledge. pp. 133-134. ISBN 9780415260039. "Wild West imagery has permeated discussions of cybercultures."
[8] Thomas, Douglas. Hacker Culture. University of Minnesota Press. p. 90. ISBN 9780816633463.
[9] Clifford, Ralph D. (2006). Cybercrime: The Investigation, Prosecution and Defense of a Computer-Related Crime, Second Edition. Durham, North Carolina: Carolina Academic Press.
[10] Wilhelm, Douglas. "2". Professional Penetration Testing. Syngress Press. p. 503. ISBN 9781597494250.
[11] http://www.eccouncil.org/
[12] http://www.microsoft.com/technet/security/bluehat/default.mspx
[13] "Blue hat hacker Definition" (http://www.pcmag.com/encyclopedia_term/0,2542,t=blue+hat+hacker&i=56321,00.asp). PC Magazine Encyclopedia. Retrieved 31 May 2010. "A security professional invited by Microsoft to find vulnerabilities in Windows."
[14] Fried, Ina (June 15, 2005). ""Blue Hat" summit meant to reveal ways of the other side" (http://news.cnet.com/Microsoft-meets-the-hackers/2009-1002_3-5747813.html). CNET News. Retrieved 31 May 2010.
[15] Markoff, John (October 17, 2005). "At Microsoft, Interlopers Sound Off on Security" (http://www.nytimes.com/2005/10/17/technology/17hackers.html?pagewanted=1&_r=1). New York Times. Retrieved 31 May 2010.
[16] Andress, Mandy; Cox, Phil; Tittel, Ed. CIW Security Professional. New York, NY: Hungry Minds, Inc. p. 10. ISBN 0764548220.
[17] Hacking approach (http://www.informit.com/articles/article.aspx?p=25916)
[18] United States Attorney's Office, Central District of California (9 August 1999). "Kevin Mitnick sentenced to nearly four years in prison; computer hacker ordered to pay restitution ..." (http://www.usdoj.gov/criminal/cybercrime/mitnick.htm). Press release. Retrieved 10 April 2010.
Related literature
Kevin Beaver. Hacking For Dummies.
Richard Conway & Julian Cordingley. Code Hacking: A Developer's Guide to Network Security.
Johanna Granville. "Dot.Con: The Dangers of Cyber Crime and a Call for Proactive Solutions" (http://www.scribd.com/doc/14361572/Dotcon-Dangers-of-Cybercrime-by-Johanna-Granville). Australian Journal of Politics and History, vol. 49, no. 1 (Winter 2003), pp. 102-109.
Katie Hafner & John Markoff (1991). Cyberpunk: Outlaws and Hackers on the Computer Frontier. Simon & Schuster. ISBN 0-671-68322-5.
David H. Freeman & Charles C. Mann (1997). @ Large: The Strange Case of the World's Biggest Internet Invasion. Simon & Schuster. ISBN 0-684-82464-7.
Suelette Dreyfus (1997). Underground: Tales of Hacking, Madness and Obsession on the Electronic Frontier. Mandarin. ISBN 1-86330-595-5.
Bill Apro & Graeme Hammond (2005). Hackers: The Hunt for Australia's Most Infamous Computer Cracker. Five Mile Press. ISBN 1-74124-722-5.
Stuart McClure, Joel Scambray & George Kurtz (1999). Hacking Exposed. McGraw-Hill. ISBN 0-07-212127-0.
Michael Gregg (2006). Certified Ethical Hacker. Pearson. ISBN 978-0789735317.
Clifford Stoll (1990). The Cuckoo's Egg. The Bodley Head Ltd. ISBN 0-370-31433-6.
External links
Timeline: A 40-year history of hacking from 1960 to 2001 (http://archives.cnn.com/2001/TECH/internet/11/19/hack.history.idg/), CNN Tech, PCWorld Staff (November 2001).
History of Hacking, Discovery Channel documentary (http://vodpod.com/watch/31369-discovery-channel-the-history-of-hacking-documentary).
Malware
Malware, short for malicious software, is software designed to infiltrate a computer system without the owner's informed consent. The expression is a general term used by computer professionals to mean a variety of forms of hostile, intrusive, or annoying software or program code.[1] The term "computer virus" is sometimes used as a catch-all phrase to include all types of malware, including true viruses.

Software is considered to be malware based on the perceived intent of the creator rather than any particular features. Malware includes computer viruses, worms, trojan horses, spyware, dishonest adware, crimeware, most rootkits, and other malicious and unwanted software. In law, malware is sometimes known as a computer contaminant, for instance in the legal codes of several U.S. states, including California and West Virginia.[2][3] Malware is not the same as defective software, that is, software that has a legitimate purpose but contains harmful bugs.

Preliminary results from Symantec published in 2008 suggested that "the release rate of malicious code and other unwanted programs may be exceeding that of legitimate software applications."[4] According to F-Secure, "As much malware [was] produced in 2007 as in the previous 20 years altogether."[5] Malware's most common pathway from criminals to users is the Internet: primarily e-mail and the World Wide Web.[6]

The prevalence of malware as a vehicle for organized Internet crime, along with the general inability of traditional anti-malware products to protect against the continuous stream of unique, newly produced malware, has driven a new mindset for businesses operating on the Internet: the acknowledgment that some sizable percentage of Internet customers will always be infected for one reason or another, and that businesses need to continue doing business with infected customers.
The result is a greater emphasis on back-office systems designed to spot fraudulent activities associated with advanced malware operating on customers' computers.[7] On March 29, 2010, Symantec Corporation named Shaoxing, China as the world's malware capital.[8] Sometimes malware is disguised as genuine software, and may even come from an official site. Some security programs, such as McAfee, therefore classify certain malware as "potentially unwanted programs" (PUPs).
Purposes
Many early infectious programs, including the first Internet Worm and a number of MS-DOS viruses, were written as experiments or pranks. They were generally intended to be harmless or merely annoying, rather than to cause serious damage to computer systems. In some cases, the perpetrators did not realize how much harm their creations would do. Young programmers learning about viruses and their techniques wrote them simply to prove that they could, or to see how far a virus could spread. As late as 1999, widespread viruses such as the Melissa virus appear to have been written chiefly as pranks.

Hostile intent related to vandalism can be found in programs designed to cause harm or data loss. Many DOS viruses, and the Windows ExploreZip worm, were designed to destroy files on a hard disk, or to corrupt the file system by writing invalid data. Network-borne worms such as the 2001 Code Red worm or the Ramen worm fall into the same category. Designed to vandalize web pages, such worms may seem like the online equivalent of graffiti tagging, with the author's alias or affinity group appearing everywhere the worm goes.

Since the rise of widespread broadband Internet access, malicious software has increasingly been designed for profit, for example forced advertising. Since 2003, the majority of widespread viruses and worms have been designed to take control of users' computers for black-market exploitation. Infected "zombie computers" are used to send email spam, to host contraband data such as child pornography,[9] or to engage in distributed denial-of-service attacks as a form of extortion.
Another strictly for-profit category of malware has emerged in spyware: programs designed to monitor users' web browsing, display unsolicited advertisements, or redirect affiliate marketing revenues to the spyware creator. Spyware programs do not spread like viruses; they are, in general, installed by exploiting security holes or are packaged with user-installed software, such as peer-to-peer applications.
Rootkits
Once a malicious program is installed on a system, it is essential that it stay concealed, to avoid detection and disinfection. The same is true when a human attacker breaks into a computer directly. Techniques known as rootkits allow this concealment, by modifying the host's operating system so that the malware is hidden from the user. Rootkits can prevent a malicious process from being visible in the system's list of processes, or keep its files from being read. Originally, a rootkit was a set of tools installed by a human attacker on a Unix system, allowing the attacker to gain administrator (root) access. Today, the term is used more generally for concealment routines in a malicious program.

Some malicious programs contain routines to defend against removal: not merely to hide themselves, but to repel attempts to remove them. An early example of this behavior is recorded in the Jargon File tale of a pair of programs infesting a Xerox CP-V time-sharing system: each ghost-job would detect the fact that the other had been killed, and would start a new copy of the recently slain program within a few milliseconds. The only way to kill both ghosts was to kill them simultaneously (very difficult) or to deliberately crash the system.[13] Similar techniques are used by some modern malware, wherein the malware starts a number of processes that monitor and restore one another as needed. Some malware also hides by naming an infected file to resemble a legitimate or trusted one (e.g. expl0rer.exe vs. explorer.exe).
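The lookalike-naming trick mentioned above can be illustrated with a small sketch. This is a hypothetical example, not a real anti-rootkit tool: it flags names that mimic a known system binary through common character substitutions (0 for o, 1 for l, and so on). The substitution table and the whitelist are illustrative assumptions.

```python
# Map common substitution characters back to the letters they imitate.
SUBSTITUTIONS = str.maketrans("0135$", "oless")

# Illustrative whitelist of genuine system binary names.
KNOWN_BINARIES = {"explorer.exe", "svchost.exe", "lsass.exe"}

def is_lookalike(name: str) -> bool:
    """True if `name` is not itself a known binary but normalizes
    to one after undoing the common substitutions."""
    lowered = name.lower()
    normalized = lowered.translate(SUBSTITUTIONS)
    return lowered not in KNOWN_BINARIES and normalized in KNOWN_BINARIES

print(is_lookalike("expl0rer.exe"))  # True: masquerades as explorer.exe
print(is_lookalike("explorer.exe"))  # False: the genuine name
print(is_lookalike("notepad.exe"))   # False: unrelated name
```

Real malware uses many other evasion tricks, so a check like this catches only the most naive cases; it is meant only to make the expl0rer.exe example concrete.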
Backdoors
A backdoor is a method of bypassing normal authentication procedures. Once a system has been compromised (by one of the above methods, or in some other way), one or more backdoors may be installed to allow easier access in the future. Backdoors may also be installed before malicious software, to allow attackers entry. It has often been suggested that computer manufacturers preinstall backdoors on their systems to provide technical support for customers, but this has never been reliably verified. Crackers typically use backdoors to secure remote access to a computer, while attempting to remain hidden from casual inspection. To install backdoors, crackers may use Trojan horses, worms, or other methods.
Data-stealing malware
Data-stealing malware is a web threat that divests victims of personal and proprietary information with the intent of monetizing stolen data through direct use or underground distribution. Content security threats that fall under this umbrella include keyloggers, screen scrapers, spyware, adware, backdoors, and bots. The term does not refer to activities such as spam, phishing, DNS poisoning, SEO abuse, etc. However, when these threats result in file download or direct installation, as most hybrid attacks do, files that act as agents to proxy information will fall into the data-stealing malware category.
Vulnerability to malware
In this context, as throughout, it should be borne in mind that the system under attack may be of various types, e.g. a single computer and operating system, a network, or an application. Various factors make a system more vulnerable to malware:

Homogeneity: when all computers in a network run the same OS, exploiting one can exploit them all.
Weight of numbers: simply because the vast majority of existing malware is written to attack Windows systems, Windows systems are, ipso facto, more likely to succumb to malware (regardless of the security strengths or weaknesses of Windows itself).
Defects: malware leveraging defects in the OS design.
Unconfirmed code: code from a floppy disk, CD-ROM, or USB device may be executed without the user's agreement.
Over-privileged users: some systems allow all users to modify their internal structures.
Over-privileged code: some systems allow code executed by a user to access all rights of that user.

An oft-cited cause of vulnerability of networks is homogeneity, or software monoculture.[18] For example, Microsoft Windows and Apple's Mac OS have such a large share of the market that concentrating on either could enable a cracker to subvert a large number of systems, but any total monoculture is a problem. Introducing inhomogeneity (diversity) purely for the sake of robustness could increase short-term costs for training and maintenance, but having a few diverse nodes would deter total shutdown of the network and allow those nodes to help with recovery of the infected nodes. Such separate, functional redundancy would avoid the cost of a total shutdown and would avoid the "all eggs in one basket" problem of homogeneity.

Most systems contain bugs, or loopholes, which may be exploited by malware. A typical example is the buffer-overrun weakness, in which an interface designed to store data in a small area of memory allows the caller to supply more data than will fit.
This extra data then overwrites structures past the end of the buffer, including the interface's own executable code or other data. In this manner, malware can force the system to execute malicious code, by replacing legitimate code with its own payload of instructions (or data values) copied into live memory outside the buffer area.

Originally, PCs had to be booted from floppy disks, and until recently it was common for removable media to be the default boot device. This meant that a corrupt floppy disk could subvert the computer during booting, and the same applies to CDs. Although that is now less common, it is still possible to forget that one has changed the default, and it is rare for a BIOS to require confirmation before booting from removable media.

In some systems, non-administrator users are over-privileged by design, in the sense that they are allowed to modify internal structures of the system. In some environments, users are over-privileged because they have been inappropriately granted administrator or equivalent status. This is primarily a configuration decision, but on Microsoft Windows systems the default configuration is to over-privilege the user. This situation exists because of decisions made by Microsoft to prioritize compatibility with older systems over security configuration in newer systems, and because typical applications were developed without under-privileged users in mind. As privilege escalation exploits have increased, this priority began shifting with the release of Microsoft Windows Vista. As a result, many existing applications that require excess privilege (over-privileged code) may have compatibility problems with Vista. However, Vista's User Account Control feature attempts to accommodate applications not designed for under-privileged users, acting as a crutch to resolve the privileged access problem inherent in legacy applications. Malware, running as over-privileged code, can use this privilege to subvert the system.
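The buffer-overrun weakness described above can be sketched as a toy simulation. This is ordinary Python, not real machine memory: an 8-byte buffer sits directly before a 4-byte "return address" slot in one flat array, and a copy routine that trusts the caller (in the spirit of C's strcpy) lets excess bytes spill over into the adjacent slot. The sizes and address values are made-up for illustration.

```python
BUF_SIZE = 8
memory = bytearray(BUF_SIZE + 4)                     # buffer + "return address" slot
memory[BUF_SIZE:] = (0x1000).to_bytes(4, "little")   # legitimate "return address"

def unchecked_copy(data: bytes) -> None:
    # No bounds check: writes len(data) bytes starting at the buffer.
    memory[0:len(data)] = data

# 12 bytes written into an 8-byte buffer: the last 4 bytes spill past
# the buffer and overwrite the "return address", which in a real
# attack would redirect control flow to attacker-chosen code.
unchecked_copy(b"A" * BUF_SIZE + (0x0BAD).to_bytes(4, "little"))

print(hex(int.from_bytes(memory[BUF_SIZE:], "little")))  # 0xbad, not 0x1000
```

A bounds-checked copy (refusing any payload longer than BUF_SIZE) would leave the adjacent slot untouched, which is exactly what safe string functions and memory-safe languages enforce.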
Almost all currently popular operating systems, and also many scripting applications, grant code too many privileges, usually in the sense that when a user executes code, the system grants that code all rights of that user. This makes users vulnerable to malware in the form of e-mail attachments, which may or may not be disguised.
Given this state of affairs, users are warned only to open attachments they trust, and to be wary of code received from untrusted sources. It is also common for operating systems to be designed so that device drivers need escalated privileges, even as drivers are supplied by more and more hardware manufacturers.
Anti-malware programs
As malware attacks become more frequent, attention has begun to shift from virus and spyware protection to malware protection generally, and programs have been developed specifically to combat malware. Anti-malware programs can combat malware in two ways:

1. They can provide real-time protection against the installation of malware on a computer. This type of protection works the same way as antivirus protection: the anti-malware software scans all incoming network data for malware and blocks any threats it encounters.

2. They can be used solely for detection and removal of malware that has already been installed on a computer. This type of protection is normally much easier to use and more popular. Such software scans the contents of the Windows registry, operating system files, and installed programs, and provides a list of any threats found, allowing the user to choose which files to delete or keep, or to compare the list against known malware components, removing files that match.

Real-time protection from malware works identically to real-time antivirus protection: the software scans disk files at download time and blocks the activity of components known to represent malware. In some cases, it may also intercept attempts to install start-up items or to modify browser settings. Because many malware components are installed as a result of browser exploits or user error, using security software (some of which is anti-malware, though much is not) to "sandbox" browsers can also be effective in helping to restrict any damage done.
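The second, on-demand mode described above can be sketched in a few lines: hash each file and compare the digest against a database of known malware signatures. The signature entry and file names here are made-up examples; real products use far richer signatures and heuristics, not just whole-file hashes.

```python
import hashlib
import tempfile
from pathlib import Path

# Hypothetical signature database: digest -> malware family name.
KNOWN_MALWARE = {
    hashlib.sha256(b"MALICIOUS-PAYLOAD").hexdigest(): "Example.TestWorm",
}

def scan(root: Path) -> list:
    """Return (path, family name) pairs for files matching a signature."""
    hits = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_MALWARE:
                hits.append((path, KNOWN_MALWARE[digest]))
    return hits

# Demonstration on a throwaway directory: one clean file, one "infected" file.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "clean.txt").write_bytes(b"hello")
    (root / "bad.exe").write_bytes(b"MALICIOUS-PAYLOAD")
    hits = scan(root)

print([(p.name, family) for p, family in hits])  # [('bad.exe', 'Example.TestWorm')]
```

Whole-file hashes are trivially defeated by changing a single byte, which is why real scanners add partial signatures, heuristics, and behavioral analysis on top of this basic lookup.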
Grayware
Grayware[25] (or greyware) is a general term sometimes used as a classification for applications that behave in a manner that is annoying or undesirable, and yet less serious or troublesome than malware.[26] Grayware encompasses spyware, adware, dialers, joke programs, remote access tools, and any other unwelcome files and programs, apart from viruses, that can harm the performance of computers on a network. The term has been in use since at least as early as September 2004.[27]

Grayware refers to applications or files that are not classified as viruses or trojan horse programs, but can still negatively affect the performance of the computers on a network and introduce significant security risks to an organization.[28] Often grayware performs a variety of undesired actions such as irritating users with pop-up windows, tracking user habits, and unnecessarily exposing computer vulnerabilities to attack.

Spyware is software that installs components on a computer for the purpose of recording Web surfing habits (primarily for marketing purposes). Spyware sends this information to its author or to other interested parties when the computer is online. Spyware often downloads with items identified as "free downloads" and does not notify the user of its existence or ask for permission to install the components. The information spyware components gather can include user keystrokes, which means that private information such as login names, passwords, and credit card numbers is vulnerable to theft.

Adware is software that displays advertising banners on Web browsers such as Internet Explorer and Mozilla Firefox. While not categorized as malware, many users consider adware invasive. Adware programs often create unwanted effects on a system, such as annoying pop-up ads and a general degradation in network connection or system performance. Adware programs are typically installed as separate programs that are bundled with certain free software.
Many users inadvertently agree to install adware by accepting the End User License Agreement (EULA) of the free software. Adware is also often installed in tandem with spyware programs. The two kinds of program feed off each other's functionality: spyware programs profile users' Internet behavior, while adware programs display targeted ads that correspond to the gathered user profile.
The World Wide Web is criminals' preferred pathway for spreading malware. Today's web threats use combinations of malware to create infection chains. About one in ten Web pages may contain malicious code.[30]
See also
Browser exploit
Category:Web security exploits
Computer crime
Computer insecurity
Cyber spying
Firewall (networking)
Malvertising
Privacy-invasive software
Privilege escalation
Security in Web applications
Targeted threat
Securelist.com
Web server overload causes
White collar crime
Economic and Industrial Espionage
External links
Malicious Software [35] at the Open Directory Project
US Department of Homeland Security Identity Theft Technology Council report "The Crimeware Landscape: Malware, Phishing, Identity Theft and Beyond" [36]
Video: Mark Russinovich - Advanced Malware Cleaning [37]
An analysis of targeted attacks using malware [38]
Malicious Programs Hit New High [39] (retrieved February 8, 2008)
Malware Block List [40]
Open Security Foundation Data Loss Database [41]
Internet Crime Complaint Center [42]
References
[1] "Defining Malware: FAQ" (http://technet.microsoft.com/en-us/library/dd632948.aspx). technet.microsoft.com. Retrieved 2009-09-10.
[2] National Conference of State Legislatures, Virus/Contaminant/Destructive Transmission Statutes by State (http://www.ncsl.org/programs/lis/cip/viruslaws.htm)
[3] jcots.state.va.us/2005%20Content/pdf/Computer%20Contamination%20Bill.pdf [18.2-152.4:1 Penalty for Computer Contamination]
[4] "Symantec Internet Security Threat Report: Trends for July-December 2007 (Executive Summary)" (http://eval.symantec.com/mktginfo/enterprise/white_papers/b-whitepaper_exec_summary_internet_security_threat_report_xiii_04-2008.en-us.pdf) (PDF). Symantec Corp. April 2008. p. 29. Retrieved 2008-05-11.
[5] F-Secure Corporation (December 4, 2007). "F-Secure Reports Amount of Malware Grew by 100% during 2007" (http://www.f-secure.com/f-secure/pressroom/news/fs_news_20071204_1_eng.html). Press release. Retrieved 2007-12-11.
[6] "F-Secure Quarterly Security Wrap-up for the first quarter of 2008" (http://www.f-secure.com/f-secure/pressroom/news/fsnews_20080331_1_eng.html). F-Secure. March 31, 2008. Retrieved 2008-04-25.
[7] "Continuing Business with Malware Infected Customers" (http://www.technicalinfo.net/papers/MalwareInfectedCustomers.html). Gunter Ollmann. October 2008.
[8] "Symantec names Shaoxing, China as world's malware capital" (http://www.engadget.com/2010/03/29/symantec-names-shaoxing-china-worlds-malware-capital). Engadget. Retrieved 2010-04-15.
[9] PC World: Zombie PCs: Silent, Growing Threat (http://www.pcworld.com/article/id,116841-page,1/article.html).
[10] Nick Farrell (20 February 2006). "Linux worm targets PHP flaw" (http://www.theregister.co.uk/2006/02/20/linux_worm/). The Register. Retrieved 19 May 2010.
[11] John Leyden (March 28, 2001). "Highly destructive Linux worm mutating" (http://www.theregister.co.uk/2001/03/28/highly_destructive_linux_worm_mutating/). The Register. Retrieved 19 May 2010.
[12] "Aggressive net bug makes history" (http://news.bbc.co.uk/2/hi/technology/2720337.stm). BBC News. February 3, 2003. Retrieved 19 May 2010.
[13] "Catb.org" (http://catb.org/jargon/html/meaning-of-hack.html). Catb.org. Retrieved 2010-04-15.
[14] "Gonzalez, Albert - Indictment 080508" (http://www.usdoj.gov/usao/ma/Press Office - Press Release Files/IDTheft/Gonzalez, Albert Indictment 080508.pdf). US Department of Justice Press Office. pp. 0118. Retrieved 2010-.
[15] Keizer, Gregg (2007). Monster.com data theft may be bigger (http://www.pcworld.com/article/136154/monstercom_data_theft_may_be_bigger.html)
[16] Vijayan, Jaikumar (2008). Hannaford hit by class-action lawsuits in wake of data breach disclosure (http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9070281)
[17] BBC News: Trojan virus steals banking info (http://news.bbc.co.uk/2/hi/technology/7701227.stm)
[18] "LNCS 3786 - Key Factors Influencing Worm Infection", U. Kanlayasiri, 2006, web (PDF): SL40-PDF (http://www.springerlink.com/index/3x8582h43ww06440.pdf).
[19] John von Neumann, "Theory of Self-Reproducing Automata", Part 1: Transcripts of lectures given at the University of Illinois, December 1949, Editor: A. W. Burks, University of Illinois, USA, 1966.
[20] Fred Cohen, "Computer Viruses", PhD Thesis, University of Southern California, ASP Press, 1988.
[21] L. M. Adleman, "An Abstract Theory of Computer Viruses", Advances in Cryptology - Crypto '88, LNCS 403, pp. 354-374, 1988.
[22] A. Young, M. Yung, "Cryptovirology: Extortion-Based Security Threats and Countermeasures", IEEE Symposium on Security & Privacy, pp. 129-141, 1996.
[23] H. Toyoizumi, A. Kara. Predators: Good Will Mobile Codes Combat against Computer Viruses. Proc. of the 2002 New Security Paradigms Workshop, 2002.
[24] Zakiya M. Tamimi, Javed I. Khan, Model-Based Analysis of Two Fighting Worms (http://www.medianet.kent.edu/publications/ICCCE06DL-2virusabstract-TK.pdf), IEEE/IIU Proc. of ICCCE '06, Kuala Lumpur, Malaysia, May 2006, Vol-I, pp. 157-163.
[25] "Other meanings" (http://mpc.byu.edu/Exhibitions/Of Earth Stone and Corn/Activities/Native American Pottery.dhtml). Retrieved 2007-01-20. The term "grayware" is also used to describe a kind of Native American pottery and has also been used by some working in computer technology as slang for the human brain. "grayware definition" (http://www.techweb.com/encyclopedia/defineterm.jhtml?term=grayware). TechWeb.com. Retrieved 2007-01-02.
[26] "Greyware" (http://webopedia.com/TERM/g/greyware.html). What is greyware? - A word definition from the Webopedia Computer Dictionary. Retrieved 2006-06-05.
[27] Antony Savvas. "The network clampdown" (http://www.computerweekly.com/Articles/2004/09/28/205554/the-network-clampdown.htm). Computer Weekly. Retrieved 2007-01-20.
[28] "Fortinet WhitePaper: Protecting networks against spyware, adware and other forms of grayware" (http://www.boll.ch/fortinet/assets/Grayware.pdf) (PDF). Retrieved 2007-01-20.
[29] Zittrain, Jonathan (Mike Deehan, producer). (2008-04-17). Berkman Book Release: The Future of the Internet And How to Stop It (http://cyber.law.harvard.edu/interactive/events/2008/04/zittrain). [video/audio]. Cambridge, MA, USA: Berkman Center, The President and Fellows of Harvard College. Retrieved 2008-04-21.
[30] "Google searches web's dark side" (http://news.bbc.co.uk/2/hi/technology/6645895.stm). BBC News. May 11, 2007. Retrieved 2008-04-26.
[31] Sharon Khare. "Wikipedia Hijacked to Spread Malware" (http://www.tech2.com/india/news/telecom/wikipedia-hijacked-to-spread-malware/2667/0). India: Tech2.com. Retrieved 2010-04-15.
[32] "Attacks against Wordpress" (http://blog.sucuri.net/2010/05/new-attack-today-against-wordpress.html). Sucuri Security. May 11, 2010. Retrieved 2010-04-26.
[33] "Protecting Corporate Assets from E-mail Crimeware", Avinti, Inc., p. 1 (http://www.avinti.com/download/market_background/whitepaper_email_crimeware_protection.pdf)
[34] F-Secure (March 31, 2008). "F-Secure Quarterly Security Wrap-up for the first quarter of 2008" (http://www.f-secure.com/f-secure/pressroom/news/fsnews_20080331_1_eng.html). Press release. Retrieved 2008-03-31.
[35] http://www.dmoz.org/Computers/Security/Malicious_Software/
[36] http://www.antiphishing.org/reports/APWG_CrimewareReport.pdf
[37] http://www.microsoft.com/emea/itsshowtime/sessionh.aspx?videoid=359
[38] http://www.daemon.be/maarten/targetedattacks.html
[39] http://news.bbc.co.uk/2/hi/technology/7232752.stm
[40] http://www.malware.com.br
[41] http://datalossdb.org/
[42] http://www.ic3.gov/default.aspx/
Vulnerability
For other uses of the word "vulnerability", please refer to vulnerability (computing); you may also want to refer to natural disaster.

Vulnerability is susceptibility to physical or emotional injury or attack. It also means having one's guard down, being open to censure or criticism. Vulnerability refers to a person's state of being liable to succumb, as to manipulation, persuasion, or temptation. A window of vulnerability (sometimes abbreviated to WoV) is a time frame within which defensive measures are reduced, compromised, or lacking.
Common applications
In relation to hazards and disasters, vulnerability is a concept that links the relationship that people have with their environment to social forces and institutions and the cultural values that sustain and contest them. The concept of vulnerability expresses the multidimensionality of disasters by focusing attention on the totality of relationships in a given social situation which constitute a condition that, in combination with environmental forces, produces a disaster (Bankoff et al. 2004: 11). It is also the extent to which changes could harm a system; in other words, the extent to which a community can be affected by the impact of a hazard. In the context of global warming, vulnerability is the degree to which a system is susceptible to, or unable to cope with, adverse effects of climate change, including climate variability and extremes.[1]
Emerging research
Vulnerability research covers a complex, multidisciplinary field including development and poverty studies, public health, climate studies, security studies, engineering, geography, political ecology, and disaster and risk management. This research is of particular importance and interest for organizations trying to reduce vulnerability, especially as related to poverty and other Millennium Development Goals. Many institutions are conducting interdisciplinary research on vulnerability. A forum that brings many of the current researchers on vulnerability together is the Expert Working Group (EWG). Researchers are currently working to refine definitions of vulnerability, measurement and assessment methods, and effective communication of research to decision makers (Birkmann et al. 2006).
Military vulnerability
In military circles, vulnerability is a subset of survivability (the others being susceptibility and recoverability). Vulnerability is defined in various ways depending on the nation and service arm concerned, but in general it refers to the near-instantaneous effects of a weapon attack. In some definitions, recoverability (damage control, firefighting, restoration of capability) is included in vulnerability. A discussion of warship vulnerability can be found at [2].
Invulnerability
Invulnerability is a common feature found in video games. It makes the player impervious to pain, damage or loss of health. It can be found in the form of "power-ups" or cheats. Generally, it does not protect the player from certain instant-death hazards, most notably "bottomless" pits from which, even if the player were to survive the fall, they would be unable to escape. As a rule, invulnerability granted by power-ups is temporary, and wears off after a set amount of time, while invulnerability cheats, once activated, remain in effect until deactivated, or the end of the level is reached. Depending on the game in question, invulnerability to damage may or may not protect the player from non-damage effects, such as being immobilized or sent flying. In comic books, some superheroes are considered invulnerable, though this usually only applies up to a certain level. (e.g. Superman is invulnerable to physical attacks from normal people but not to the extremely powerful attacks of Doomsday).
See also
Vulnerability in computing Social vulnerability
References
[1] Glossary Climate Change (http://www.global-greenhouse-warming.com/glossary-climate-change.html)
[2] Warship Vulnerability (http://www.ausairpower.net/Warship-Hits.html)
Bankoff, Greg, George Frerks and Dorothea Hilhorst. 2004. Mapping Vulnerability. Sterling: Earthscan.
Birkmann, Joern (editor). 2006. Measuring Vulnerability to Natural Hazards: Towards Disaster Resilient Societies. UNU Press.
Thywissen, Katharina. 2006. "Components of Risk: A comparative glossary." SOURCE No. 2/2006. Bonn, Germany.
Villagran, Juan Carlos. 2006. "Vulnerability: A conceptual and methodological review." SOURCE No. 2/2006. Bonn, Germany.
External links
Vulnerable site reporter (http://bugtraq.byethost22.com)
United Nations University Institute of Environment and Human Security (http://www.ehs.unu.edu)
MunichRe Foundation (http://www.munichre-foundation.org)
Community based vulnerability mapping in Búzi, Mozambique (GIS and Remote Sensing) (http://projects.stefankienberger.at/vulmoz/)
Satellite Vulnerability (http://www.fas.org/spp/eprint/at_sp.htm)
Top Computer Vulnerabilities (http://www.sans.org/top20/?utm_source=web-sans&utm_medium=text-ad&utm_content=Free_Resources_Homepage_top20_free_rsrcs_homepage&utm_campaign=Top_20&ref=27974)
Computer virus
A computer virus is a computer program that can copy itself[1] and infect a computer. The term "virus" is also commonly but erroneously used to refer to other types of malware, including but not limited to adware and spyware programs that do not have the reproductive ability. A true virus can spread from one computer to another (in some form of executable code) when its host is taken to the target computer; for instance, because a user sent it over a network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive.[2] Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by another computer.[3][4]

As stated above, the term "computer virus" is sometimes used as a catch-all phrase to include all types of malware, even those that do not have the reproductive ability. Malware includes computer viruses, computer worms, Trojan horses, most rootkits, spyware, dishonest adware and other malicious and unwanted software, including true viruses. Viruses are sometimes confused with worms and Trojan horses, which are technically different. A worm can exploit security vulnerabilities to spread itself automatically to other computers through networks, while a Trojan horse is a program that appears harmless but hides malicious functions. Worms and Trojan horses, like viruses, may harm a computer system's data or performance.

Some viruses and other malware have symptoms noticeable to the computer user, but many are surreptitious or simply do nothing to call attention to themselves. Some viruses do nothing beyond reproducing themselves.
History
Academic work
The first academic work on the theory of computer viruses (although the term "computer virus" had not yet been coined) was done in 1949 by John von Neumann, who gave lectures at the University of Illinois about the "Theory and Organization of Complicated Automata". This work was later published as the "Theory of self-reproducing automata".[5] In his essay von Neumann postulated that a computer program could reproduce. In 1972 Veith Risak published his article "Selbstreproduzierende Automaten mit minimaler Informationsübertragung" (Self-reproducing automata with minimal information exchange).[6] The article describes a fully functional virus written in assembler language for a SIEMENS 4004/35 computer system. In 1980 Jürgen Kraus wrote his diploma thesis "Selbstreproduktion bei Programmen" (Self-reproduction of programs) at the University of Dortmund.[7] In his work Kraus postulated that computer programs can behave in a way similar to biological viruses. In 1984 Fred Cohen of the University of Southern California wrote his paper "Computer Viruses - Theory and Experiments".[8] It was the first paper to explicitly call a self-reproducing program a "virus", a term introduced by
his mentor Leonard Adleman. An article that describes "useful virus functionalities" was published by J.B. Gunn under the title "Use of virus functions to provide a virtual APL interpreter under user control" in 1984.[9]
Science Fiction
The Terminal Man, a science fiction novel by Michael Crichton (1972), told (as a sideline story) of a computer with telephone modem dialing capability, which had been programmed to randomly dial phone numbers until it hit a modem answered by another computer. It then attempted to program the answering computer with its own program, so that the second computer would also begin dialing random numbers in search of yet another computer to program. The program was assumed to spread exponentially through susceptible computers. The actual term "virus" was first used in David Gerrold's 1972 novel, When HARLIE Was One. In that novel, a sentient computer named HARLIE writes viral software to retrieve damaging personal information from other computers to blackmail the man who wants to turn him off.
Virus programs
The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s.[10] Creeper was an experimental self-replicating program written by Bob Thomas at BBN Technologies in 1971.[11] Creeper used the ARPANET to infect DEC PDP-10 computers running the TENEX operating system.[12] Creeper gained access via the ARPANET and copied itself to the remote system, where the message "I'm the creeper, catch me if you can!" was displayed. The Reaper program was created to delete Creeper.[13] A program called "Elk Cloner" was the first computer virus to appear "in the wild", that is, outside the single computer or lab where it was created.[14] Written in 1981 by Richard Skrenta, it attached itself to the Apple DOS 3.3 operating system and spread via floppy disk.[14] [15] This virus, created as a practical joke when Skrenta was still in high school, was injected into a game on a floppy disk. On its 50th use the Elk Cloner virus would be activated, infecting the computer and displaying a short poem beginning "Elk Cloner: The program with a personality." The first PC virus in the wild was a boot sector virus dubbed (c)Brain,[16] created in 1986 by the Farooq Alvi Brothers in Lahore, Pakistan, reportedly to deter piracy of the software they had written.[17] However, analysts have claimed that the Ashar virus, a variant of Brain, possibly predated it based on code within the virus. Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently; PCs of the era would attempt to boot first from a floppy if one had been left in the drive.
Until floppy disks fell out of use, this was the most successful infection strategy, and boot sector viruses were the most common in the wild for many years.[1] Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase in BBS and modem use and software sharing. Bulletin board-driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSs. Macro viruses have become common since the mid-1990s. Most of these viruses are written in the scripting languages for Microsoft programs such as Word and Excel and spread throughout Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected e-mail, those that did took advantage of the Microsoft Outlook COM interface.
Some old versions of Microsoft Word allow macros to replicate themselves with additional blank lines. If two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus distinct from its "parents".[18] A virus may also send a web address link as an instant message to all the contacts on an infected machine. If the recipient, thinking the link is from a friend (a trusted source), follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating. Viruses that spread using cross-site scripting were first reported in 2002,[19] and were academically demonstrated in 2005.[20] There have been multiple instances of cross-site scripting viruses in the wild, exploiting websites such as MySpace and Yahoo.
Infection strategies
In order to replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs. If a user attempts to launch an infected program, the virus' code may be executed simultaneously. Viruses can be divided into two types based on their behavior when they are executed. Nonresident viruses immediately search for other hosts that can be infected, infect those targets, and finally transfer control to the application program they infected. Resident viruses do not search for hosts when they are started. Instead, a resident virus loads itself into memory on execution and transfers control to the host program. The virus stays active in the background and infects new hosts when those files are accessed by other programs or the operating system itself.
Nonresident viruses
Nonresident viruses can be thought of as consisting of a finder module and a replication module. The finder module is responsible for finding new files to infect. For each new executable file the finder module encounters, it calls the replication module to infect that file.
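The finder/replicator split can be modeled harmlessly in a few lines. The sketch below operates on an in-memory dictionary standing in for a file system; the marker string and file names are invented purely for illustration, and no real files are touched:

```python
MARKER = "INFECTED:"  # stands in for the viral code prepended to a host

def find_targets(filesystem):
    """Finder module: locate executables that do not yet carry the marker."""
    return [name for name, body in filesystem.items()
            if name.endswith(".exe") and not body.startswith(MARKER)]

def replicate_into(filesystem, name):
    """Replication module: copy the marker into one target."""
    filesystem[name] = MARKER + filesystem[name]

def run_nonresident(filesystem):
    """A nonresident virus infects every target it finds immediately,
    then hands control back to its host program."""
    for name in find_targets(filesystem):
        replicate_into(filesystem, name)

fs = {"game.exe": "play", "editor.exe": "edit", "notes.txt": "text"}
run_nonresident(fs)
# Both executables now carry the marker; the text file is untouched.
```

In this toy model, a resident virus would instead hook the same replication logic into the operating system's file-access path, so that `replicate_into` runs whenever a file is opened rather than all at once.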
Resident viruses
Resident viruses contain a replication module that is similar to the one that is employed by nonresident viruses. This module, however, is not called by a finder module. The virus loads the replication module into memory when it is executed instead and ensures that this module is executed each time the operating system is called to perform a certain operation. The replication module can be called, for example, each time the operating system executes a file. In this case the virus infects every suitable program that is executed on the computer. Resident viruses are sometimes subdivided into a category of fast infectors and a category of slow infectors. Fast infectors are designed to infect as many files as possible. A fast infector, for instance, can infect every potential host file that is accessed. This poses a special problem when using anti-virus software, since a virus scanner will access every potential host file on a computer when it performs a system-wide scan. If the virus scanner fails to notice that such a virus is present in memory the virus can "piggy-back" on the virus scanner and in this way infect all files that are scanned. Fast infectors rely on their fast infection rate to spread. The disadvantage of this method is that infecting many files may make detection more likely, because the virus may slow down a computer or perform many suspicious actions that can be noticed by anti-virus software. Slow infectors, on the other hand, are designed to infect hosts infrequently. Some slow infectors, for instance, only infect files when they are copied. Slow infectors are designed to avoid detection by limiting their actions: they are less likely to slow down a computer noticeably and will, at most, infrequently trigger anti-virus software that detects suspicious behavior by programs. The slow infector approach, however, does not seem very successful.
Stealth
Some viruses try to trick antivirus software by intercepting its requests to the operating system. A virus can hide itself by intercepting the antivirus software's request to read a file and handling the request itself instead of passing it to the OS. The virus can then return an uninfected version of the file to the antivirus software, so that the file appears "clean". Modern antivirus software employs various techniques to counter the stealth mechanisms of viruses. The only completely reliable method to avoid stealth is to boot from a medium that is known to be clean.

Self-modification

Most modern antivirus programs try to find virus patterns inside ordinary programs by scanning them for so-called virus signatures. A signature is a characteristic byte pattern that is part of a certain virus or family of viruses. If a virus scanner finds such a pattern in a file, it notifies the user that the file is infected. The user can then delete, or (in some cases) "clean" or "heal" the infected file. Some viruses employ techniques that make detection by means of signatures difficult but not impossible: they modify their code on each infection, so that each infected file contains a different variant of the virus.

Encryption with a variable key

A more advanced method is the use of simple encryption to encipher the virus. In this case, the virus consists of a small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for each infected file, the only part of the virus that remains constant is the decrypting module, which would (for example) be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it can still detect the decrypting module, which still makes indirect detection of the virus possible.
Since these would be symmetric keys, stored on the infected host, it is in fact entirely possible to decrypt the final virus, but this is probably not required, since self-modifying code is such a rarity that its presence may be reason enough for virus scanners to at least flag the file as suspicious. An old but compact encryption method involves XORing each byte in a virus with a constant, so that the exclusive-or operation has only to be repeated for decryption. Because it is suspicious for code to modify itself, the code that performs the encryption and decryption may itself be part of the signature in many virus definitions.

Polymorphic code

Polymorphic code was the first technique that posed a serious threat to virus scanners. Just like regular encrypted viruses, a polymorphic virus infects files with an encrypted copy of itself, which is decoded by a decryption module. In the case of polymorphic viruses, however, this decryption module is also modified on each infection. A well-written polymorphic virus therefore has no parts which remain identical between infections, making it very difficult to detect directly using signatures. Antivirus software can detect it by decrypting the virus using an emulator, or by statistical pattern analysis of the encrypted virus body. To enable polymorphic code, the virus has to have a polymorphic engine (also called a mutating engine or mutation engine) somewhere in its encrypted body. See Polymorphic code for technical detail on how such engines operate.[21] Some viruses employ polymorphic code in a way that constrains the mutation rate of the virus significantly. For example, a virus can be programmed to mutate only slightly over time, or it can be programmed to refrain from mutating when it infects a file on a computer that already contains copies of the virus. The advantage of using such slow polymorphic code is that it makes it more difficult for antivirus professionals to obtain representative samples of the virus, because bait files that are infected in one run will typically contain identical or similar samples of the virus. This makes it more likely that detection by the virus scanner will be unreliable, and that some instances of the virus may be able to avoid detection.
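The constant-key XOR scheme mentioned above can be sketched in a few lines; the key value here is arbitrary and chosen only for illustration:

```python
def xor_bytes(data: bytes, key: int = 0x5A) -> bytes:
    """XOR every byte with a constant key; running it twice restores the input."""
    return bytes(b ^ key for b in data)

plain = b"example payload"
cipher = xor_bytes(plain)

# Exclusive-or is its own inverse, so the same routine serves as
# both the "encryptor" and the "decryptor".
assert cipher != plain
assert xor_bytes(cipher) == plain
```

This symmetry is why the decrypting module can stay so small, and also why constant-key XOR offers essentially no protection against analysis once the scheme is recognized.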
Metamorphic code

To avoid being detected by emulation, some viruses rewrite themselves completely each time they are to infect new executables. Viruses that utilize this technique are said to be metamorphic. To enable metamorphism, a metamorphic engine is needed. A metamorphic virus is usually very large and complex. For example, W32/Simile consisted of over 14,000 lines of assembly language code, 90% of which is part of the metamorphic engine.[22] [23]
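Signature scanning, described under "Self-modification" above, amounts to searching files for known byte patterns. A minimal sketch follows; the signature names and byte patterns are invented for illustration and are not taken from any real signature database:

```python
# Hypothetical signature database: name -> characteristic byte pattern.
SIGNATURES = {
    "Example.A": bytes.fromhex("deadbeef4d5a"),
    "Example.B": b"\x90\x90\xcc\xcc",
}

def scan_bytes(data: bytes) -> list[str]:
    """Return the names of all signatures whose pattern occurs in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_file(path: str) -> list[str]:
    """Scan one file. Real scanners also unpack archives, emulate code,
    and use far more robust matching than a plain substring search."""
    with open(path, "rb") as f:
        return scan_bytes(f.read())
```

This naive substring search is exactly what the self-modifying, encrypted, and polymorphic techniques described in this section are designed to defeat: once each infected copy carries different bytes, no single fixed pattern matches them all.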
A six-month Google AdWords advertising campaign run by security researcher Didier Stevens said "Is your PC virus-free? Get it infected here!". The result was 409 clicks.[24] [25] As of 2006, there are relatively few security exploits targeting Mac OS X (with a Unix-based file system and kernel).[26] The number of viruses for the older Apple operating systems, known as Mac OS Classic, varies greatly from source to source, with Apple stating that there are only four known viruses, and independent sources stating there are as many as 63. Many Mac OS Classic viruses targeted the HyperCard authoring environment. The difference in virus vulnerability between Macs and Windows is a chief selling point, one that Apple uses in its Get a Mac advertising.[27] In January 2009, Symantec announced the discovery of a Trojan that targets Macs.[28] This discovery did not gain much coverage until April 2009.[28] While Linux, and Unix in general, has always natively prevented normal users from making changes to the operating system environment, Windows users are generally not prevented from making such changes. This difference has persisted partly due to the widespread use of administrator accounts in contemporary versions such as XP. In 1997, when a Linux virus known as "Bliss" was released, leading antivirus vendors issued warnings that Unix-like systems could fall prey to viruses just like Windows.[29] The Bliss virus may be considered characteristic of viruses, as opposed to worms, on Unix systems. Bliss requires that the user run it explicitly, and it can only infect programs that the user has access to modify. Unlike Windows users, most Unix users do not log in as an administrator except to install or configure software; as a result, even if a user ran the virus, it could not harm their operating system. The Bliss virus never became widespread, and remains chiefly a research curiosity. Its creator later posted the source code to Usenet, allowing researchers to see how it worked.[30]
Recovery methods
Once a computer has been compromised by a virus, it is usually unsafe to continue using it without completely reinstalling the operating system. However, a number of recovery options exist after a computer has a virus; the appropriate action depends on the severity of the virus.

Virus removal

One possibility on Windows Me, Windows XP, Windows Vista and Windows 7 is a tool known as System Restore, which restores the registry and critical system files to a previous checkpoint. Often a virus will cause a system to hang, and a subsequent hard reboot will render a system restore point from the same day corrupt. Restore points from previous days should work, provided the virus is not designed to corrupt the restore files and does not also exist in previous restore points.[33] Some viruses, however, disable System Restore and other important tools such as Task Manager and Command Prompt. An example of a virus that does this is CiaDoor. Many such viruses can be removed by rebooting the computer, entering Windows Safe Mode, and then using system tools. Administrators have the option to disable such tools for limited users for various reasons (for example, to reduce potential damage from and the spread of viruses). A virus can modify the registry to do the same even when the administrator is controlling the computer; it blocks all users, including the administrator, from accessing the tools. The message "Task Manager has been disabled by your administrator" may be displayed, even to the administrator. Users running a Microsoft operating system can access Microsoft's website to run a free scan, provided they have their 20-digit registration number. Many websites run by anti-virus software companies provide free online virus scanning, with limited cleaning facilities (the purpose of the sites is to sell anti-virus products). Some websites allow a single suspicious file to be checked by many antivirus programs in one operation.
Operating system reinstallation

Reinstalling the operating system is another approach to virus removal. It involves either reformatting the computer's hard drive and installing the OS and all programs from original media, or restoring the entire partition from a clean backup image. User data can be restored by booting from a Live CD, or by putting the hard drive into another computer and booting from that computer's operating system, taking great care not to infect the second computer by executing any infected programs on the original drive. Once the system has been restored, precautions must be taken to avoid reinfection from any restored executable files. These methods are simple to do, may be faster than disinfecting a computer, and are guaranteed to remove any malware. If the operating system and programs must be reinstalled from scratch, the time and effort to reinstall, reconfigure, and restore user preferences must be taken into account. Restoring from an image is much faster, entirely safe, and restores the exact configuration to the state it was in when the image was made, with no further trouble.
See also
Adware
Antivirus software
Computer insecurity
Computer worm
Crimeware
Cryptovirology
Linux malware
List of computer viruses
List of computer viruses (all)
Malware
Mobile viruses
Multipartite virus
Spam
Spyware
Trojan horse (computing)
Virus hoax
Further reading
Mark Russinovich, Advanced Malware Cleaning video [37], Microsoft TechEd: IT Forum, November 2006
Szor, Peter (2005). The Art of Computer Virus Research and Defense. Boston: Addison-Wesley. ISBN 0321304543.
Parikka, Jussi (2007). Digital Contagions: A Media Archaeology of Computer Viruses. New York: Peter Lang. Digital Formations series. ISBN 978-0-8204-8837-0.
Burger, Ralf (1991). Computer Viruses and Data Protection.
Ludwig, Mark (1996). The Little Black Book of Computer Viruses [34]
Ludwig, Mark (1995). The Giant Black Book of Computer Viruses [35]
Ludwig, Mark (1993). Computer Viruses, Artificial Life and Evolution [36]
External links
Viruses [37] at the Open Directory Project
US Govt CERT (Computer Emergency Readiness Team) site [38]
'Computer Viruses - Theory and Experiments' - the original paper published on the topic [39]
How Computer Viruses Work [40]
A Brief History of PC Viruses [41] (early) by Dr. Alan Solomon
Are 'Good' Computer Viruses Still a Bad Idea? [42]
Protecting your Email from Viruses and Other MalWare [43]
Hacking Away at the Counterculture [44] by Andrew Ross
A Virus in Info-Space [45] by Tony Sampson
Dr Aycock's Bad Idea [46] by Tony Sampson
Digital Monsters, Binary Aliens [47] by Jussi Parikka
The Universal Viral Machine [48] by Jussi Parikka
Hypervirus: A Clinical Report [49] by Thierry Bardini
The Cross-site Scripting Virus [50]
The Virus Underground [51]
Microsoft conferences about IT Security - videos on demand [52] (Video)
References
[1] Dr. Solomon's Virus Encyclopedia, 1995, ISBN 1897661002, Abstract at http://vx.netlux.org/lib/aas10.html
[2] Jussi Parikka (2007) "Digital Contagions. A Media Archaeology of Computer Viruses", Peter Lang: New York. Digital Formations series. ISBN 978-0-8204-8837-0, p. 19
[3] http://www.bartleby.com/61/97/C0539700.html
[4] http://www.actlab.utexas.edu/~aviva/compsec/virus/whatis.html
[5] von Neumann, John (1966). "Theory of Self-Reproducing Automata" (http://cba.mit.edu/events/03.11.ASE/docs/VonNeumann.pdf). Essays on Cellular Automata (University of Illinois Press): 66-87. Retrieved June 10, 2010.
[6] Risak, Veith (1972), "Selbstreproduzierende Automaten mit minimaler Informationsübertragung" (http://www.cosy.sbg.ac.at/~risak/bilder/selbstrep.html), Zeitschrift für Maschinenbau und Elektrotechnik
[7] Kraus, Jürgen (February 1980), Selbstreproduktion bei Programmen (http://vx.netlux.org/lib/pdf/Selbstreproduktion bei programmen.pdf)
[8] Cohen, Fred (1984), Computer Viruses - Theory and Experiments (http://all.net/books/virus/index.html)
[9] Gunn, J.B. (June 1984). "Use of virus functions to provide a virtual APL interpreter under user control" (http://portal.acm.org/ft_gateway.cfm?id=801093&type=pdf&coll=GUIDE&dl=GUIDE&CFID=93800866&CFTOKEN=49244432). ACM SIGAPL APL Quote Quad (ACM New York, NY, USA) 14 (4): 163-168. ISSN 0163-6006.
[10] "Virus list" (http://www.viruslist.com/en/viruses/encyclopedia?chapter=153310937). Retrieved 2008-02-07.
[11] Thomas Chen, Jean-Marc Robert (2004). "The Evolution of Viruses and Worms" (http://vx.netlux.org/lib/atc01.html). Retrieved 2009-02-16.
[12] Jussi Parikka (2007) "Digital Contagions. A Media Archaeology of Computer Viruses", Peter Lang: New York. Digital Formations series. ISBN 978-0-8204-8837-0, p. 50
[13] See page 86 of Computer Security Basics (http://books.google.co.uk/books?id=BtB1aBmLuLEC&printsec=frontcover&source=gbs_summary_r&cad=0#PPA86,M1) by Deborah Russell and G. T. Gangemi. O'Reilly, 1991. ISBN 0937175714
[14] Anick Jesdanun (1 September 2007). "School prank starts 25 years of security woes" (http://www.cnbc.com/id/20534084/). CNBC. Retrieved 2010-01-07.
[15] "The anniversary of a nuisance" (http://www.cnn.com/2007/TECH/09/03/computer.virus.ap/).
[16] Boot sector virus repair (http://antivirus.about.com/od/securitytips/a/bootsectorvirus.htm)
[17] http://www.youtube.com/watch?v=m58MqJdWgDc
[18] Vesselin Bontchev. "Macro Virus Identification Problems" (http://www.people.frisk-software.com/~bontchev/papers/macidpro.html). FRISK Software International.
[19] Berend-Jan Wever. "XSS bug in hotmail login page" (http://seclists.org/bugtraq/2002/Oct/119).
[20] Wade Alcorn. "The Cross-site Scripting Virus" (http://www.bindshell.net/papers/xssv/).
[21] http://www.virusbtn.com/resources/glossary/polymorphic_virus.xml
[22] Perriot, Fredrick; Peter Ferrie and Peter Szor (May 2002). "Striking Similarities" (http://securityresponse.symantec.com/avcenter/reference/simile.pdf) (PDF). Retrieved September 9, 2007.
[23] http://www.virusbtn.com/resources/glossary/metamorphic_virus.xml
[24] Need a computer virus? - download now (http://www.infoniac.com/offbeat-news/computervirus.html)
[25] http://blog.didierstevens.com/2007/05/07/is-your-pc-virus-free-get-it-infected-here/
[26] "Malware Evolution: Mac OS X Vulnerabilities 2005-2006" (http://www.viruslist.com/en/analysis?pubid=191968025). Kaspersky Lab. 2006-07-24. Retrieved August 19, 2006.
[27] Apple - Get a Mac (http://www.apple.com/getamac)
[28] Sutter, John D. (22 April 2009). "Experts: Malicious program targets Macs" (http://www.cnn.com/2009/TECH/04/22/first.mac.botnet/index.html). CNN.com. Retrieved 24 April 2009.
[29] McAfee. "McAfee discovers first Linux virus" (http://math-www.uni-paderborn.de/~axel/bliss/mcafee_press.html).
[30] Axel Boldt. "Bliss, a Linux "virus"" (http://math-www.uni-paderborn.de/~axel/bliss/).
[31] "Symantec Security Summary - W32.Gammima.AG." http://www.symantec.com/security_response/writeup.jsp?docid=2007-082706-1742-99
[32] "Yahoo Tech: Viruses! In! Space!" http://tech.yahoo.com/blogs/null/103826
[33] "Symantec Security Summary - W32.Gammima.AG and removal details." http://www.symantec.com/security_response/writeup.jsp?docid=2007-082706-1742-99&tabid=3
[34] http://vx.netlux.org/lib/vml00.html
[35] http://vx.netlux.org/lib/vml01.html
[36] http://vx.netlux.org/lib/vml02.html
[37] http://www.dmoz.org/Computers/Security/Malicious_Software/Viruses/
[38] http://www.us-cert.gov/
[39] http://all.net/books/virus/index.html
[40] http://www.howstuffworks.com/virus.htm
[41] http://vx.netlux.org/lib/aas14.html
[42] http://vx.netlux.org/lib/avb02.html
[43] http://www.windowsecurity.com/articles/Protecting_Email_Viruses_Malware.html
[44] http://www3.iath.virginia.edu/pmc/text-only/issue.990/ross-1.990
[45] http://journal.media-culture.org.au/0406/07_Sampson.php
[46] http://journal.media-culture.org.au/0502/02-sampson.php
[47] http://journal.fibreculture.org/issue4/issue4_parikka.html
[48] http://www.ctheory.net/articles.aspx?id=500
[49] http://www.ctheory.net/articles.aspx?id=504
[50] http://www.bindshell.net/papers/xssv/
[51] http://www.cse.msu.edu/~cse825/virusWriter.htm
[52] http://www.microsoft.com/emea/itsshowtime/result_search.aspx?track=1&x=37&y=7
Computer worm
A computer worm is a self-replicating malware computer program. It uses a computer network to send copies of itself to other nodes (computers on the network) and it may do so without any user intervention. This is due to security shortcomings on the target computer. Unlike a virus, it does not need to attach itself to an existing program. Worms almost always cause at least some harm to the network, if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
Payloads
Many worms that have been created are designed only to spread, and do not attempt to alter the systems they pass through. However, as the Morris worm and Mydoom showed, the network traffic and other unintended effects can often cause major disruption. A "payload" is code designed to do more than spread the worm: it might delete files on a host system (e.g., the ExploreZip worm), encrypt files in a cryptoviral extortion attack, or send documents via e-mail. A very common payload for worms is to install a backdoor in the infected computer to allow the creation of a "zombie" computer under the control of the worm author. Networks of such machines are often referred to as botnets and are very commonly used by spam senders to send junk email or to cloak their website's address.[1] Spammers are therefore thought to be a source of funding for the creation of such worms,[2] [3] and worm writers have been caught selling lists of IP addresses of infected machines.[4] Others try to blackmail companies with threatened DoS attacks.[5]
Backdoors can be exploited by other malware, including worms. Examples include Doomjuice, which spreads using the backdoor opened by Mydoom, and at least one instance of malware taking advantage of the rootkit and backdoor installed by the Sony/BMG DRM software included on millions of music CDs prior to late 2005.
Mitigation techniques
TCP Wrapper/libwrap-enabled network service daemons
ACLs in routers and switches
Packet filters
Null routing
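Two of these techniques can be illustrated with hypothetical commands on a Linux router; the port number and address below are placeholders chosen for illustration, not recommendations tied to any specific worm:

```shell
# Packet filter: drop inbound traffic to a port a worm is known to probe
# (135/tcp is used here purely as an example).
iptables -A INPUT -p tcp --dport 135 -j DROP

# Null routing: send all traffic destined for a host observed scanning
# the network into a black-hole route, so it is silently discarded.
ip route add blackhole 203.0.113.7/32
```

Both commands require administrative privileges; in practice they are deployed at network choke points so a single rule protects many hosts at once.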
History
The actual term "worm" was first used in John Brunner's 1975 novel, The Shockwave Rider. In that novel, Nichlas Haflinger designs and sets off a data-gathering worm in an act of revenge against the powerful men who run a national electronic information web that induces mass conformity. "You have the biggest-ever worm loose in the net, and it automatically sabotages any attempt to monitor it... There's never been a worm with that tough a head or that long a tail!"[10] On November 2, 1988, Robert Tappan Morris, a Cornell University computer science graduate student, unleashed what became known as the Morris worm, disrupting perhaps 10% of the computers then on the Internet[11] [12] and prompting the formation of the CERT Coordination Center[13] and the Phage mailing list.[14] Morris himself became the first person tried and convicted under the 1986 Computer Fraud and Abuse Act.[15]
See also
Timeline of notable computer viruses and worms
Computer virus
Trojan horse (computing)
Spam
Computer surveillance
XSS Worm
Helpful worm
External links
The Wildlist [16] - list of viruses and worms 'in the wild' (i.e. regularly encountered by anti-virus companies)
Jose Nazario discusses worms [17] - worms overview by a well-known security researcher
Computer worm suspect in court [18]
Vernalex.com's Malware Removal Guide [19] - guide for understanding, removing and preventing worm infections
John Shoch, Jon Hupp, "The "Worm" Programs - Early Experience with a Distributed Computation" [20]
RFC 1135 [21] - The Helminthiasis of the Internet
Surfing Safe [22] - a site providing tips/advice on preventing and removing viruses
Computer Worms Information [23]
The Case for Using Layered Defenses to Stop Worms [24]
Worm Evolution Paper from Digital Threat [25]
Step-by-step instructions on removing computer viruses, spyware and trojans [26]
References
[1] The Seattle Times: Business & Technology: E-mail viruses blamed as spam rises sharply (http://seattletimes.nwsource.com/html/businesstechnology/2001859752_spamdoubles18.html)
[2] Cloaking Device Made for Spammers (http://www.wired.com/news/business/0,1367,60747,00.html)
[3] http://www.channelnewsasia.com/stories/afp_world/view/68810/1/.html
[4] heise online - Uncovered: Trojans as Spam Robots (http://www.heise.de/english/newsticker/news/44879)
[5] BBC NEWS | Technology | Hacker threats to bookies probed (http://news.bbc.co.uk/1/hi/technology/3513849.stm)
[6] USN list | Ubuntu (http://www.ubuntu.com/usn)
[7] Information on the Nimda Worm (http://www.microsoft.com/technet/security/alerts/info/nimda.mspx)
[8] Sellke, S.H., Shroff, N.B., Bagchi, S. (2008). Modeling and Automated Containment of Worms. IEEE Transactions on Dependable and Secure Computing 5(2), 71-86 (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?isnumber=4509574&arnumber=4358715&count=10&index=3)
[9] Newswise: A New Way to Protect Computer Networks from Internet Worms (http://newswise.com/articles/view/541456/). Retrieved on June 5, 2008.
[10] John Brunner, The Shockwave Rider, New York, Ballantine Books, 1975
[11] The Submarine (http://www.paulgraham.com/submarine.html#f4n)
[12] During the Morris appeal process, the U.S. Court of Appeals estimated the cost of removing the virus from each installation was in the range of $200 - 53,000. Possibly based on these numbers, Harvard spokesman Clifford Stoll estimated the total economic impact was between $100,000 - 10,000,000. http://www.bs2.com/cvirus.htm#anchor111400
[13] Security of the Internet. CERT/CC (http://www.cert.org/encyc_article/tocencyc.html)
[14] http://securitydigest.org/phage/
[15] Dressler, J. Cases and Materials on Criminal Law, "United States v. Morris" ISBN 978-0-314-17719-3
[16] http://www.wildlist.org
[17] http://www.securityfocus.com/print/columnists/347
[18] http://www.pc-news.org/computer-worm-suspect-in-court/virus-news
[19] http://www.vernalex.com/guides/malware/
[20] http://vx.netlux.org/lib/ajm01.html
[21] http://tools.ietf.org/rfc/rfc1135.txt
[22] http://www.surfing-safe.org/
Computer worm
[23] [24] [25] [26] http:/ / virusall. com/ worms. shtml http:/ / www. nsa. gov/ snac/ support/ WORMPAPER. pdf http:/ / www. digitalthreat. net/ ?p=17 http:/ / www. freecomputerrepairguide. com/
112
Classification
There are several methods of classifying exploits. The most common is by how the exploit contacts the vulnerable software. A 'remote exploit' works over a network and exploits the security vulnerability without any prior access to the vulnerable system. A 'local exploit' requires prior access to the vulnerable system and usually increases the privileges of the person running the exploit past those granted by the system administrator. Exploits against client applications also exist, usually consisting of modified servers that send an exploit if accessed with a client application. Exploits against client applications may also require some interaction with the user and thus may be used in combination with social engineering methods. Another classification is by the action taken against the vulnerable system: unauthorized data access, arbitrary code execution, or denial of service. Many exploits are designed to provide superuser-level access to a computer system. However, it is also possible to use several exploits: first to gain low-level access, then to escalate privileges repeatedly until one reaches root. Normally a single exploit can only take advantage of a specific software vulnerability. Often, when an exploit is published, the vulnerability is fixed through a patch and the exploit becomes obsolete for newer versions of the software. This is why some black-hat hackers do not publish their exploits but keep them private, to themselves or to other crackers. Such exploits are referred to as 'zero-day exploits', and obtaining access to them is the primary desire of unskilled attackers, often nicknamed script kiddies.
Types
Exploits are commonly categorized and named by the following criteria:
The type of vulnerability they exploit (see the article on vulnerabilities for a list).
Whether they need to be run on the same machine as the program that has the vulnerability (local) or can be run on one machine to attack a program running on another machine (remote).
The result of running the exploit (EoP, DoS, spoofing, etc.).
See also
Computer insecurity
Computer security
Computer virus
Crimeware
Offensive Security Exploit Database
Metasploit
Shellcode
Computer insecurity
Many current computer systems have only limited security precautions in place. This computer insecurity article describes the current battlefield of computer security exploits and defenses. Please see the computer security article for an alternative approach, based on security engineering principles.
Financial cost
Serious financial damage has been caused by computer security breaches, but reliably estimating costs is quite difficult. Figures in the billions of dollars have been quoted for the damage caused by malware such as the Code Red worm, but such estimates may be exaggerated. Other losses, such as those caused by the compromise of credit card information, can be determined more easily, and they have been substantial: millions of individuals in each of several nations fall victim to identity theft each year, and the hardship imposed on each victim can be severe, wiping out their finances, preventing them from getting a job, and even leading to their being treated as if they were the criminal. The number of victims of phishing and other scams may not be known. Individuals whose computers have been infected with spyware or malware typically go through a costly and time-consuming cleaning process. Spyware is often considered a problem specific to the various Microsoft Windows operating systems, but this can be partially explained by the fact that Microsoft controls a major share of the PC market and thus represents the most prominent target.
Reasons
There are many similarities (yet many fundamental differences) between computer and physical security. Just as with real-world security, the motivations for breaches of computer security vary between attackers, sometimes called hackers or crackers. Some are thrill-seekers or vandals (the kind often responsible for defacing web sites); similarly, some web site defacements are done to make political statements. However, some attackers are highly skilled and motivated, with the goal of compromising computers for financial gain or espionage. An example of the latter is Markus Hess (more diligent than skilled), who spied for the KGB and was ultimately caught through the efforts of Clifford Stoll, who wrote a memoir, The Cuckoo's Egg, about his experiences. For those seeking to prevent security breaches, the first step is usually to attempt to identify what might motivate an attack on the system, how much the continued operation and information security of the system are worth, and who might be motivated to breach it. The precautions required for a home PC are very different from those of a bank's Internet banking system, and different again from those of a classified military network. Other computer security writers suggest that, since an attacker using a network need know nothing about you or what you have on your computer, attacker motivation is inherently impossible to determine beyond guessing. If true, blocking all possible attacks is the only plausible action to take.
Vulnerabilities
To understand the techniques for securing a computer system, it is important to first understand the various types of "attacks" that can be made against it. These threats can typically be classified into one of the categories described below:
Exploits
An exploit (from the same word in the French language, meaning "achievement" or "accomplishment") is a piece of software, a chunk of data, or a sequence of commands that takes advantage of a software bug or glitch in order to cause unintended or unanticipated behavior on computer software, hardware, or something electronic (usually computerized). This frequently includes gaining control of a computer system, allowing privilege escalation, or mounting a denial-of-service attack. Many development methodologies rely on testing to ensure the quality of any code released; this process often fails to discover unusual potential exploits. The term "exploit" generally refers to small programs designed to take advantage of a software flaw that has been discovered, whether remote or local. The code from the exploit program is frequently reused in trojan horses and computer viruses. In some cases, a vulnerability can lie in certain programs' processing of a specific file type, such as a non-executable media file. Some security web sites maintain lists of currently known unpatched vulnerabilities found in common programs (see "External links" below).
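As a toy illustration of how an exploit leverages a bug, the sketch below models a hypothetical echo service (loosely inspired by length-field over-read flaws) that trusts a client-supplied length field and so leaks adjacent data; the service, buffer contents, and field layout are all illustrative assumptions, not any real protocol:

```python
# Illustrative sketch (not a real protocol): the service keeps request data
# and a secret side by side in one buffer and echoes back a client-supplied
# number of bytes without validating it against the actual payload length.

def handle_echo(buffer: bytes, payload_start: int, claimed_len: int) -> bytes:
    # BUG: `claimed_len` is attacker-controlled and never checked, so a
    # too-large value reads past the payload into adjacent private data.
    return buffer[payload_start:payload_start + claimed_len]

memory = b"PING" + b"secret-session-key"   # 4-byte payload, then private data

honest = handle_echo(memory, 0, 4)    # well-formed request
exploit = handle_echo(memory, 0, 22)  # over-long length -> data leak

print(honest)   # b'PING'
print(exploit)  # b'PINGsecret-session-key'
```

The fix, as with most such bugs, is a bounds check: reject any request whose claimed length exceeds the payload actually received.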
Eavesdropping
Eavesdropping is the act of surreptitiously listening to a private conversation. Even machines that operate as a closed system (i.e., with no contact to the outside world) can be eavesdropped upon by monitoring the faint electromagnetic emanations generated by the hardware, as in the TEMPEST specification. The FBI's proposed Carnivore program was intended to act as a system of eavesdropping protocols built into the systems of Internet service providers.
Indirect attacks
An indirect attack is an attack launched by a third-party computer. By using someone else's computer to launch an attack, it becomes far more difficult to track down the actual attacker. There have also been cases where attackers took advantage of public anonymizing systems, such as the Tor onion router.
Backdoors
A backdoor in a computer system (or cryptosystem or algorithm) is a method of bypassing normal authentication, securing remote access to a computer, obtaining access to plaintext, and so on, while attempting to remain undetected. The backdoor may take the form of an installed program (e.g., Back Orifice), or could be a modification to an existing program or hardware device. A specific form of backdoor is a rootkit, which replaces system binaries and/or hooks into the function calls of the operating system to hide the presence of other programs, users, services, and open ports. It may also fake information about disk and memory usage.
Reducing vulnerabilities
Computer code is regarded by some as a form of mathematics. It is theoretically possible to prove the correctness of certain classes of computer programs, though the feasibility of actually achieving this in large-scale practical systems is regarded as small by some with practical experience in the industry (see Bruce Schneier et al.). It is also possible to protect messages in transit (i.e., communications) by means of cryptography. One method of encryption, the one-time pad, is unbreakable when correctly used. This method was used by the Soviet Union during the Cold War, though flaws in their implementation allowed some cryptanalysis (see the Venona project). The method uses a matching pair of key-codes, securely distributed, which are used once and only once to encode and decode a single message. For transmitted computer encryption this method is difficult to use properly (securely), and highly inconvenient as well. Other methods of encryption, while breakable in theory, are often virtually impossible to break directly by any means publicly known today. Breaking them requires some non-cryptographic input, such as a stolen key, stolen plaintext (at either end of the transmission), or some other extra cryptanalytic information.
Social engineering and direct physical access attacks can only be prevented by non-computer means, which can be difficult to enforce relative to the sensitivity of the information. Even in a highly disciplined environment, such as a military organization, social engineering attacks can still be difficult to foresee and prevent. In practice, only a small fraction of computer program code is mathematically proven, or even goes through comprehensive information technology audits or inexpensive but extremely valuable computer security audits, so it is usually possible for a determined hacker to read, copy, alter or destroy data in well-secured computers, albeit at the cost of great time and resources.
Few attackers would audit applications for vulnerabilities just to attack a single specific system. It is possible to reduce an attacker's chances by keeping systems up to date, using a security scanner, and/or hiring competent people responsible for security. The effects of data loss or damage can be reduced by careful backups and insurance.
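The one-time pad described above can be sketched in a few lines of Python. This is an illustrative toy, not a production cipher; the message is a placeholder, and the `secrets` module stands in for a truly random pad source:

```python
# One-time pad sketch: ciphertext is the XOR of the message with an
# equal-length, random, never-reused key. Because XOR is its own inverse,
# the same operation both encrypts and decrypts.
import secrets

def xor_pad(data: bytes, pad: bytes) -> bytes:
    if len(pad) != len(data):
        raise ValueError("pad must match message length (and never be reused)")
    return bytes(a ^ b for a, b in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))  # generated once, used once

ciphertext = xor_pad(message, pad)
plaintext = xor_pad(ciphertext, pad)     # decryption recovers the message
assert plaintext == message
```

The "once and only once" rule is what the Venona cryptanalysis exploited: reused pads let the XOR of two ciphertexts cancel the key entirely.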
Security measures
A state of computer "security" is the conceptual ideal, attained by the use of three processes:
1. Prevention
2. Detection
3. Response
User account access controls and cryptography can protect system files and data, respectively. Firewalls are by far the most common prevention systems from a network security perspective, as they can (if properly configured) shield access to internal network services and block certain kinds of attacks through packet filtering. Intrusion detection systems (IDSs) are designed to detect network attacks in progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems. "Response" is necessarily defined by the assessed security requirements of an individual system and may cover the range from a simple upgrade of protections to notification of legal authorities, counter-attacks, and the like. In some special cases, complete destruction of the compromised system is favored, as it may happen that not all compromised resources are detected. Today, computer security comprises mainly "preventive" measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet, and can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking. Another implementation is a so-called physical firewall, which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet. However, relatively few organisations maintain computer systems with effective detection systems, and fewer still have organised response mechanisms in place.
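The "detection" process above can be sketched as a log audit that flags repeated failed logins from one source address, the kind of pattern an IDS or audit-trail review looks for. The log format, field layout, and threshold are illustrative assumptions:

```python
# Detection-side sketch: count failed logins per source address in a
# hypothetical auth-log format and flag sources above a threshold.
from collections import Counter

def flag_brute_force(log_lines, threshold=3):
    failures = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            src = line.rsplit("from ", 1)[-1]   # source address is last field
            failures[src] += 1
    return [src for src, n in failures.items() if n >= threshold]

log = [
    "09:01 FAILED LOGIN user=root from 203.0.113.9",
    "09:01 FAILED LOGIN user=admin from 203.0.113.9",
    "09:02 OK LOGIN user=alice from 198.51.100.4",
    "09:02 FAILED LOGIN user=root from 203.0.113.9",
]
print(flag_brute_force(log))  # ['203.0.113.9']
```

A flagged address would then feed the "response" step: anything from blocking the source at the firewall to notifying authorities.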
See also
Lists and categories
Category:Computer security exploits - types of computer security vulnerabilities and attacks
Category:Spyware removal - programs that find and remove spyware
List of computer virus hoaxes
List of computer viruses
List of trojan horses
Timeline of notable computer viruses and worms
Individual articles
Adware
Antivirus software
Black hat
Computer forensics
Computer virus
Crash-only software
Cryptography (aka cryptology)
Data remanence
Data spill
Defensive computing
Defensive programming
Full disclosure
Hacking
Malware
Mangled packet
Microreboot
Physical security
Ring (computer security)
RISKS Digest
Security engineering
Security through obscurity
Software Security Assurance
Spam
Spyware
Targeted threat
Trojan horse
Virus hoax
Worm
XSA
Zero day attack
References
Ross J. Anderson: Security Engineering: A Guide to Building Dependable Distributed Systems, ISBN 0-471-38922-6
Bruce Schneier: Secrets & Lies: Digital Security in a Networked World, ISBN 0-471-25311-1
Cyrus Peikari, Anton Chuvakin: Security Warrior, ISBN 0-596-00545-8
Jack Koziol, David Litchfield: The Shellcoder's Handbook: Discovering and Exploiting Security Holes, ISBN 0-7645-4468-3
Clifford Stoll: The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, an informal account of a real incident (and pattern) of computer insecurity, easily approachable by the non-specialist, ISBN 0-7434-1146-3
Roger R. Schell: The Internet Rules but the Emperor Has No Clothes [1], ACSAC 1996
William Caelli: Relearning "Trusted Systems" in an Age of NIIP: Lessons from the Past for the Future [2], 2002
Noel Davis: Cracked! [3], the story of a community network that was cracked and what was done to recover from it, 2000
Shon Harris: CISSP All-In-One Study Guide, ISBN 0-07-149787-0
Daniel Ventre: Information Warfare, Wiley - ISTE, ISBN 978-1-84821-094-3
External links
Participating With Safety [4], a guide to electronic security threats from the viewpoint of civil liberties organisations. Licensed under the GNU Free Documentation License.
Article "Why Information Security is Hard: An Economic Perspective" [7] by Ross Anderson
The Information Security Glossary [5]
The SANS Top 20 Internet Security Vulnerabilities [6]
Amit Singh: A Taste of Computer Security [7], 2004
Lists of currently known unpatched vulnerabilities:
Lists of advisories by product [8], lists of known unpatched vulnerabilities from Secunia
Vulnerabilities [9] from SecurityFocus, home of the famous Bugtraq mailing list.
List of vulnerabilities maintained by the government of the USA [10]
References
[1] http://csdl.computer.org/comp/proceedings/acsac/1996/7606/00/7606xiv.pdf
[2] http://cisse.info/history/CISSE%20J/2002/cael.pdf
[3] http://rootprompt.org/article.php3?article=403
[4] http://secdocs.net/manual/lp-sec/
[5] http://www.yourwindow.to/information-security/
[6] http://www.sans.org/top20/
[7] http://www.kernelthread.com/publications/security/index.html
[8] http://secunia.com/product/
[9] http://www.securityfocus.com/vulnerabilities
[10] https://www.kb.cert.org/vuls/
5.0 Networks
Communications security
Communications security is the discipline of preventing unauthorized interceptors from accessing telecommunications in an intelligible form, while still delivering content to the intended recipients. In United States Department of Defense culture, it is often referred to by the portmanteau COMSEC. The field includes cryptosecurity, transmission security, emission security, traffic-flow security, and physical security of COMSEC equipment.
Applications
COMSEC is used to protect both classified and unclassified traffic on military communication networks, including voice, video, and data. It is used for both analog and digital applications, and both wired and wireless links. Secure voice over internet protocol (SVOIP) has become the de facto standard for securing voice communication, replacing the need for STU-X and STE equipment in much of the U.S. Department of Defense. USCENTCOM moved entirely to SVOIP in 2008.[1]
Specialties
Cryptosecurity: The component of communications security that results from the provision of technically sound cryptosystems and their proper use. This includes ensuring message confidentiality and authenticity.
Emission security (EMSEC): Protection resulting from all measures taken to deny unauthorized persons information of value which might be derived from intercept and analysis of compromising emanations from crypto-equipment, automated information systems (computers), and telecommunications systems.
Physical security: The component of communications security that results from all physical measures necessary to safeguard classified equipment, material, and documents from access thereto or observation thereof by unauthorized persons.
Traffic-flow security: Measures that conceal the presence and properties of valid messages on a network. This includes the protection resulting from features, inherent in some crypto-equipment, that conceal the presence of valid messages on a communications circuit, normally achieved by causing the circuit to appear busy at all times.
Transmission security (TRANSEC): The component of communications security that results from the application of measures designed to protect transmissions from interception and exploitation by means other than cryptanalysis (e.g. frequency hopping and spread spectrum).
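As an illustration of the frequency-hopping idea behind TRANSEC, the toy sketch below derives the same pseudorandom channel sequence at both ends from a shared seed. The channel plan, the seed, and the use of Python's non-cryptographic `random` module are all illustrative assumptions; real systems use cryptographically strong sequence generators:

```python
# Frequency-hopping sketch: sender and receiver seed identical PRNGs with a
# shared secret, so both hop through the same channel sequence in lockstep,
# while an interceptor without the seed cannot follow the transmission.
import random

CHANNELS = list(range(902, 928))  # e.g. a 902-927 MHz plan, 1 MHz spacing

def hop_sequence(shared_seed: int, hops: int):
    rng = random.Random(shared_seed)  # deterministic for a given seed
    return [rng.choice(CHANNELS) for _ in range(hops)]

sender = hop_sequence(shared_seed=0xC0FFEE, hops=5)
receiver = hop_sequence(shared_seed=0xC0FFEE, hops=5)
assert sender == receiver  # both ends agree on every hop
```

Note that this protects against interception and jamming of the transmission path; it is complementary to, not a substitute for, encrypting the traffic itself.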
Related terms
AKMS = Army Key Management System
AEK = Algorithmic Encryption Key
CT3 = Common Tier 3
CCI = Controlled Cryptographic Item - equipment which contains COMSEC embedded devices
EKMS = Electronic Key Management System
NSA = National Security Agency
ACES = Automated Communications Engineering Software
DTD = Data Transfer Device
DIRNSA = Director of the National Security Agency
TEK = Traffic Encryption Key
TED = Trunk Encryption Device, such as the WALBURN/KG family of CCI
KEK = Key Encryption Key
OWK = Over the Wire Key
OTAR = Over The Air Rekeying
LCMS = Local COMSEC Management Software
KYK-13 = Electronic Transfer Device
KOI-18 = Tape Reader, General Purpose
KYX-15 = Electronic Transfer Device
KG-30 = TSEC family of COMSEC equipment
TSEC = Telecommunications Security (sometimes erroneously referred to as transmission security or TRANSEC)
SOI = Signal Operating Instruction
SKL = Simple Key Loader
TPI = Two Person Integrity
STU-III = Secure Telephone Unit, third generation (secure phone)
STE = Secure Terminal Equipment (secure phone)
Types of COMSEC equipment:
Crypto equipment: Any equipment that embodies cryptographic logic or performs one or more cryptographic functions (key generation, encryption, and authentication).
Crypto-ancillary equipment: Equipment designed specifically to facilitate efficient or reliable operation of crypto-equipment, without performing cryptographic functions itself.[2]
Crypto-production equipment: Equipment used to produce or load keying material.
Authentication equipment:
See also
Cryptography
Information security
Information warfare
NSA encryption systems
Operations security
Secure communication
Signals intelligence
Traffic analysis
Type 1 product
References
[1] USCENTCOM PL 117-02-1. [2] INFOSEC-99
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (http://www.its.bldrdoc.gov/fs-1037/fs-1037c.htm) (in support of MIL-STD-188).
National Information Systems Security Glossary
Department of Defense Dictionary of Military and Associated Terms
http://www.dtic.mil/doctrine/jel/cjcsd/cjcsi/6511_01.pdf
http://www.gordon.army.mil/sigbde15/Schools/25L/c03lp1.html
http://www.dtic.mil/whs/directives/corres/pdf/466002p.pdf
http://cryptome.sabotage.org/HB202D.PDF
http://peoc3t.monmouth.army.mil/netops/akms.html
Cryptography machines (http://www.jproc.ca/crypto/menu.html)
External links
COMSEC/SIGINT News Group - Discussion Forum (http://groups-beta.google.com/group/sigint)
Network security
In the field of networking, the specialist area of network security[1] consists of the provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and network-accessible resources.
Security management
Security management for networks differs for all kinds of situations. A small home or office may require only basic security, while large businesses require high-maintenance, advanced software and hardware to prevent malicious attacks such as hacking and spamming.
Small homes
A basic firewall such as COMODO Internet Security, or a unified threat management system.
For Windows users, basic antivirus software such as AVG Antivirus, ESET NOD32 Antivirus, Kaspersky, McAfee, Avast!, ZoneAlarm Security Suite or Norton AntiVirus. An anti-spyware program such as Windows Defender or Spybot Search & Destroy is also a good idea. Many other antivirus and anti-spyware programs are available and worth considering.
When using a wireless connection, use a robust password. Also try to use the strongest security supported by your wireless devices, such as WPA2 with AES encryption.
If using wireless: change the default SSID network name and disable SSID broadcast, as this function is unnecessary for home use. (However, many security experts consider this relatively useless; see http://blogs.zdnet.com/Ou/index.php?p=43 )
Enable MAC address filtering to keep track of all home network MAC devices connecting to your router.
Assign static IP addresses to network devices.
Disable ICMP ping on the router.
Review router or firewall logs to help identify abnormal network connections or traffic to the Internet.
Use passwords for all accounts.
Have multiple accounts per family member, using non-administrative accounts for day-to-day activities. Disable the guest account (Control Panel > Administrative Tools > Computer Management > Users).
Medium businesses
A fairly strong firewall or unified threat management system.
Strong antivirus software and Internet security software.
For authentication, use strong passwords and change them on a bi-weekly/monthly basis.
When using a wireless connection, use a robust password.
Raise awareness about physical security among employees.
Use an optional network analyzer or network monitor.
An enlightened administrator or manager.
Large businesses
A strong firewall and proxy to keep unwanted people out.
A strong antivirus software package and Internet security software package.
For authentication, use strong passwords and change them on a weekly/bi-weekly basis.
When using a wireless connection, use a robust password.
Exercise physical security precautions with employees.
Prepare a network analyzer or network monitor and use it when needed.
Implement physical security management such as closed-circuit television for entry areas and restricted zones.
Security fencing to mark the company's perimeter.
Fire extinguishers for fire-sensitive areas such as server rooms and security rooms.
Security guards can help to maximize security.
School
An adjustable firewall and proxy to allow authorized users access from outside and inside.
Strong antivirus software and Internet security software packages.
Wireless connections that lead to firewalls.
Children's Internet Protection Act compliance.
Supervision of the network to guarantee updates and changes based on popular site usage.
Constant supervision by teachers, librarians, and administrators to guarantee protection against attacks from both Internet and sneakernet sources.
Large government
A strong firewall and proxy to keep unwanted people out.
Strong antivirus software and Internet security software suites.
Strong encryption.
Whitelist authorized wireless connections; block all else.
All network hardware in secure zones.
All hosts on a private network that is invisible from the outside.
Place web servers in a DMZ, or behind a firewall from the outside and from the inside.
Further reading
Security of the Internet [6] (The Froehlich/Kent Encyclopedia of Telecommunications vol. 15, Marcel Dekker, New York, 1997, pp. 231-255.)
Introduction to Network Security [7], Matt Curtin.
Security Monitoring with Cisco Security MARS [8], Gary Halleen/Greg Kellogg, Cisco Press, Jul. 6, 2007.
Self-Defending Networks: The Next Generation of Network Security [9], Duane DeCapite, Cisco Press, Sep. 8, 2006.
Security Threat Mitigation and Response: Understanding CS-MARS [10], Dale Tesch/Greg Abelar, Cisco Press, Sep. 26, 2006.
Deploying Zone-Based Firewalls [11], Ivan Pepelnjak, Cisco Press, Oct. 5, 2006.
Network Security: PRIVATE Communication in a PUBLIC World, Charlie Kaufman, Radia Perlman, Mike Speciner, Prentice-Hall, 2002. ISBN .
Network Infrastructure Security [21], Angus Wong and Alan Yeung, Springer, 2009.
See also
Cloud computing security
Crimeware
Data Loss Prevention
Wireless LAN Security
Timeline of hacker history
Information Leak Prevention
Network Security Toolkit
TCP sequence prediction attack
TCP Gender Changer
Netsentron
External links
Definition of Network Security [12]
Cisco IT Case Studies about Security and VPN [13]
Debate: The data or the source - which is the real threat to network security? - Video [14]
OpenLearn - Network Security [15]
References
[1] Simmonds, A; Sandilands, P; van Ekert, L (2004). "An Ontology for Network Security Attacks". Lecture Notes in Computer Science 3285: 317-323.
[2] A Role-Based Trusted Network Provides Pervasive Security and Compliance (http://newsroom.cisco.com/dlls/2008/ts_010208b.html?sid=BAC-NewsWire), interview with Jayshree Ullal, senior VP of Cisco
[3] Dave Dittrich, Network monitoring/Intrusion Detection Systems (IDS) (http://staff.washington.edu/dittrich/network.html), University of Washington.
[4] Honeypots, Honeynets (http://www.honeypots.net)
[5] Julian Fredin, Social software development program Wi-Tech
[6] http://www.cert.org/encyc_article/tocencyc.html
[7] http://www.interhack.net/pubs/network-security
[8] http://www.ciscopress.com/bookstore/product.asp?isbn=1587052709
[9] http://www.ciscopress.com/bookstore/product.asp?isbn=1587052539
[10] http://www.ciscopress.com/bookstore/product.asp?isbn=1587052601
[11] http://www.ciscopress.com/bookstore/product.asp?isbn=1587053101
[12] http://www.deepnines.com/secure-web-gateway/definition-of-network-security
[13] http://www.cisco.com/web/about/ciscoitatwork/case_studies/security.html
[14] http://www.netevents.tv/docuplayer.asp?docid=102
[15] http://openlearn.open.ac.uk/course/view.php?id=2587
5.1 Internet
Book:Internet security
Internet security
Methods of attack: Arbitrary code execution, Buffer overflow, Cross-site request forgery, Cross-site scripting, Denial-of-service attack, DNS cache poisoning, Drive-by download, Malware, Password cracking, Phishing, SQL injection, Virus hoax
Methods of prevention: Cryptography, Firewall
Firewall (computing)
A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices configured to permit or deny communications based upon a set of rules and other criteria. Firewalls can be implemented in hardware, software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria. There are several types of firewall techniques:
1. Packet filter: Packet filtering inspects each packet passing through the network and accepts or rejects it based on user-defined rules. Although difficult to configure, it is fairly effective and mostly transparent to its users. It is susceptible to IP spoofing.
2. Application gateway: Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can impose a performance degradation.
3. Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.
4. Proxy server: Intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.
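A packet filter of the kind described in (1) can be sketched as a first-match rule list with a default-deny policy. The rule fields, addresses, and rule set below are illustrative, not any real firewall's syntax:

```python
# Packet-filter sketch: each rule matches on protocol, destination port, and
# source-address prefix; the first matching rule decides the verdict, and
# anything unmatched is denied by default.

RULES = [
    {"proto": "tcp", "dport": 22, "src": "10.0.0.", "action": "allow"},  # SSH, LAN only
    {"proto": "tcp", "dport": 80, "src": "", "action": "allow"},         # HTTP, anywhere
]

def filter_packet(proto: str, dport: int, src_ip: str) -> str:
    for rule in RULES:
        if (rule["proto"] == proto and rule["dport"] == dport
                and src_ip.startswith(rule["src"])):
            return rule["action"]
    return "deny"  # default-deny policy

print(filter_packet("tcp", 80, "198.51.100.7"))  # allow
print(filter_packet("tcp", 22, "198.51.100.7"))  # deny (SSH only from 10.0.0.*)
```

The default-deny fallback mirrors common firewall practice: anything not explicitly permitted is dropped. Note that matching on source addresses alone is exactly what makes packet filters susceptible to IP spoofing.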
Function
A firewall is a dedicated appliance, or software running on a computer, which inspects network traffic passing through it and denies or permits passage based on a set of rules or criteria. It is normally placed between a protected network and an unprotected network, acting as a gate that protects assets by ensuring that nothing private goes out and nothing malicious comes in. A firewall's basic task is to regulate some of the flow of traffic between computer networks of different trust levels. Typical examples are the Internet, which is a zone with no trust, and an internal network, which is a zone of higher trust. A zone with an intermediate trust level, situated between the Internet and a trusted internal network, is often referred to as a "perimeter network" or demilitarized zone (DMZ). A firewall's function within a network is similar to that of physical firewalls with fire doors in building construction: in the former case, it is used to prevent network intrusion into the private network; in the latter, it is intended to contain and delay a structural fire, preventing it from spreading to adjacent structures.
History
The term firewall/fireblock originally meant a wall to confine a fire or potential fire within a building; cf. firewall (construction). Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. The Morris Worm spread itself through multiple vulnerabilities in the machines of the time. Although it was not malicious in intent, the Morris Worm was the first large scale attack on Internet security; the online community was neither expecting an attack nor prepared to deal with one.[1]
Subsequent developments
In 1992, Bob Braden and Annette DeSchon at the University of Southern California (USC) were refining the concept of a firewall. The product known as "Visas" was the first system to have a visual integration interface with colors and icons, which could be easily implemented on and accessed from a computer operating system such as Microsoft's Windows or Apple's MacOS. In 1994, an Israeli company called Check Point Software Technologies built this into readily available software known as FireWall-1. The deep packet inspection functionality of modern firewalls can be shared by intrusion-prevention systems (IPS). Currently, the Middlebox Communication Working Group of the Internet Engineering Task Force (IETF) is working on standardizing protocols for managing firewalls and other middleboxes. Another axis of development is the integration of user identity into firewall rules. Many firewalls provide such a feature by binding user identities to IP or MAC addresses, an approach that is only approximate and easily circumvented. The NuFW firewall provides real identity-based firewalling by requesting the user's signature for each connection.
Types
There are several classifications of firewalls depending on where the communication is taking place, where the communication is intercepted and the state that is being traced.
Application-layer
Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or from an application. They block other packets (usually dropping them without acknowledgment to the sender). In principle, application firewalls can prevent all unwanted outside traffic from reaching protected machines. By inspecting all packets for improper content, firewalls can restrict or outright prevent the spread of networked computer worms and trojans. The additional inspection criteria can add extra latency to the forwarding of packets to their destination.
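Application-level inspection can be sketched as pattern matching over the reassembled payload rather than over header fields. The following is a minimal, hypothetical illustration (the patterns and the `inspect_http` helper are invented for this example): requests matching a blocked signature are dropped without acknowledgment.

```python
import re

# Two crude signatures: a path-traversal probe and a worm-style payload.
# These are illustrative examples, not a real rule set.
BLOCKED_PATTERNS = [
    re.compile(rb"\.\./"),
    re.compile(rb"(?i)cmd\.exe"),
]

def inspect_http(raw_request: bytes) -> str:
    """Return "forward" or "drop" for one raw HTTP request."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(raw_request):
            return "drop"   # silently dropped; no acknowledgment to sender
    return "forward"

print(inspect_http(b"GET /index.html HTTP/1.1\r\n"))             # forward
print(inspect_http(b"GET /scripts/../../cmd.exe HTTP/1.1\r\n"))  # drop
```

Scanning every payload like this is also the source of the extra forwarding latency mentioned above.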
Proxies
A proxy device (running either on dedicated hardware or as software on a general-purpose machine) may act as a firewall by responding to input packets (connection requests, for example) in the manner of an application, whilst blocking other packets. Proxies make tampering with an internal system from the external network more difficult, and misuse of one internal system would not necessarily cause a security breach exploitable from outside the firewall (as long as the application proxy remains intact and properly configured). Conversely, intruders may hijack a publicly reachable system and use it as a proxy for their own purposes; the proxy then masquerades as that system to other internal machines. While use of internal address spaces enhances security, crackers may still employ methods such as IP spoofing to attempt to pass packets to a target network.
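The core of a forwarding proxy is a pair of relays copying bytes between the client-facing and server-facing connections, so that each side sees only the proxy's address. This is a minimal sketch under stated assumptions (the `relay` and `proxy` helpers are invented names; the demo uses in-process socket pairs in place of real network connections):

```python
import socket
import threading

def relay(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until the source closes, then signal EOF onward.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass  # peer already gone

def proxy(client: socket.socket, server: socket.socket) -> None:
    # Two relays, one per direction; the client never talks to the
    # server directly, only to the proxy.
    t = threading.Thread(target=relay, args=(server, client), daemon=True)
    t.start()
    relay(client, server)
    t.join()

# Demo: socketpairs stand in for the client and server connections.
client_app, client_side = socket.socketpair()
server_side, server_app = socket.socketpair()
threading.Thread(target=proxy, args=(client_side, server_side), daemon=True).start()
client_app.sendall(b"GET / HTTP/1.0\r\n\r\n")
print(server_app.recv(4096))  # the request arrives via the proxy
```

A real application proxy would additionally parse and validate the protocol in `relay` before forwarding, which is what makes misuse of an internal system harder to exploit from outside.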
See also
Access control list
Bastion host
Circuit-level gateway
Comparison of firewalls
Computer security
Egress filtering
End-to-end connectivity
Firewall pinhole
Firewalls and Internet Security (book)
Golden Shield Project (aka Great Firewall of China)
List of Linux router or firewall distributions
Mangled packet
Network reconnaissance
Packet
Personal firewall
Sandbox (computer security)
Screened-subnet firewall
Stateful firewall
Stateful packet inspection
Unified threat management
Virtual firewall
External links
Internet Firewalls: Frequently Asked Questions [2], compiled by Matt Curtin, Marcus Ranum and Paul Robertson.
Evolution of the Firewall Industry [3], discusses different architectures and their differences, how packets are processed, and provides a timeline of the evolution.
A History and Survey of Network Firewalls [4], provides an overview of firewalls at the various ISO levels, with references to the original papers where the first firewall work was reported.
Software Firewalls: Made of Straw? Part 1 [5] and Software Firewalls: Made of Straw? Part 2 [6], a technical view on software firewall design and potential weaknesses.
References
[1] RFC 1135, The Helminthiasis of the Internet (http://tools.ietf.org/html/rfc1135)
[2] http://www.faqs.org/faqs/firewalls-faq/
[3] http://www.cisco.com/univercd/cc/td/doc/product/iaabu/centri4/user/scf4ch3.htm
[4] http://www.cs.unm.edu/~treport/tr/02-12/firewall.pdf
[5] http://www.securityfocus.com/infocus/1839
[6] http://www.securityfocus.com/infocus/1840
Denial-of-service attack
A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of the concerted efforts of a person or people to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely. Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root nameservers. The term is generally used with regard to computer networks, but is not limited to this field; for example, it is also used in reference to CPU resource management.[1]
One common method of attack involves saturating the target (victim) machine with external communications requests, such that it cannot respond to legitimate traffic, or responds so slowly as to be rendered effectively unavailable.
[Figure: DDoS Stacheldraht attack diagram.]
In general terms, DoS attacks are implemented by either forcing the targeted computer(s) to reset, consuming its resources so that it can no longer provide its intended service, or obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately. Denial-of-service attacks are considered violations of the IAB's Internet proper use policy, and also violate the acceptable use policies of virtually all Internet service providers. They also commonly constitute violations of the laws of individual nations.[2]
Denial-of-service attacks can also lead to problems in the network 'branches' around the actual computer being attacked. For example, the bandwidth of a router between the Internet and a LAN may be consumed by an attack, compromising not only the intended computer but also the entire network. If the attack is conducted on a sufficiently large scale, entire geographical regions of Internet connectivity can be compromised, even without the attacker's knowledge or intent, by incorrectly configured or flimsy network infrastructure.
Methods of attack
A "denial-of-service" attack is characterized by an explicit attempt by attackers to prevent legitimate users of a service from using that service. Attacks can be directed at any network device, including attacks on routing devices and web, electronic mail, or Domain Name System servers. A DoS attack can be perpetrated in a number of ways. The five basic types of attack are:
1. Consumption of computational resources, such as bandwidth, disk space, or processor time.
2. Disruption of configuration information, such as routing information.
3. Disruption of state information, such as unsolicited resetting of TCP sessions.
4. Disruption of physical network components.
5. Obstruction of the communication media between the intended users and the victim so that they can no longer communicate adequately.
A DoS attack may include execution of malware intended to:
- Max out the processor's usage, preventing any work from occurring.
- Trigger errors in the microcode of the machine.
- Trigger errors in the sequencing of instructions, so as to force the computer into an unstable state or lock-up.
- Exploit errors in the operating system, causing resource starvation and/or thrashing, i.e. using up all available facilities so no real work can be accomplished.
- Crash the operating system itself.
ICMP flood
A smurf attack is one particular variant of a flooding DoS attack on the public Internet. It relies on misconfigured network devices that allow packets to be sent to all computer hosts on a particular network via the broadcast address of the network, rather than a specific machine. The network then serves as a smurf amplifier. In such an attack, the perpetrators send large numbers of IP packets with the source address faked to appear to be the address of the victim. The network's bandwidth is quickly used up, preventing legitimate packets from getting through to their destination.[4] To combat denial-of-service attacks on the Internet, services like the Smurf Amplifier Registry have given network service providers the ability to identify misconfigured networks and to take appropriate action such as filtering.
Ping flood is based on sending the victim an overwhelming number of ping packets, usually using the "ping" command from unix-like hosts (the -t flag on Windows systems has a far less malignant function). It is very simple to launch, the primary requirement being access to greater bandwidth than the victim.
SYN flood sends a flood of TCP/SYN packets, often with a forged sender address. Each of these packets is handled like a connection request, causing the server to spawn a half-open connection by sending back a TCP/SYN-ACK packet and waiting for a packet in response from the sender address. However, because the sender address is forged, the response never comes. These half-open connections saturate the number of available connections the server is able to make, keeping it from responding to legitimate requests until after the attack ends.
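The SYN-flood mechanism can be illustrated with a toy model of the server's backlog of half-open connections (the `SynQueue` class and its parameters are invented for this illustration): forged SYNs never complete the handshake, so the backlog fills and legitimate clients are refused.

```python
from collections import deque

class SynQueue:
    """Toy model of a server's bounded queue of half-open connections."""

    def __init__(self, backlog: int):
        self.backlog = backlog
        self.half_open = deque()

    def on_syn(self, src: str) -> bool:
        """Return True if the SYN was accepted into the queue."""
        if len(self.half_open) >= self.backlog:
            return False            # queue full: further SYNs are dropped
        self.half_open.append(src)  # server sends SYN-ACK and waits
        return True

    def on_ack(self, src: str) -> None:
        # A real client finishes the handshake, freeing its slot.
        self.half_open.remove(src)

q = SynQueue(backlog=128)
for i in range(200):                  # forged sources never send the ACK
    q.on_syn(f"forged-{i}")
print(q.on_syn("legitimate-client"))  # False: service denied
```

In a real stack the half-open entries also time out, which is why the flood must be sustained for the denial of service to persist.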
Teardrop attacks
A teardrop attack involves sending mangled IP fragments with overlapping, oversized payloads to the target machine. This can crash various operating systems because of a bug in their TCP/IP fragmentation re-assembly code.[5] Windows 3.1x, Windows 95 and Windows NT operating systems, as well as versions of Linux prior to 2.0.32 and 2.1.63, are vulnerable to this attack. Around September 2009, a vulnerability in Windows Vista was referred to as a "teardrop attack", but that attack targeted SMB2, which is a higher layer than the TCP packets that teardrop used.[6] [7]
Peer-to-peer attacks
Attackers have found a way to exploit a number of bugs in peer-to-peer servers to initiate DDoS attacks. The most aggressive of these peer-to-peer DDoS attacks exploits DC++. Peer-to-peer attacks are different from regular botnet-based attacks: with peer-to-peer there is no botnet, and the attacker does not have to communicate with the clients it subverts. Instead, the attacker acts as a 'puppet master,' instructing clients of large peer-to-peer file sharing hubs to disconnect from their peer-to-peer network and to connect to the victim's website instead. As a result, several thousand computers may aggressively try to connect to a target website. While a typical web server can handle a few hundred connections/sec before performance begins to degrade, most web servers fail almost instantly under five or six thousand connections/sec. With a moderately large peer-to-peer attack, a site could potentially be hit with up to 750,000 connections in short order. The targeted web server will be plugged up by the incoming connections. While peer-to-peer attacks are easy to identify with signatures, the large number of IP addresses that need to be blocked (often over 250,000 during the course of a big attack) means that this type of attack can overwhelm mitigation defenses. Even if a mitigation device can keep blocking IP addresses, there are other problems to consider. For instance, there is a brief moment where the connection is opened on the server side before the signature itself comes through; only once the connection is opened to the server can the identifying signature be sent and detected, and the connection torn down. Even tearing down connections takes server resources and can harm the server. This method of attack can be prevented by specifying in the p2p protocol which ports are allowed. If port 80 is not allowed, the possibilities for attack on websites can be very limited.
Nuke
A nuke is an old denial-of-service attack against computer networks, consisting of fragmented or otherwise invalid ICMP packets sent to the target. It is carried out with a modified ping utility that repeatedly sends this corrupt data, slowing down the affected computer until it comes to a complete stop. A specific example of a nuke attack that gained some prominence is WinNuke, which exploited a vulnerability in the NetBIOS handler in Windows 95. A string of out-of-band data was sent to TCP port 139 of the victim's machine, causing it to lock up and display a Blue Screen of Death (BSOD).
Distributed attack
A distributed denial of service attack (DDoS) occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. These systems are compromised by attackers using a variety of methods.
Malware can carry DDoS attack mechanisms; one of the better-known examples of this was MyDoom. Its DoS mechanism was triggered on a specific date and time. This type of DDoS involved hardcoding the target IP address prior to release of the malware, and no further interaction was necessary to launch the attack. A system may also be compromised with a trojan, allowing the attacker to download a zombie agent (or the trojan may contain one). Attackers can also break into systems using automated tools that exploit flaws in programs that listen for connections from remote hosts. This scenario primarily concerns systems acting as servers on the web.
Stacheldraht is a classic example of a DDoS tool. It utilizes a layered structure where the attacker uses a client program to connect to handlers, which are compromised systems that issue commands to the zombie agents, which in turn facilitate the DDoS attack. Agents are compromised via the handlers by the attacker, using automated routines to exploit vulnerabilities in programs that accept remote connections running on the targeted remote hosts. Each handler can control up to a thousand agents.[12] These collections of compromised systems are known as botnets. DDoS tools like Stacheldraht still use classic DoS attack methods centered on IP spoofing and amplification, such as smurf attacks and fraggle attacks (also known as bandwidth consumption attacks). SYN floods (also known as resource starvation attacks) may also be used. Newer tools can use DNS servers for DoS purposes (see the next section). Simple attacks such as SYN floods may appear with a wide range of source IP addresses, giving the appearance of a well-distributed DDoS.
These flood attacks do not require completion of the TCP three-way handshake; they attempt to exhaust the destination SYN queue or the server bandwidth. Because the source IP addresses can be trivially spoofed, an attack could come from a limited set of sources, or may even originate from a single host. Stack enhancements such as SYN cookies may be effective mitigation against SYN queue flooding, but complete bandwidth exhaustion may require involvement of the upstream network provider.
Unlike MyDoom's DDoS mechanism, botnets can be turned against any IP address. Script kiddies use them to deny the availability of well-known websites to legitimate users.[2] More sophisticated attackers use DDoS tools for the purposes of extortion, even against their business rivals.[13]
It is important to note the difference between a DDoS and a DoS attack. If an attacker mounts an attack from a single host, it would be classified as a DoS attack; in fact, any attack against availability would be classed as a denial-of-service attack. On the other hand, if an attacker uses a thousand systems to simultaneously launch smurf attacks against a remote host, this would be classified as a DDoS attack.
The major advantages to an attacker of using a distributed denial-of-service attack are that multiple machines can generate more attack traffic than one machine, multiple attack machines are harder to turn off than one attack machine, and the behavior of each attack machine can be stealthier, making it harder to track down and shut down. These attacker advantages cause challenges for defense mechanisms. For example, merely purchasing more incoming bandwidth than the current volume of the attack might not help, because the attacker might be able to simply add more attack machines.
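The SYN-cookie idea mentioned above can be sketched as follows. This is a simplified illustration, not the real TCP implementation (actual SYN cookies pack a timestamp and MSS value into specific bits of the sequence number): the server derives its initial sequence number from the connection details and a secret, so it can verify the returning ACK without having stored any per-connection state.

```python
import hashlib

SECRET = b"server-secret"  # illustrative; a real stack uses random key material

def syn_cookie(src_ip: str, src_port: int, dst_port: int, counter: int) -> int:
    """Derive the initial sequence number from the connection details."""
    material = f"{src_ip}:{src_port}:{dst_port}:{counter}".encode()
    digest = hashlib.sha256(SECRET + material).digest()
    return int.from_bytes(digest[:4], "big")

def check_ack(src_ip: str, src_port: int, dst_port: int,
              counter: int, ack_seq: int) -> bool:
    # The ACK must acknowledge cookie+1; only then does the server
    # allocate connection state. Forged SYNs that never ACK cost nothing.
    expected = (syn_cookie(src_ip, src_port, dst_port, counter) + 1) % 2**32
    return ack_seq == expected

c = syn_cookie("198.51.100.2", 40000, 80, 0)
print(check_ack("198.51.100.2", 40000, 80, 0, (c + 1) % 2**32))  # True
print(check_ack("198.51.100.2", 40000, 80, 0, (c + 2) % 2**32))  # False
```

Because no queue entry exists until a valid ACK arrives, the SYN queue cannot be exhausted, which is exactly the mitigation property described above.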
Reflected attack
A distributed reflected denial of service attack (DRDoS) involves sending forged requests of some type to a very large number of computers that will reply to the requests. Using Internet protocol spoofing, the source address is set to that of the targeted victim, which means all the replies will go to (and flood) the target. ICMP Echo Request attacks (smurf attacks) can be considered one form of reflected attack, as the flooding host(s) send Echo Requests to the broadcast addresses of mis-configured networks, thereby enticing many hosts to send Echo Reply packets to the victim. Some early DDoS programs implemented a distributed form of this attack. Many services can be exploited to act as reflectors, some harder to block than others.[14] DNS amplification attacks involve a newer mechanism that increases the amplification effect, using a much larger list of DNS servers than seen earlier.[15]
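The value of a reflector to the attacker is the amplification factor: the ratio of reply size to request size, since the spoofed request is cheap for the attacker and the full reply lands on the victim. A back-of-envelope calculation (the packet sizes below are illustrative, not measured values):

```python
def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    """Bytes delivered to the victim per byte the attacker sends."""
    return response_bytes / query_bytes

# e.g. a small spoofed DNS query eliciting a large answer record
print(amplification_factor(60, 3000))  # 50.0
```

With a factor like this, each megabit per second the attacker can spoof becomes tens of megabits per second arriving at the target, which is why large lists of open DNS resolvers are so attractive as reflectors.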
Degradation-of-service attacks
"Pulsing" zombies are compromised computers that are directed to launch intermittent and short-lived floodings of victim websites, with the intent of merely slowing them rather than crashing them. This type of attack, referred to as "degradation-of-service" rather than "denial-of-service", can be more difficult to detect than regular zombie invasions and can disrupt and hamper connection to websites for prolonged periods of time, potentially causing more damage than concentrated floods.[16] [17] Detecting degradation-of-service attacks is complicated further by the difficulty of discerning whether the attacks really are attacks or just healthy, and likely desired, increases in website traffic.[18]
An example of this occurred when Michael Jackson died in 2009. Websites such as Google and Twitter slowed down or even crashed. Many sites' servers thought the requests were from a virus or spyware trying to cause a denial-of-service attack, warning users that their queries looked like "automated requests from a computer virus or spyware application". News sites and link sites (sites whose primary function is to provide links to interesting content elsewhere on the Internet) are most likely to cause this phenomenon. The canonical example is the Slashdot effect. Sites such as Digg, the Drudge Report, Fark, Something Awful, and the webcomic Penny Arcade have their own corresponding "effects", known as "the Digg effect", being "drudged", "farking", "goonrushing" and "wanging", respectively.
Routers have also been known to create unintentional DoS attacks: both D-Link and Netgear routers have caused NTP vandalism by flooding NTP servers without respecting the restrictions of client types or geographical limitations. Similar unintentional denials of service can also occur via other media, e.g. when a URL is mentioned on television. If a server is being indexed by Google or another search engine during peak periods of activity, or does not have much available bandwidth while being indexed, it can also experience the effects of a DoS attack.
Legal action has been taken in at least one such case. In 2006, Universal Tube & Rollform Equipment Corporation sued YouTube after massive numbers of would-be youtube.com users accidentally typed the tube company's URL, utube.com. As a result, the tube company ended up having to spend large amounts of money upgrading its bandwidth.[19]
Denial-of-Service Level II
The goal of a DoS L2 (possibly DDoS) attack is to trigger a defense mechanism that blocks the network segment from which the attack originated. In the case of a distributed attack, or when the IP header is modified (depending on the kind of security behavior), the defense will fully block the attacked network from the Internet, but without a system crash.
Incidents
The first major attack involving DNS servers as reflectors occurred in January 2001. The target was Register.com.[21] This attack, which forged requests for the MX records of AOL.com (to amplify the attack), lasted about a week before it could be traced back to all attacking hosts and shut off. It used a list of tens of thousands of DNS records that were a year old at the time of the attack.
In February 2001, the Irish Government's Department of Finance server was hit by a denial-of-service attack carried out as part of a student campaign from NUI Maynooth. The Department officially complained to the University authorities, and a number of students were disciplined.
In July 2002, the Honeynet Project Reverse Challenge was issued.[22] The binary that was analyzed turned out to be yet another DDoS agent, which implemented several DNS-related attacks, including an optimized form of a reflection attack.
On two occasions to date, attackers have performed DNS backbone DDoS attacks on the DNS root servers. Since these machines are intended to provide service to all Internet users, these two denial-of-service attacks might be classified as attempts to take down the entire Internet, though it is unclear what the attackers' true motivations were. The first occurred in October 2002 and disrupted service at 9 of the 13 root servers. The second occurred in February 2007 and caused disruptions at two of the root servers.[23]
In February 2007, more than 10,000 online game servers in games such as Return to Castle Wolfenstein, Halo, Counter-Strike and many others were attacked by the hacker group RUS. The DDoS attack was made from more than a thousand computer units located in the republics of the former Soviet Union, mostly from Russia, Uzbekistan and Belarus. Minor attacks continue to be made today.
In the weeks leading up to the five-day 2008 South Ossetia war, a DDoS attack directed at Georgian government sites containing the message "win+love+in+Rusia" effectively overloaded and shut down multiple Georgian servers. Websites targeted included the Web site of the Georgian president, Mikhail Saakashvili, rendered inoperable for 24 hours, and the National Bank of Georgia. While heavy suspicion was placed on Russia for orchestrating the attack through a proxy, the St. Petersburg-based criminal gang known as the Russian Business Network, or R.B.N., the Russian government denied the allegations, stating that it was possible that individuals in Russia or elsewhere had taken it upon themselves to start the attacks.[24]
During the 2009 Iranian election protests, foreign activists seeking to help the opposition engaged in DDoS attacks against Iran's government.
The official website of the Iranian government (ahmedinejad.ir [25]) was rendered inaccessible on several occasions.[26] Critics claimed that the DDoS attacks also cut off internet access for protesters inside Iran; activists countered that, while this may have been true, the attacks still hindered President Mahmoud Ahmadinejad's government enough to aid the opposition.
On June 25, 2009, the day Michael Jackson died, the spike in searches related to Michael Jackson was so big that Google News initially mistook it for an automated attack. As a result, for about 25 minutes, some people searching Google News saw a "We're sorry" page before finding the articles they were looking for.[27]
In June 2009 the P2P site The Pirate Bay was rendered inaccessible by a DDoS attack, most likely provoked by the recent sellout to Global Gaming Factory X AB, which was seen as a "take the money and run" solution to the website's legal issues.[28] In the end, due to the buyers' financial troubles, the site was not sold.
Multiple waves of cyber attacks in July 2009 targeted a number of major websites in South Korea and the United States. The attack used a botnet, and file updates over the internet are known to have assisted its spread. As it turned out, a computer trojan had been coded to scan for existing MyDoom bots; MyDoom was a worm from 2004, and in July 2009 around 20,000-50,000 bots were still present. MyDoom has a backdoor, which the DDoS bot could exploit. The DDoS bot later removed itself and completely formatted the infected machines' hard drives. Most of the bots originated from China and North Korea.
On August 6, 2009, several social networking sites, including Twitter, Facebook, LiveJournal, and Google blogging pages, were hit by DDoS attacks, apparently aimed at Georgian blogger "Cyxymu". Although Google came through with only minor setbacks, these attacks left Twitter crippled for hours, and Facebook eventually restored service although some users still experienced trouble. Twitter's site latency continued to improve afterwards, though some web requests continued to fail.[29] [30] [31]
Performing DoS-attacks
A wide array of programs is used to launch DoS attacks. Most of these programs are completely focused on performing DoS attacks, while others are also true packet injectors, able to perform other tasks as well. Some examples of such tools are hping, Java socket programming, and httping, but these are not the only programs capable of such attacks. Such tools are intended for benign use, but they can also be utilized in launching attacks on victim networks. In addition to these tools, a vast number of underground tools are used by attackers.[32]
Switches
Most switches have some rate-limiting and ACL capability. Some switches provide automatic and/or system-wide rate limiting, traffic shaping, delayed binding (TCP splicing), deep packet inspection, and bogon filtering (bogus IP filtering) to detect and remediate denial-of-service attacks through automatic rate filtering and WAN link failover and balancing. These schemes work only against attacks they are designed to prevent: a SYN flood can be prevented using delayed binding or TCP splicing, content-based DoS can be prevented using deep packet inspection, and attacks originating from or directed at dark addresses can be prevented using bogon filtering. Automatic rate filtering works only if rate thresholds have been set correctly and granularly, and WAN link failover works only if both links have a DoS/DDoS prevention mechanism.
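Rate limiting of the kind described above is commonly implemented as a token bucket. The sketch below is illustrative only (the class and its parameters are invented for this example, not any vendor's defaults): a source may send at `rate` packets per second, with bursts of up to `burst` packets.

```python
class TokenBucket:
    """Toy token-bucket rate limiter driven by an explicit clock."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst   # bucket starts full
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at the burst size,
        # then spend one token per forwarded packet.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

tb = TokenBucket(rate=10.0, burst=5.0)
# 20 packets arriving at the same instant: only the burst gets through.
allowed = sum(tb.allow(0.0) for _ in range(20))
print(allowed)  # 5
```

Setting the thresholds "correctly and granularly", as the text puts it, amounts to choosing `rate` and `burst` per source or per traffic class so that legitimate bursts pass while flood traffic is shed.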
Routers
Similar to switches, routers have some rate-limiting and ACL capability; they, too, are manually configured. Most routers can be easily overwhelmed by a DoS attack, and adding rules to export flow statistics from a router during an attack slows it down further and complicates matters. Cisco IOS has features that prevent flooding.[34]
Blackholing/Sinkholing
With blackholing, all traffic to the attacked DNS name or IP address is sent to a "black hole" (a null interface, a non-existent server, etc.). To be more efficient and avoid affecting network connectivity, blackholing can be managed by the ISP.[39] Sinkholing routes traffic to a valid IP address which analyzes it and rejects bad packets. Sinkholing is not efficient against the most severe attacks.
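The difference between the two techniques can be shown with a toy routing decision (all addresses below are made-up documentation addresses, not real infrastructure): blackholed destinations are dropped outright, while sinkholed ones are diverted to an analysis address instead of the real server.

```python
# Hypothetical route tables for the illustration.
BLACKHOLE = {"192.0.2.10"}                   # null-routed: traffic disappears
SINKHOLE = {"192.0.2.20": "198.51.100.99"}   # diverted to an analysis host

def route(dst_ip: str) -> str:
    """Return "drop", a diverted next hop, or the original destination."""
    if dst_ip in BLACKHOLE:
        return "drop"
    return SINKHOLE.get(dst_ip, dst_ip)

print(route("192.0.2.10"))  # drop
print(route("192.0.2.20"))  # 198.51.100.99
print(route("192.0.2.30"))  # 192.0.2.30
```

The trade-off the text describes is visible here: blackholing is cheap but indiscriminately drops legitimate traffic to the victim address, while sinkholing preserves the ability to inspect and selectively reject traffic at the cost of carrying it.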
Clean pipes
All traffic is passed through a "cleaning center" via a proxy, which separates out "bad" traffic (DDoS and also other common internet attacks) and forwards only good traffic to the server. The provider needs central connectivity to the Internet to manage this kind of service.[40] Prolexic and Verisign are examples of providers of this service.[41] [42]
See also
Billion laughs
Black fax
Cybercrime
Dosnet
Industrial espionage
Intrusion-detection system
Network intrusion detection system
Regular expression Denial of Service - ReDoS
Wireless signal jammer
External links
RFC 4732 Internet Denial-of-Service Considerations
W3C The World Wide Web Security FAQ [45]
Understanding and surviving DDoS attacks [46]
cert.org [47], CERT's guide to DoS attacks.
ATLAS Summary Report [48], real-time global report of DDoS attacks.
linuxsecurity.com [49], an article on preventing DDoS attacks.
Is Your PC a Zombie? [50], About.com.
References
[1] Fledel, Yuval; Kanonov, Uri; Elovici, Yuval; Dolev, Shlomi; Glezer, Chanan. "Google Android: A Comprehensive Security Assessment". IEEE Security & Privacy (IEEE) (in press). doi:10.1109/MSP.2010.2. ISSN 1540-7993.
[2] Phillip Boyle (2000). "SANS Institute - Intrusion Detection FAQ: Distributed Denial of Service Attack Tools" (http://www.sans.org/resources/idfaq/trinoo.php). SANS Institute. Retrieved May 2, 2008.
[3] Mindi McDowell (2007). "Cyber Security Tip ST04-015" (http://www.us-cert.gov/cas/tips/ST04-015.html). United States Computer Emergency Readiness Team. Retrieved May 2, 2008.
[4] "Types of DDoS Attacks" (http://anml.iu.edu/ddos/types.html). 2001. Retrieved May 2, 2008.
[5] "CERT Advisory CA-1997-28 IP Denial-of-Service Attacks" (http://www.cert.org/advisories/CA-1997-28.html). CERT. 1998. Retrieved May 2, 2008.
[6] http://www.zdnet.com/blog/security/windows-7-vista-exposed-to-teardrop-attack/4222
[7] http://www.microsoft.com/technet/security/advisory/975497.mspx
[8] Leyden, John (2008-05-21). "Phlashing attack thrashes embedded systems" (http://www.theregister.co.uk/2008/05/21/phlashing/). theregister.co.uk. Retrieved 2009-03-07.
[9] "Permanent Denial-of-Service Attack Sabotages Hardware" (http://www.darkreading.com/document.asp?doc_id=154270&WT.svl=news1_1). Dark Reading. 2008. Retrieved May 19, 2008.
[10] "EUSecWest Applied Security Conference: London, U.K." (http://eusecwest.com/speakers.html#Smith). EUSecWest. 2008.
[11] http://eusecwest.com
[12] The "stacheldraht" distributed denial of service attack tool (http://staff.washington.edu/dittrich/misc/stacheldraht.analysis.txt)
[13] US credit card firm fights DDoS attack (http://www.theregister.co.uk/2004/09/23/authorize_ddos_attack/)
[14] Paxson, Vern (2001), An Analysis of Using Reflectors for Distributed Denial-of-Service Attacks (http://www.icir.org/vern/papers/reflectors.CCR.01/reflectors.html)
[15] Vaughn, Randal and Evron, Gadi (2006), DNS Amplification Attacks (http://www.isotf.org/news/DNS-Amplification-Attacks.pdf)
[16] Encyclopaedia Of Information Technology. Atlantic Publishers & Distributors. 2007. p. 397. ISBN 8126907525.
[17] Schwabach, Aaron (2006). Internet and the Law. ABC-CLIO. p. 325. ISBN 1851097317.
[18] Lu, Xicheng; Wei Zhao (2005). Networking and Mobile Computing. Birkhäuser. p. 424. ISBN 3540281029.
[19] "YouTube sued by sound-alike site" (http://news.bbc.co.uk/2/hi/business/6108502.stm). BBC News. 2006-11-02.
[20] "RFC 3552 - Guidelines for Writing RFC Text on Security Considerations" (http://www.faqs.org/rfcs/rfc3552.html). July 2003.
[21] January 2001 thread on the UNISOG mailing list (http://staff.washington.edu/dittrich/misc/ddos/register.com-unisog.txt)
[22] Honeynet Project Reverse Challenge (http://old.honeynet.org/reverse/index.html)
[23] "Factsheet - Root server attack on 6 February 2007" (http://www.icann.org/announcements/factsheet-dns-attack-08mar07.pdf). ICANN. 2007-03-01. Retrieved 2009-08-01.
[24] Markoff, John. "Before the Gunfire, Cyberattacks" (http://www.nytimes.com/2008/08/13/technology/13cyber.html?em). The New York Times. Retrieved 2008-08-12.
[25] http://www.ahmadinejad.ir/
[26] Shachtman, Noah (2009-06-15). "Activists Launch Hack Attacks on Tehran Regime" (http://www.wired.com/dangerroom/2009/06/activists-launch-hack-attacks-on-tehran-regime/). Wired. Retrieved 2009-06-15.
[27] Outpouring of searches for the late Michael Jackson (http://googleblog.blogspot.com/2009/06/outpouring-of-searches-for-late-michael.html), June 26, 2009, Official Google Blog
[28] Pirate Bay Hit With DDoS Attack After "Selling Out" (http://www.tomshardware.com/news/Pirate-Bay-DDoS-Sell-Out,8173.html), July 1, 2009, by Jane McEntegart, Tom's Hardware
[29] Ongoing denial-of-service attack (http://status.twitter.com/post/157191978/ongoing-denial-of-service-attack), August 6, 2009, Twitter Status Blog
[30] Facebook Down. Twitter Down. Social Media Meltdown. (http://mashable.com/2009/08/06/facebook-down-3/), August 6, 2009, by Pete Cashmore, Mashable
[31] Wortham, Jenna; Kramer, Andrew E. (August 8, 2009). "Professor Main Target of Assault on Twitter" (http://www.nytimes.com/2009/08/08/technology/internet/08twitter.html?_r=1&hpw). New York Times. Retrieved 2009-08-07.
[32] Managing WLAN Risks with Vulnerability Assessment (http://www.airmagnet.com/assets/whitepaper/WLAN_Vulnerabilities_White_Paper.pdf), August 5, 2008, by Lisa Phifer, Core Competence, Inc., Technology Whitepaper, AirMagnet, Inc.
[33] Froutan, Paul (June 24, 2004). "How to defend against DDoS attacks" (http://www.computerworld.com/s/article/94014/How_to_defend_against_DDoS_attacks). Computerworld. Retrieved May 15, 2010.
[34] "Some IoS tips for Internet Service (Providers)" (http://mehmet.suzen.googlepages.com/qos_ios_dos_suzen2005.pdf) (Mehmet Suzen)
[35] http://www.juniper.net/products_and_services/ex_series/index.html
[36] http://www.networktest.com
[37] http://www.networkworld.com/reviews/2008/071408-test-juniper-switch.html
[38] http://www.networkworld.com/reviews/2008/071408-test-juniper-switch-how.html
[39] Distributed Denial of Service Attacks (http://www.cisco.com/web/about/ac123/ac147/archived_issues/ipj_7-4/dos_attacks.html), by Charalampos Patrikakis, Michalis Masikos, and Olga Zouraraki, The Internet Protocol Journal, Volume 7, Number 4, National Technical University of Athens, Cisco Systems Inc.
[40] "DDoS Mitigation via Regional Cleaning Centers (Jan 2004)" (https://research.sprintlabs.com/publications/uploads/RR04-ATL-013177.pdf)
[41] "VeriSign Rolls Out DDoS Monitoring Service" (http://www.darkreading.com/securityservices/security/intrusion-prevention/showArticle.jhtml?articleID=219900002)
[42] "DDoS: A Threat That's More Common Than You Think" (http://developertutorials-whitepapers.tradepub.com/free/w_verb09/prgm.cgi)
[43] http://www.caida.org/publications/animations/
[44] U.K. outlaws denial-of-service attacks (http://news.cnet.com/U.K.-outlaws-denial-of-service-attacks/2100-7348_3-6134472.html), November 10, 2006, by Tom Espiner, CNET News
[45] http://www.w3.org/Security/Faq/wwwsf6.html
[46] http://www.armoraid.com/survive/
[47] http://www.cert.org/tech_tips/denial_of_service.html
[48] http://atlas.arbor.net/summary/dos
[49] http://www.linuxsecurity.com/content/view/121960/49/
[50] http://antivirus.about.com/od/whatisavirus/a/zombiepc.htm
143
Spam (electronic)
Spam is the use of electronic messaging systems (including most broadcast media and digital delivery systems) to send unsolicited bulk messages indiscriminately. While the most widely recognized form of spam is e-mail spam, the term is applied to similar abuses in other media: instant messaging spam, Usenet newsgroup spam, Web search engine spam, spam in blogs, wiki spam, online classified ads spam, mobile phone messaging spam, Internet forum spam, junk fax transmissions, social networking spam, television advertising, and file-sharing network spam.
Spamming remains economically viable because advertisers have no operating costs beyond the management of their mailing lists, and it is difficult to hold senders accountable for their mass mailings. Because the barrier to entry is so low, spammers are numerous, and the volume of unsolicited mail has become very high. The costs, such as lost productivity and fraud, are borne by the public and by Internet service providers, which have been forced to add extra capacity to cope with the deluge. Spamming is universally reviled, and has been the subject of legislation in many jurisdictions.[1] People who create electronic spam are called spammers.[2]
In different media
E-mail
E-mail spam, known as unsolicited bulk e-mail (UBE), junk mail, or unsolicited commercial e-mail (UCE), is the practice of sending unwanted e-mail messages, frequently with commercial content, in large quantities to an indiscriminate set of recipients. Spam in e-mail started to become a problem when the Internet was opened up to the general public in the mid-1990s. It grew exponentially over the following years, and today makes up some 80 to 85% of all e-mail in the world, by a "conservative estimate".[3]

Pressure to make e-mail spam illegal has been successful in some jurisdictions, but less so in others. Spammers take advantage of this fact, and frequently outsource parts of their operations to countries where spamming will not get them into legal trouble.

Increasingly, e-mail spam is sent via "zombie networks": networks of virus- or worm-infected personal computers in homes and offices around the globe. Many modern worms install a backdoor that allows the spammer to access the computer and use it for malicious purposes. This complicates attempts to control the spread of spam, as in many cases the spam does not even originate from the spammer. In November 2008 the ISP McColo, which was providing service to botnet operators, was depeered, and spam volume dropped by 50-75% Internet-wide. At the same time, it is becoming clear that malware authors, spammers, and phishers are learning from each other, and possibly forming various kinds of partnerships.

An industry of e-mail address harvesting is dedicated to collecting e-mail addresses and selling compiled databases.[4] Some of these address-harvesting approaches rely on users not reading the fine print of agreements, resulting in their agreeing to send messages indiscriminately to their contacts. This is a common approach in social networking spam such as that generated by the social networking site Quechup.[5]
Instant Messaging
Instant messaging spam, known also as spim (a portmanteau of spam and IM, short for instant messaging), makes use of instant messaging systems. Although less ubiquitous than its e-mail counterpart, spim is reaching more users all the time. According to a report from Ferris Research, 500 million spam IMs were sent in 2003, twice the level of 2002. Because instant messaging tends not to be blocked by firewalls, it is an especially useful channel for spammers. One way to protect yourself against spim is to accept messages only from people on your friends list. Many e-mail services now offer spam filtering ("junk mail" folders), and some instant messaging providers, such as BT Yahoo, offer hints and tips on avoiding e-mail spam and spim.
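The friends-list protection described above amounts to a simple allow-list filter: a message is accepted only if its sender is a known contact. A minimal sketch in Python follows; all names and the API shape are illustrative assumptions, not any real IM client's interface.

```python
# Minimal allow-list filter, as used by IM friends lists.
# Everything here is an illustrative sketch; real IM clients
# expose richer contact and messaging APIs.

def make_allowlist_filter(friends):
    """Return a predicate that is True only for senders on the friends list."""
    # Normalize case so "Bob" and "bob" are treated as the same contact.
    allowed = {name.lower() for name in friends}

    def is_allowed(sender):
        return sender.lower() in allowed

    return is_allowed

accept = make_allowlist_filter(["alice", "Bob"])
print(accept("bob"))        # True: Bob is on the friends list
print(accept("spammer01"))  # False: unknown sender, message dropped
```

The design point is that an allow-list inverts the usual spam-filtering problem: instead of trying to recognize spam (an arms race), it only recognizes known-good senders, at the cost of blocking legitimate strangers.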
Mobile phone
Mobile phone spam is directed at the text messaging service of a mobile phone. This can be especially irritating to customers, not only because of the inconvenience but also because, in some markets, they may be charged a fee per text message received. The term "SpaSMS" was coined at the adnews website Adland in 2000 to describe spam SMS.
Noncommercial forms
E-mail and other forms of spamming have been used for purposes other than advertisements. Many early Usenet spams were religious or political. Serdar Argic, for instance, spammed Usenet with historical revisionist screeds. A number of evangelists have spammed Usenet and e-mail media with preaching messages. A growing number of criminals are also using spam to perpetrate various sorts of fraud,[8] and in some cases have used it to lure people to locations where they have been kidnapped, held for ransom, and even murdered.[9]
Geographical origins
A 2009 Cisco Systems report lists the origin of spam by country as follows (trillions of spam messages per year):[10]
1. Brazil: 7.7
2. USA: 6.6
3. India: 3.6
4. South Korea: 3.1
5. Turkey: 2.6
6. Vietnam: 2.5
7. China: 2.4
History
Pre-Internet
In the late 19th century, Western Union allowed telegraphic messages on its network to be sent to multiple destinations. The first recorded instance of a mass unsolicited commercial telegram dates from May 1864.[11] Up until the Great Depression, wealthy North American residents were deluged with nebulous investment offers. This problem never fully emerged in Europe to the degree that it did in the Americas, because telegraphy there was regulated by national post offices.
Etymology
According to the Internet Society and other sources, the term spam is derived from the 1970 Spam sketch of the BBC television comedy series Monty Python's Flying Circus.[12] [13] The sketch is set in a cafe where nearly every item on the menu includes Spam canned luncheon meat. As the waiter recites the Spam-filled menu, a chorus of Viking patrons drowns out all conversation with a song repeating "Spam, Spam, Spam, Spam... lovely Spam! wonderful Spam!", hence "spamming" the dialogue. The excessive amount of Spam mentioned in the sketch is a reference to the preponderance of imported canned meat products in the United Kingdom, particularly corned beef from Argentina, in the years after World War II, as the country struggled to rebuild its agricultural base. Spam captured a large slice of the British market within lower economic classes and became a byword among British schoolboys of the 1960s for low-grade fodder due to its commonality, monotonous taste and cheap price - hence the humour of the Python sketch.

In the 1980s the term was adopted to describe certain abusive users who frequented BBSs and MUDs, who would repeat "Spam" a huge number of times to scroll other users' text off the screen.[14] In early chat-room services such as PeopleLink and the early days of AOL, users actually flooded the screen with quotes from the Monty Python Spam sketch. With internet connections over phone lines, typically running at 1200 or even 300 baud, it could take an enormous amount of time for a spammy logo, drawn in ASCII art, to scroll to completion on a viewer's terminal. Sending an irritating, large, meaningless block of text in this way was called spamming. This was used as a tactic by insiders of a group that wanted to drive newcomers out of the room so the usual conversation could continue.
It was also used to prevent members of rival groups from chatting; for instance, Star Wars fans often invaded Star Trek chat rooms, filling the space with blocks of text until the Star Trek fans left.[15] This act, previously called flooding or trashing, came to be known as spamming.[16] The term was soon applied to a large amount of text broadcast by many users.

It later came to be used on Usenet to mean excessive multiple posting: the repeated posting of the same message. The unwanted message would appear in many if not all newsgroups, just as Spam appeared in nearly all the menu items in the Monty Python sketch. The first usage of this sense was by Joel Furr[17] in the aftermath of the ARMM incident of March 31, 1993, in which a piece of experimental software released dozens of recursive messages onto the news.admin.policy newsgroup.[18] This use had also become established: to spam Usenet was to flood newsgroups with junk messages. The word was also attributed to the flood of "Make Money Fast" messages that clogged many newsgroups during the 1990s.

In 1998, the New Oxford Dictionary of English, which had previously only defined "spam" in relation to the trademarked food product, added a second definition to its entry for "spam": "Irrelevant or inappropriate messages sent on the Internet to a large number of newsgroups or users."[19]

There are several popular false etymologies of the word "spam". One, promulgated by early spammers Laurence Canter and Martha Siegel, is that "spamming" is what happens when one dumps a can of Spam luncheon meat into a fan blade. Others are the backronyms "shit posing as mail" and "stupid pointless annoying messages".
Trademark issues
Hormel Foods Corporation, the maker of Spam luncheon meat, does not object to the Internet use of the term "spamming". However, it has asked that the capitalized word "Spam" be reserved to refer to its product and trademark.[23] By and large, this request is obeyed in forums which discuss spam.

In Hormel Foods v SpamArrest, Hormel attempted to assert its trademark rights to stop SpamArrest, a software company, from using the mark "spam", since Hormel owns the trademark. In a dilution claim, Hormel argued that Spam Arrest's use of the term "spam" had endangered and damaged "substantial goodwill and good reputation" in connection with its trademarked lunch meat and related products. Hormel also asserted that Spam Arrest's name so closely resembles its luncheon meat that the public might become confused, or might think that Hormel endorses Spam Arrest's products. Hormel did not prevail. Attorney Derek Newman responded on behalf of Spam Arrest: "Spam has become ubiquitous throughout the world to describe unsolicited commercial e-mail. No company can claim trademark rights on a generic term." Hormel stated on its website: "Ultimately, we are trying to avoid the day when the consuming public asks, 'Why would Hormel Foods name its product after junk email?'".[24]

Hormel also made two attempts, both dismissed in 2005, to revoke the marks "SPAMBUSTER"[25] and "Spam Cube".[26] Hormel's corporate attorney Melanie J. Neumann also sent SpamCop's Julian Haight a letter on August 27, 1999, requesting that he delete an objectionable image (a can of Hormel's Spam luncheon meat product in a trash can), change references to UCE spam to all lower-case letters, and confirm his agreement to do so.[27]
Costs
The European Union's Internal Market Commission estimated in 2001 that "junk e-mail" cost Internet users €10 billion per year worldwide.[28] The California legislature found that spam cost United States organizations alone more than $13 billion in 2007, including lost productivity and the additional equipment, software, and manpower needed to combat the problem.[29] Spam's direct effects include the consumption of computer and network resources, and the cost in human time and attention of dismissing unwanted messages.[30]
In addition, spam has costs stemming from the kinds of spam messages sent, from the ways spammers send them, and from the arms race between spammers and those who try to stop or control spam. There is also the opportunity cost incurred by those who forgo the use of spam-afflicted systems, as well as the direct and indirect costs borne by the victims, both those related to the spamming itself and to the other crimes that usually accompany it, such as financial theft, identity theft, data and intellectual property theft, virus and other malware infection, child pornography, fraud, and deceptive marketing. The cost to providers of search engines is not insignificant: "The secondary consequence of spamming is that search engine indexes are inundated with useless pages, increasing the cost of each processed query."[2]

The methods of spammers are likewise costly. Because spamming contravenes the vast majority of ISPs' acceptable-use policies, most spammers have for many years gone to some trouble to conceal the origins of their spam. E-mail, Usenet, and instant-message spam are often sent through insecure proxy servers belonging to unwilling third parties. Spammers frequently use false names, addresses, phone numbers, and other contact information to set up "disposable" accounts at various Internet service providers. In some cases, they have used falsified or stolen credit card numbers to pay for these accounts. This allows them to move quickly from one account to the next as each one is discovered and shut down by the host ISPs.

The costs of spam also include the collateral costs of the struggle between spammers and the administrators and users of the media threatened by spamming.[31] Many users are bothered by spam because it impinges upon the amount of time they spend reading their e-mail. Many also find the content of spam frequently offensive, in that pornography is one of the most frequently advertised products.
Spammers send their spam largely indiscriminately, so pornographic ads may show up in a workplace e-mail inbox, or a child's, the latter of which is illegal in many jurisdictions. Recently, there has been a noticeable increase in spam advertising websites that contain child pornography.[32]

Some spammers argue that most of these costs could potentially be alleviated by having spammers reimburse ISPs and individuals for their material. There are three problems with this logic: first, the rate of reimbursement they could credibly budget is not nearly high enough to pay the direct costs; second, the human cost (lost mail, lost time, and lost opportunities) is basically unrecoverable; and third, spammers often use stolen bank accounts and credit cards to finance their operations, and would conceivably do so to pay off any fines imposed.

E-mail spam exemplifies a tragedy of the commons: spammers use resources (both physical and human) without bearing the entire cost of those resources. In fact, spammers commonly do not bear the cost at all. This raises the costs for everyone. In some ways spam is even a potential threat to the entire e-mail system, as operated in the past.

Since e-mail is so cheap to send, a tiny number of spammers can saturate the Internet with junk mail. Although only a tiny percentage of their targets are motivated to purchase their products (or fall victim to their scams), the low cost may provide a sufficient conversion rate to keep the spamming alive. Furthermore, even though spam appears not to be economically viable as a way for a reputable company to do business, it suffices for professional spammers to convince a tiny proportion of gullible advertisers that it is viable for those spammers to stay in business. Finally, new spammers go into business every day, and the low costs allow a single spammer to do a lot of harm before finally realizing that the business is not profitable.
Some companies and groups "rank" spammers; spammers who make the news are sometimes referred to by these rankings.[33] [34] The secretive nature of spamming operations makes it difficult to determine how prolific an individual spammer is, and thus makes the spammer hard to track, block, or avoid. Also, spammers may target different networks to different extents, depending on how successful they are at attacking the target. Thus considerable resources are employed to actually measure the amount of spam generated by a single person or group. For example, victims that use common anti-spam hardware, software, or services provide opportunities for such tracking. Nevertheless, such rankings should be taken with a grain of salt.
General costs
In all the cases listed above, both commercial and non-commercial, "spam happens" because the spammer's cost-benefit analysis comes out positive once the cost to recipients is excluded as an externality the spammer can avoid paying. Cost is the combination of:
- Overhead: the costs of electronic spamming, including bandwidth, developing or acquiring an email/wiki/blog spam tool, taking over or acquiring a host/zombie, and so on.
- Transaction cost: the incremental cost of contacting each additional recipient once a method of spamming is constructed, multiplied by the number of recipients (see CAPTCHA as a method of increasing transaction costs).
- Risks: the chance and severity of legal and/or public reactions, including damages and punitive damages.
- Damage: the impact on the community and/or communication channels being spammed (see Newsgroup spam).

Benefit is the total expected profit from spam, which may include any combination of the commercial and non-commercial reasons listed above. It is normally linear, based on the incremental benefit of reaching each additional spam recipient, combined with the conversion rate. The conversion rate for botnet-generated spam has recently been measured at around one in 12,000,000 for pharmaceutical spam and one in 200,000 for infection sites as used by the Storm botnet.[35]

Spam is prevalent on the Internet because the transaction cost of electronic communications is radically lower than that of any alternative form of communication, far outweighing the current potential losses, as seen by the amount of spam currently in existence. Spam continues to spread to new forms of electronic communication as the gain (number of potential recipients) increases to levels where the cost/benefit analysis becomes positive. Spam has most recently evolved to include wiki spam and blog spam as their readership has risen to levels where the overhead is no longer the dominating factor.
According to the above analysis, spam levels will continue to increase until the cost/benefit analysis is balanced.
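The cost/benefit breakdown above can be turned into a back-of-the-envelope model. In the sketch below, only the one-in-12,000,000 conversion rate comes from the measurements cited in the text; the campaign size, overhead, per-message cost, and revenue per sale are purely illustrative assumptions.

```python
# Back-of-the-envelope spam economics following the breakdown above:
#   cost    = overhead + cost_per_message * recipients
#   benefit = recipients * conversion_rate * revenue_per_conversion
# Only the conversion rate is sourced from the text; every other
# number below is an illustrative assumption.

def spam_profit(recipients, overhead, cost_per_message,
                conversion_rate, revenue_per_conversion):
    cost = overhead + cost_per_message * recipients
    benefit = recipients * conversion_rate * revenue_per_conversion
    return benefit - cost

profit = spam_profit(
    recipients=100_000_000,          # assumed campaign size
    overhead=1_000.0,                # assumed: spam tool + zombie hosts
    cost_per_message=0.000001,       # assumed near-zero botnet delivery cost
    conversion_rate=1 / 12_000_000,  # measured rate for pharmaceutical spam
    revenue_per_conversion=500.0,    # assumed revenue per sale
)
print(f"expected profit: ${profit:,.2f}")
```

Because the per-message transaction cost is nearly zero, the outcome is dominated by campaign size and conversion rate; measures that raise the transaction cost (such as CAPTCHAs, mentioned above) are what push the analysis negative.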
In crime
Spam can be used to spread computer viruses, trojan horses or other malicious software. The objective may be identity theft, or worse (e.g., advance-fee fraud). Some spam attempts to capitalize on human greed, while other spam attempts to exploit the victims' inexperience with computer technology to trick them (e.g., phishing). On May 31, 2007, one of the world's most prolific spammers, Robert Alan Soloway, was arrested by U.S. authorities.[36] Described as one of the top ten spammers in the world, Soloway was charged with 35 criminal counts, including mail fraud, wire fraud, e-mail fraud, aggravated identity theft, and money laundering.[36] Prosecutors allege that Soloway used millions of "zombie" computers to distribute spam during 2003. This is the first case in which U.S. prosecutors used identity theft laws to prosecute a spammer for taking over someone else's Internet domain name.
Political issues
Spamming remains a hot discussion topic. In 2004, the seized Porsche of an indicted spammer was advertised on the Internet;[37] this revealed the extent of the financial rewards available to those who are willing to commit duplicitous acts online. However, some of the possible means used to stop spamming may lead to other side effects, such as increased government control over the Internet, loss of privacy, barriers to free expression, and the commercialization of e-mail. One of the chief values favored by many long-time Internet users and experts, as well as by many members of the public, is the free exchange of ideas. Many have valued the relative anarchy of the Internet, and bridle at the idea of restrictions placed upon it. A common refrain from spam-fighters is that spamming itself abridges the historical freedom of the Internet, by attempting to force users to carry the costs of material which they would not choose. An ongoing concern expressed by parties such as the Electronic Frontier Foundation and the ACLU has to do with so-called "stealth blocking", a term for ISPs employing aggressive spam blocking without their users' knowledge.
These groups' concern is that ISPs or technicians seeking to reduce spam-related costs may select tools which (either through error or design) also block non-spam e-mail from sites seen as "spam-friendly". SPEWS is a common target of these criticisms. Few object to the existence of these tools; it is their use in filtering the mail of users who are not informed of their use which draws fire.

Some see spam-blocking tools as a threat to free expression, and laws against spamming as an untoward precedent for regulation or taxation of e-mail and the Internet at large. Even though it is possible in some jurisdictions to treat some spam as unlawful merely by applying existing laws against trespass and conversion, some laws specifically targeting spam have been proposed. In 2004, the United States passed the CAN-SPAM Act of 2003, which provided ISPs with tools to combat spam. This act allowed Yahoo! to successfully sue Eric Head, reportedly one of the biggest spammers in the world, who settled the lawsuit for several thousand U.S. dollars in June 2004. But the law is criticized by many for not being effective enough. Indeed, the law was supported by some spammers and organizations which support spamming, and opposed by many in the anti-spam community. Examples of effective anti-abuse laws that respect free speech rights include those in the U.S. against unsolicited faxes and phone calls, and those in Australia and a few U.S. states against spam.

In November 2004, Lycos Europe released a screen saver called "make LOVE not SPAM" which made distributed denial-of-service attacks on the spammers themselves. It met with a large amount of controversy and the initiative ended in December 2004. While most countries either outlaw or at least ignore spam, Bulgaria is the first and so far the only one to partially legalize it.
According to recent changes in the Bulgarian E-Commerce Act, anyone may send spam to mailboxes owned by a company or organization, as long as the message body warns that it may be unsolicited commercial e-mail. The law contains many other inadequate provisions; for example, it creates a nationwide public electronic register of e-mail addresses that do not want to receive spam, something valuable only as a source for e-mail address harvesting. Anti-spam policies may also be a form of disguised censorship, a way for an institution to ban access or reference to dissenting alternative forums or blogs. This form of covert censorship is mainly used by private companies when they cannot muzzle criticism through legal means.[38]
Court cases
United States
Sanford Wallace and Cyber Promotions were the target of a string of lawsuits, many of which were settled out of court, up through the famous 1998 Earthlink settlement which put Cyber Promotions out of business. Attorney Laurence Canter was disbarred by the Tennessee Supreme Court in 1997 for sending prodigious amounts of spam advertising his immigration law practice. In 2005, Jason Smathers, a former America Online employee, pled guilty to charges of violating the CAN-SPAM Act. In 2003, he sold a list of approximately 93 million AOL subscriber e-mail addresses to Sean Dunaway who, in turn, sold the list to spammers.[39] [40] In 2007, Robert Soloway lost a case in a federal court against the operator of a small Oklahoma-based Internet service provider who accused him of spamming. U.S. Judge Ralph G. Thompson granted a motion by plaintiff Robert Braver for a default judgment and permanent injunction against him. The judgment includes a statutory damages award of $10,075,000 under Oklahoma law.[41] In June 2007, two men were convicted of eight counts stemming from sending millions of e-mail spam messages that included hardcore pornographic images. Jeffrey A. Kilbride, 41, of Venice, California was sentenced to six years in prison, and James R. Schaffer, 41, of Paradise Valley, Arizona, was sentenced to 63 months. In addition, the two were fined $100,000, ordered to pay $77,500 in restitution to AOL, and ordered to forfeit more than $1.1 million, the amount of illegal proceeds from their spamming operation.[42] The charges included conspiracy, fraud, money laundering, and transportation of obscene materials. The trial, which began on June 5, was the first to include
charges under the CAN-SPAM Act of 2003, according to a release from the Department of Justice. The specific law that prosecutors used under the CAN-SPAM Act was designed to crack down on the transmission of pornography in spam.[43]

In 2005, Scott J. Filary and Donald E. Townsend of Tampa, Florida were sued by Florida Attorney General Charlie Crist for violating the Florida Electronic Mail Communications Act.[44] The two spammers were required to pay $50,000 USD to cover the costs of investigation by the state of Florida, and a $1.1 million penalty if spamming were to continue, the $50,000 was not paid, or the financial statements provided were found to be inaccurate. The spamming operation was successfully shut down.[45]

Edna Fiedler, 44, of Olympia, Washington, pleaded guilty in a Tacoma court on June 25, 2008, and was sentenced to 2 years imprisonment and 5 years of supervised release or probation for a $1 million Internet "Nigerian check scam". She conspired to commit bank, wire and mail fraud against US citizens, specifically using the Internet, with an accomplice who shipped counterfeit checks and money orders to her from Lagos, Nigeria, the previous November. Fiedler had shipped $609,000 in fake checks and money orders when arrested and was preparing to send an additional $1.1 million in counterfeit materials. The U.S. Postal Service also recently intercepted counterfeit checks, lottery tickets and eBay overpayment schemes with a face value of $2.1 billion.[46] [47]
United Kingdom
In the first successful case of its kind, Nigel Roberts from the Channel Islands won £270 against Media Logistics UK, who had sent junk e-mails to his personal account.[48]

In January 2007, a Sheriff Court in Scotland awarded Mr. Gordon Dick £750 (then the maximum sum which could be awarded in a Small Claim action) plus expenses of £618.66, a total of £1,368.66, against Transcom Internet Services Ltd.[49] for breaching anti-spam laws.[50] Transcom had been legally represented at earlier hearings but were not represented at the proof, so Gordon Dick got his decree by default. It is the largest amount awarded in compensation in the United Kingdom since the Roberts -v- Media Logistics case in 2005 above, but it is not known if Mr Dick ever received anything. (An image of Media Logistics' cheque is shown on Roberts' website.[51]) Both Roberts and Dick are well-known figures in the British Internet industry for other things: Dick is currently interim chairman of Nominet UK (the manager of .UK and .CO.UK), while Roberts is CEO of CHANNELISLES.NET (manager of .GG and .JE).

Despite the statutory tort that is created by the Regulations implementing the EC Directive, few other people have followed their example. As the courts engage in active case management, such cases would probably now be expected to be settled by mediation and payment of nominal damages.
New Zealand
In October 2008, a vast international internet spam operation run from New Zealand was cited by American authorities as one of the world's largest, and for a time responsible for up to a third of all unwanted e-mails. In a statement the US Federal Trade Commission (FTC) named Christchurch's Lance Atkinson as one of the principals of the operation. New Zealand's Internal Affairs announced it had lodged a $200,000 claim in the High Court against Atkinson and his brother Shane Atkinson and courier Roland Smits, after raids in Christchurch. This marked the first prosecution since the Unsolicited Electronic Messages Act (UEMA) was passed in September 2007. The FTC said it had received more than three million complaints about spam messages connected to this operation, and estimated that it may be responsible for sending billions of illegal spam messages. The US District Court froze the defendants' assets to preserve them for consumer redress pending trial.[52] U.S. co-defendant Jody Smith forfeited more than $800,000 and faces up to five years in prison for charges to which he pleaded guilty.[53]
Newsgroups
news.admin.net-abuse.email
See also
SPAMfighter
Address munging (avoidance technique)
Anti-spam techniques
Bacn (electronic)
E-mail fraud
Identity theft
Image spam
Internet troll
Job scams
Junk mail
List of spammers
Malware
Network Abuse Clearinghouse
Advance fee fraud (Nigerian spam)
Phishing
Scam
Social networking spam
SORBS
Spam
SpamCop
Spamigation
Spam Lit
Spoetry
Sporgery
Virus (computer)
Vishing
History
Howard Carmack
Make money fast
Sanford Wallace
Spam King
UUnet
Usenet Death Penalty
References
Sources
Specter, Michael (2007-08-06). "Damn Spam" [54]. The New Yorker. Retrieved 2007-08-02.
Further reading
Sjouwerman, Stu; Posluns, Jeffrey, "Inside the spam cartel: trade secrets from the dark side" [55], Elsevier/Syngress; 1st edition, November 27, 2004. ISBN 978-1-932266-86-3
External links
Spamtrackers SpamWiki [56]: a peer-reviewed spam information and analysis resource.
Federal Trade Commission page advising people to forward spam e-mail to them [57]
Slamming Spamming Resource on Spam [58]
Why am I getting all this spam? CDT [59]
Cybertelecom: Federal spam law and policy [60]
Reaction to the DEC Spam of 1978 [61]: overview and text of the first known internet e-mail spam.
Malware City - The Spam Omelette [62]: BitDefender's weekly report on spam trends and techniques.
1 December 2009: arrest of a major spammer [63]
EatSpam.org [64]: provides disposable e-mail addresses which expire after 15 minutes; e-mail sent to the temporary address can be read and replied to within that time frame.
References
[1] The Spamhaus Project - The Definition Of Spam (http://www.spamhaus.org/definition.html)
[2] Gyongyi, Zoltan; Garcia-Molina, Hector (2005). "Web spam taxonomy" (http://airweb.cse.lehigh.edu/2005/gyongyi.pdf). Proceedings of the First International Workshop on Adversarial Information Retrieval on the Web (AIRWeb), at the 14th International World Wide Web Conference (WWW 2005), May 10-14, 2005, Nippon Convention Center (Makuhari Messe), Chiba, Japan. New York, N.Y.: ACM Press. ISBN 1-59593-046-9.
[3] http://www.maawg.org/about/MAAWG20072Q_Metrics_Report.pdf
[4] FileOn List Builder - Extract URL, MetaTags, Email, Phone, Fax from www-Optimized Webcrawler (http://www.listdna.com/)
[5] Saul Hansell, Social network launches worldwide spam campaign (http://bits.blogs.nytimes.com/2007/09/13/your-former-boyfriends-mother-wants-to-be-your-friend/) New York Times, September 13, 2007
[6] The (Evil) Genius of Comment Spammers (http://www.wired.com/wired/archive/12.03/google.html?pg=7) - Wired Magazine, March 2004
[7] Fabrício Benevenuto, Tiago Rodrigues, Virgílio Almeida, Jussara Almeida and Marcos Gonçalves. Detecting Spammers and Content Promoters in Online Video Social Networks. In ACM SIGIR Conference, Boston, MA, USA, July 2009. (http://www.dcc.ufmg.br/~fabricio/download/sigirfp437-benevenuto.pdf)
[8] See: Advance fee fraud
[9] SA cops, Interpol probe murder (http://www.news24.com/News24/South_Africa/News/0,,2-7-1442_1641875,00.html) - News24.com, 2004-12-31
[10] Brasil assume a liderança do spam mundial em 2009, diz Cisco ("Brazil takes the world spam lead in 2009, says Cisco") (http://idgnow.uol.com.br/seguranca/2009/12/08/brasil-assume-a-lideranca-do-spam-mundial-em-2009-diz-cisco/) (Portuguese)
[11] "Getting the message, at last" (http://www.economist.com/opinion/PrinterFriendly.cfm?story_id=10286400). The Economist. 2007-12-14.
[12] Internet Society's Internet Engineering Taskforce: A Set of Guidelines for Mass Unsolicited Mailings and Postings (spam*) (http://tools.ietf.org/html/rfc2635)
[13] Origin of the term "spam" to mean net abuse (http://www.templetons.com/brad/spamterm.html)
[14] Origin of the term "spam" to mean net abuse (http://www.templetons.com/brad/spamterm.html)
[15] The Origins of Spam in Star Trek chat rooms (http://www.myshelegoldberg.com/words/item/the-origins-of-spam/)
[16] Spamming? (rec.games.mud) (http://groups.google.com/groups?selm=MAT.90Sep25210959@zeus.organpipe.cs.arizona.edu) Google Groups USENET archive, 1990-09-26
[17] At 30, Spam Going Nowhere Soon (http://www.npr.org/templates/story/story.php?storyId=90160617) - Interviews with Gary Thuerk and Joel Furr
[18] news.bbc.co.uk (http://news.bbc.co.uk/1/hi/technology/7322615.stm)
[19] "Oxford dictionary adds Net terms" on News.com (http://news.com.com/2100-1023-214535.html)
[20] Reaction to the DEC Spam of 1978 (http://www.templetons.com/brad/spamreact.html)
[21] Tom Abate (May 3, 2008). "A very unhappy birthday to spam, age 30". San Francisco Chronicle.
[22] Danchev, Dancho. "Spammers go multilingual, use automatic translation services" (http://blogs.zdnet.com/security/?p=3813&tag=rbxccnbzd1). ZDNet. July 28, 2009. Retrieved on August 31, 2009.
[23] Official SPAM Website (http://www.spam.com/about/internet.aspx)
[24] Hormel Foods v SpamArrest, Motion for Summary Judgment, Redacted Version (PDF) (http://img.spamarrest.com/HormelSummaryJudgment.pdf)
[25] Hormel Foods Corpn v Antilles Landscape Investments NV (2005) EWHC 13 (Ch) (http://www.lawreports.co.uk/WLRD/2005/CHAN/chanjanf0.3.htm)
[26] "Hormel Foods Corporation v. Spam Cube, Inc" (http://ttabvue.uspto.gov/ttabvue/v?pno=91171346&pty=OPP). United States Patent and Trademark Office. Retrieved 2008-02-12.
[27] Letter from Hormel's Corporate Attorney Melanie J. Neumann to SpamCop's Julian Haight (http://www.spamcop.net/images/hormel_letter.gif)
[28] "Data protection: 'Junk' e-mail costs internet users €10 billion a year worldwide - Commission study" (http://europa.eu/rapid/pressReleasesAction.do?reference=IP/01/154&format=HTML&aged=0&language=EN&guiLanguage=en)
[29] California Business and Professions Code (http://www.spamlaws.com/state/ca.shtml)
[30] Spam Cost Calculator: Calculate enterprise spam cost (http://www.commtouch.com/spam-cost-calculator)
[31] Thank the Spammers (http://linxnet.com/misc/spam/thank_spammers.html) - William R. James, 2003-03-10
[32] Fadul, Jose (2010). The EPIC Generation: Experiential, Participative, Image-Driven & Connected. Raleigh, NC: Lulu Press. ISBN 978-0-557-41877-0.
[33] Spamhaus' "TOP 10 spam service ISPs" (http://www.spamhaus.org/statistics/networks.lasso)
[34] The 10 Worst ROKSO Spammers (http://www.spamhaus.org/statistics/spammers.lasso)
[35] Kanich, C.; C. Kreibich, K. Levchenko, B. Enright, G. Voelker, V. Paxson and S. Savage (2008-10-28). "Spamalytics: An Empirical Analysis of Spam Marketing Conversion" (http://www.icsi.berkeley.edu/pubs/networking/2008-ccs-spamalytics.pdf) (PDF). Alexandria, VA, USA. Retrieved 2008-11-05.
[36] Alleged 'Seattle Spammer' arrested - CNET News.com (http://www.news.com/Alleged-Seattle-Spammer-arrested/2100-7348_3-6187754.html)
[37] timewarner.com (http://www.timewarner.com/corp/newsroom/pr/0,20812,670327,00.html)
[38] See for instance the black list of the French Wikipedia encyclopedia
[39] U.S. v Jason Smathers and Sean Dunaway, amended complaint, US District Court for the Southern District of New York (2003). Retrieved 7 March 2007, from http://www.thesmokinggun.com/archive/0623042aol1.html
[40] Ex-AOL employee pleads guilty in spam case. (2005, February 4). CNN. Retrieved 7 March 2007, from http://www.cnn.com/2005/TECH/internet/02/04/aol.spam.plea/
[41] Braver v. Newport Internet Marketing Corporation et al. (http://www.mortgagespam.com/soloway) - U.S. District Court - Western District of Oklahoma (Oklahoma City), 2005-02-22
[42] "Two Men Sentenced for Running International Pornographic Spamming Business" (http://www.usdoj.gov/opa/pr/2007/October/07_crm_813.html). United States Department of Justice. October 12, 2007. Retrieved 2007-10-25.
[43] Gaudin, Sharon, Two Men Convicted Of Spamming Pornography (http://www.informationweek.com/news/showArticle.jhtml?articleID=200000756) InformationWeek, June 26, 2007
[44] "Crist Announces First Case Under Florida Anti-Spam Law" (http://myfloridalegal.com/__852562220065EE67.nsf/0/F978639D46005F6585256FD90050AAC9?Open&Highlight=0,spam). Office of the Florida Attorney General. Retrieved 2008-02-23.
[45] "Crist: Judgment Ends Duo's Illegal Spam, Internet Operations" (http://myfloridalegal.com/__852562220065EE67.nsf/0/F08DE06CB354A7D7852570CF005912A2?Open&Highlight=0,spam). Office of the Florida Attorney General. Retrieved 2008-02-23.
[46] upi.com, Woman gets prison for 'Nigerian' scam (http://www.upi.com/Top_News/2008/06/26/Woman_gets_prison_for_Nigerian_scam/UPI-73791214521169/)
[47] yahoo.com, Woman Gets Two Years for Aiding Nigerian Internet Check Scam (PC World) (http://tech.yahoo.com/news/pcworld/147575)
[48] Businessman wins e-mail spam case (http://news.bbc.co.uk/1/hi/world/europe/jersey/4562726.stm) - BBC News, 2005-12-27
[49] Gordon Dick v Transcom Internet Service Ltd. (http://www.scotchspam.co.uk/transcom.html)
[50] Article 13 - Unsolicited communications (http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32002L0058:EN:HTML)
[51] website (http://www.roberts.co.uk)
[52] Kiwi spam network was 'world's biggest' (http://www.stuff.co.nz/stuff/4729188a28.html)
[53] Court Orders Australia-based Leader of International Spam Network to Pay $15.15 Million (http://www.ftc.gov/opa/2009/11/herbalkings.shtm)
[54] http://www.newyorker.com/reporting/2007/08/06/070806fa_fact_specter
[55] http://books.google.com/books?id=1gsUeCcA7qMC&printsec=frontcover
[56] http://www.spamtrackers.eu/wiki
[57] http://www.ftc.gov/spam/
[58] http://www.uic.edu/depts/accc/newsletter/adn29/spam.html
[59] http://www.cdt.org/speech/spam/030319spamreport.shtml
[60] http://www.cybertelecom.org/spam/
[61] http://www.templetons.com/brad/spamreact.html
[62] http://www.malwarecity.com/site/News/browseBlogsByCategory/46/
[63] http://news.bbc.co.uk/1/hi/technology/8388737.stm
[64] http://www.eatspam.org/
Phishing
In the field of computer security, phishing is the criminally fraudulent process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication. Communications purporting to be from popular social web sites, auction sites, online payment processors or IT administrators are commonly used to lure the unsuspecting public. Phishing is typically carried out by e-mail or instant messaging,[1] and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. Even when using server authentication, it may require tremendous skill to detect that the website is fake. Phishing is an example of social engineering techniques used to fool users,[2] and exploits the poor usability of current web security technologies.[3] Attempts to deal with the growing number of reported phishing incidents include legislation, user training, public awareness, and technical security measures. A phishing technique was described in detail in 1987, and the first recorded use of the term "phishing" was made in 1996. The term is a variant of fishing,[4] probably influenced by phreaking,[5] [6] and alludes to baits used to "catch" financial information and passwords.

[Image: An example of a phishing e-mail, disguised as an official e-mail from a (fictional) bank. The sender is attempting to trick the recipient into revealing confidential information by "confirming" it at the phisher's website. Note the misspelling of the words "received" and "discrepancy"; such mistakes are common in phishing e-mails. Also note that although the URL of the bank's webpage appears to be legitimate, the hyperlink actually points to the phisher's webpage.]
billing information", though even this didn't prevent some people from giving away their passwords and personal information if they read and believed the IM first. A user using both an AIM account and an AOL account from an ISP simultaneously could phish AOL members with relative impunity, as AIM accounts could be used by non-AOL internet members and could not be actioned (i.e., reported to the AOL TOS department for disciplinary action). After 1997, AOL's policy enforcement with respect to phishing and warez became stricter and forced pirated software off AOL servers. AOL simultaneously developed a system to promptly deactivate accounts involved in phishing, often before the victims could respond. The shutting down of the warez scene on AOL caused most phishers to leave the service, and many phishers, often young teens, grew out of the habit.[12]
Phishing techniques
Recent phishing attempts
Phishers are targeting the customers of banks and online payment services. E-mails, supposedly from the Internal Revenue Service, have been used to glean sensitive data from U.S. taxpayers.[16] While the first such examples were sent indiscriminately in the expectation that some would be received by customers of a given bank or service, recent research has shown that phishers may in principle be able to determine which banks potential victims use, and target bogus e-mails accordingly.[17] Targeted versions of phishing have been termed spear phishing.[18] Several recent phishing attacks have been directed specifically at senior executives and other high profile targets within businesses, and the term whaling has been coined for these kinds of attacks.[19]
[Chart: The increase in phishing reports from October 2004 to June 2005.]
Social networking sites are now a prime target of phishing, since the personal details in such sites can be used in identity theft;[20] in late 2006 a computer worm took over pages on MySpace and altered links to direct surfers to websites designed to steal login details.[21] Experiments show a success rate of over 70% for phishing attacks on social networks.[22] The RapidShare file sharing site has been targeted by phishing to obtain a premium account, which removes speed caps on downloads, auto-removal of uploads, waits on downloads, and cooldown times between downloads.[23] Attackers who broke into TD Ameritrade's database (containing all 6.3 million customers' social security numbers, account numbers and email addresses as well as their names, addresses, dates of birth, phone numbers and trading activity) also wanted the account usernames and passwords, so they launched a follow-up spear phishing attack.[24] Almost half of phishing thefts in 2006 were committed by groups operating through the Russian Business Network based in St. Petersburg.[25]
Some people have been victimized by a Facebook scam in which the link is hosted by T35 Web Hosting, and have lost their accounts.[26] There are anti-phishing websites which publish exact messages that have been recently circulating the internet, such as FraudWatch International and Millersmiles. Such sites often provide specific details about the particular messages.[27] [28]
Link manipulation
Most methods of phishing use some form of technical deception designed to make a link in an e-mail (and the spoofed website it leads to) appear to belong to the spoofed organization. Misspelled URLs or the use of subdomains are common tricks used by phishers. In the example URL http://www.yourbank.example.com/, it appears as though the URL will take you to the example section of the yourbank website; actually this URL points to the "yourbank" (i.e. phishing) section of the example website. Another common trick is to make the displayed text for a link (the text between the <A> tags) suggest a reliable destination, when the link actually goes to the phishers' site. For example, a link displayed as http://en.wikipedia.org/wiki/Genuine appears to take you to an article entitled "Genuine", but clicking on it could in fact take you to an article entitled "Deception". In the lower left-hand corner of most browsers you can preview and verify where a link will actually take you.[29] An old method of spoofing used links containing the '@' symbol, originally intended as a way to include a username and password (contrary to the standard).[30] For example, the link http://www.google.com@members.tripod.com/ might deceive a casual observer into believing that it will open a page on www.google.com, whereas it actually directs the browser to a page on members.tripod.com, using a username of www.google.com: the page opens normally, regardless of the username supplied. Such URLs were disabled in Internet Explorer,[31] while Mozilla Firefox[32] and Opera present a warning message and give the option of continuing to the site or cancelling. A further problem with URLs has been found in the handling of internationalized domain names (IDN) in web browsers, which might allow visually identical web addresses to lead to different, possibly malicious, websites.
Despite the publicity surrounding the flaw, known as IDN spoofing[33] or homograph attack,[34] phishers have taken advantage of a similar risk, using open URL redirectors on the websites of trusted organizations to disguise malicious URLs with a trusted domain.[35] [36] [37] Even digital certificates do not solve this problem because it is quite possible for a phisher to purchase a valid certificate and subsequently change content to spoof a genuine website.
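The subdomain and '@' tricks described above can be checked mechanically with a URL parser. The following sketch uses Python's standard urllib.parse on the hypothetical example URLs from the text:

```python
from urllib.parse import urlsplit

# Subdomain trick: despite the familiar-looking "yourbank" prefix,
# the registrable domain here is example.com, not a bank's domain.
parts = urlsplit("http://www.yourbank.example.com/")
print(parts.hostname)   # www.yourbank.example.com

# '@' trick: everything before '@' is userinfo, not the host name.
# A browser following this link actually connects to members.tripod.com.
parts = urlsplit("http://www.google.com@members.tripod.com/")
print(parts.username)   # www.google.com
print(parts.hostname)   # members.tripod.com
```

This is why "preview the link in the status bar" advice only helps if the reader knows to look at the rightmost labels of the host, after any '@' sign.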
Filter evasion
Phishers have used images instead of text to make it harder for anti-phishing filters to detect text commonly used in phishing e-mails.[38]
Website forgery
Once a victim visits the phishing website the deception is not over. Some phishing scams use JavaScript commands in order to alter the address bar.[39] This is done either by placing a picture of a legitimate URL over the address bar, or by closing the original address bar and opening a new one with the legitimate URL.[40] An attacker can even use flaws in a trusted website's own scripts against the victim.[41] These types of attacks (known as cross-site scripting) are particularly problematic, because they direct the user to sign in at their bank or service's own web page, where everything from the web address to the security certificates appears correct. In reality, the link to the website is crafted to carry out the attack, making it very difficult to spot without specialist knowledge. Just such a flaw was used in 2006 against PayPal.[42]
A Universal Man-in-the-middle (MITM) Phishing Kit, discovered in 2007, provides a simple-to-use interface that allows a phisher to convincingly reproduce websites and capture log-in details entered at the fake site.[43] To avoid anti-phishing techniques that scan websites for phishing-related text, phishers have begun to use Flash-based websites. These look much like the real website, but hide the text in a multimedia object.[44]
Phone phishing
Not all phishing attacks require a fake website. Messages that claimed to be from a bank told users to dial a phone number regarding problems with their bank accounts.[45] Once the phone number (owned by the phisher, and provided by a Voice over IP service) was dialed, prompts told users to enter their account numbers and PIN. Vishing (voice phishing) sometimes uses fake caller-ID data to give the appearance that calls come from a trusted organization.[46]
Other techniques
Another attack used successfully is to forward the client to a bank's legitimate website, then to place a popup window requesting credentials on top of the website in a way that makes it appear the bank is requesting this sensitive information.[47] One of the latest phishing techniques is tabnabbing. It takes advantage of the multiple tabs that users keep open, silently redirecting an inactive tab to the affected site.
Anti-phishing
There are several different techniques to combat phishing, including legislation and technology created specifically to protect against phishing.
Social responses
One strategy for combating phishing is to train people to recognize phishing attempts, and to deal with them. Education can be effective, especially where training provides direct feedback.[56] One newer phishing tactic, which uses phishing e-mails targeted at a specific company, known as spear phishing, has been harnessed to train individuals at various locations, including the United States Military Academy at West Point, New York. In a June 2004 experiment with spear phishing, 80% of 500 West Point cadets who were sent a fake e-mail were tricked into revealing personal information.[57]
People can take steps to avoid phishing attempts by slightly modifying their browsing habits. When contacted about an account needing to be "verified" (or any other topic used by phishers), it is a sensible precaution to contact the company from which the e-mail apparently originates to check that the e-mail is legitimate. Alternatively, the address that the individual knows is the company's genuine website can be typed into the address bar of the browser, rather than trusting any hyperlinks in the suspected phishing message.[58] Nearly all legitimate e-mail messages from companies to their customers contain an item of information that is not readily available to phishers. Some companies, for example PayPal, always address their customers by their username in e-mails, so if an e-mail addresses the recipient in a generic fashion ("Dear PayPal customer") it is likely to be an attempt at phishing.[59] E-mails from banks and credit card companies often include partial account numbers. However, recent research[60] has shown that the public do not typically distinguish between the first few digits and the last few digits of an account number, a significant problem since the first few digits are often the same for all clients of a financial institution. People can be trained to have their suspicion aroused if a message does not contain any specific personal information. Phishing attempts in early 2006, however, used personalized information, which makes it unsafe to assume that the presence of personal information alone guarantees that a message is legitimate.[61] Furthermore, another recent study concluded in part that the presence of personal information does not significantly affect the success rate of phishing attacks,[62] which suggests that most people do not pay attention to such details.
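The "generic greeting" heuristic described above, flagging salutations such as "Dear PayPal customer", could be sketched as follows. This is a hypothetical illustration, not the rule set of any real filter:

```python
import re

# Hypothetical heuristic: salutations that address no one in particular,
# e.g. "Dear PayPal customer" or "Dear valued member". The word list is
# an assumption for illustration only.
GENERIC_SALUTATION = re.compile(
    r"^dear\s+(?:\w+\s+)*(?:customer|member|user|client)\b",
    re.IGNORECASE,
)

def has_generic_greeting(body: str) -> bool:
    """Return True if the first non-empty line looks like a generic greeting."""
    for line in body.splitlines():
        line = line.strip()
        if line:
            return bool(GENERIC_SALUTATION.match(line))
    return False

print(has_generic_greeting("Dear PayPal customer,\nYour account needs review."))  # True
print(has_generic_greeting("Dear Jane Smith,\nYour statement is ready."))         # False
```

As the text notes, such a rule is only a weak signal: phishers can personalize messages, and legitimate mail is sometimes generic.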
The Anti-Phishing Working Group, an industry and law enforcement association, has suggested that conventional phishing techniques could become obsolete in the future as people are increasingly aware of the social engineering techniques used by phishers.[63] They predict that pharming and other uses of malware will become more common tools for stealing information. Everyone can help educate the public by encouraging safe practices, and by avoiding dangerous ones. Unfortunately, even well-known players are known to incite users to hazardous behaviour, e.g. by requesting their users to reveal their passwords for third party services, such as email.[64]
Technical responses
Anti-phishing measures have been implemented as features embedded in browsers, as extensions or toolbars for browsers, and as part of website login procedures. The following are some of the main approaches to the problem.

Helping to identify legitimate websites

Most websites targeted for phishing are secure websites, meaning that SSL with strong PKI cryptography is used for server authentication, with the website's URL used as the identifier. In theory it should be possible for the SSL authentication to be used to confirm the site to the user, and this was SSL v2's design requirement and the goal of secure browsing. But in practice, this is easy to trick. The superficial flaw is that the browser's security user interface (UI) is insufficient to deal with today's strong threats. There are three parts to secure authentication using TLS and certificates: indicating that the connection is in authenticated mode, indicating which site the user is connected to, and indicating which authority says it is this site. All three are necessary for authentication, and need to be confirmed by/to the user.

Secure Connection. The standard display for secure browsing from the mid-1990s to mid-2000s was the padlock. In 2005, Mozilla fielded a yellow URL bar as a better indication of the secure connection. This innovation was later reversed due to the EV certificates, which replaced certain certificates providing a high level of organization identity verification with a green display, and other certificates with an extended blue favicon box to the left of the URL bar (in addition to the switch from "http" to "https" in the URL itself).

Which Site. The user is expected to confirm that the domain name in the browser's URL bar is in fact where they intended to go. URLs can be too complex to be easily parsed. Users often do not know or recognise the URL of the legitimate sites they intend to connect to, so that the authentication becomes meaningless.[3] A condition for
meaningful server authentication is to have a server identifier that is meaningful to the user; many ecommerce sites will change the domain names within their overall set of websites, adding to the opportunity for confusion. Simply displaying the domain name for the visited website,[65] as some anti-phishing toolbars do, is not sufficient. Some newer browsers, such as Internet Explorer 8, display the entire URL in grey, with just the domain name itself in black, as a means of assisting users in identifying fraudulent URLs. An alternate approach is the petname extension for Firefox, which lets users type in their own labels for websites, so they can later recognize when they have returned to the site. If the site is not recognised, then the software may either warn the user or block the site outright. This represents user-centric identity management of server identities.[66] Some suggest that a graphical image selected by the user is better than a petname.[67] With the advent of EV certificates, browsers now typically display the organisation's name in green, which is much more visible and is hopefully more consistent with the user's expectations. Unfortunately, browser vendors have chosen to limit this prominent display to EV certificates only, leaving the user to fend for himself with all other certificates.

Who is the Authority. The browser needs to state who the authority is that makes the claim of who the user is connected to. At the simplest level, no authority is stated, and therefore the browser is the authority, as far as the user is concerned. The browser vendors take on this responsibility by controlling a root list of acceptable CAs. This is the current standard practice. The problem with this is that not all certification authorities (CAs) employ equally good or applicable checking, regardless of attempts by browser vendors to control the quality.
Nor do all CAs subscribe to the same model and concept that certificates are only about authenticating ecommerce organisations. "Certificate manufacturing" is the name given to low-value certificates that are delivered on a credit card and an e-mail confirmation; both of these are easily perverted by fraudsters. Hence, a high-value site may be easily spoofed by a valid certificate provided by another CA. This could be because the CA is in another part of the world and is unfamiliar with high-value ecommerce sites, or it could be that no care is taken at all. As the CA is only charged with protecting its own customers, and not the customers of other CAs, this flaw is inherent in the model. The solution to this is that the browser should show, and the user should be familiar with, the name of the authority. This presents the CA as a brand, and allows the user to learn the handful of CAs that she is likely to come into contact with in her country and her sector. The use of brand is also critical to providing the CA with an incentive to improve their checking, as the user will learn the brand and demand good checking for high-value sites. This solution was first put into practice in early IE7 versions, when displaying EV certificates.[68] In that display, the issuing CA is displayed. This was an isolated case, however. There is resistance to CAs being branded on the chrome, resulting in a fallback to the simplest level above: the browser is the user's authority.

Fundamental flaws in the security model of secure browsing

Experiments to improve the security UI have resulted in benefits, but have also exposed fundamental flaws in the security model. The underlying causes for the failure of SSL authentication to be employed properly in secure browsing are many and intertwined.

Security before threat. Because secure browsing was put into place before any threat was evident, the security display lost out in the "real estate wars" of the early browsers.
The original design of Netscape's browser included a prominent display of the name of the site and the CA's name, but these were dropped in the first release. Users are now highly experienced in not checking security information at all.

Click-thru syndrome. Meanwhile, warnings about poorly configured sites continued, and were never downgraded. If a certificate had an error in it (a mismatched domain name, or expiry), the browser would commonly launch a popup to warn the user. As the reason was generally a minor misconfiguration, users learned to bypass the warnings, and now users are accustomed to treating all warnings with the same disdain, resulting in click-thru syndrome. For
example, Firefox 3 has a 4-click process for adding an exception, but it has been shown to be ignored by an experienced user in a real case of MITM. Even today, as the vast majority of warnings are for misconfigurations rather than real MITM attacks, it is hard to see how click-thru syndrome will ever be avoided.

Lack of interest. Another underlying factor is the lack of support for virtual hosting. The specific causes are a lack of support for Server Name Indication in TLS webservers, and the expense and inconvenience of acquiring certificates. The result is that the use of authentication is too rare to be anything but a special case. This has caused a general lack of knowledge and resources in authentication within TLS, which in turn has meant that the attempts by browser vendors to upgrade their security UIs have been slow and lacklustre.

Lateral communications. The security model for the secure browser includes many participants: user, browser vendor, developers, CA, auditor, webserver vendor, ecommerce site, regulators (e.g., FDIC), and security standards committees. There is a lack of communication between the different groups that are committed to the security model. For example, although the understanding of authentication is strong at the protocol level of the IETF committees, this message does not reach the UI groups. Webserver vendors do not prioritise the Server Name Indication (TLS/SNI) fix, not seeing it as a security fix but instead as a new feature. In practice, all participants look to the others as the source of the failures leading to phishing, and hence the local fixes are not prioritised. Matters improved slightly with the CAB Forum, as that group includes browser vendors, auditors and CAs. But the group did not start out in an open fashion, and the result suffered from the commercial interests of the first players, as well as a lack of parity between the participants.
Even today, the CAB Forum is not open, and does not include representation from small CAs, end-users, ecommerce owners, etc.

Standards gridlock. Vendors commit to standards, which results in an outsourcing effect when it comes to security. Although there have been many good experiments in improving the security UI, these have not been adopted because they are not standard, or clash with the standards. Threat models can reinvent themselves in around a month; security standards take around 10 years to adjust.

Venerable CA model. The control mechanisms employed by the browser vendors over the CAs have not been substantially updated; the threat model has. The control and quality process over CAs is insufficiently tuned to the protection of users and the addressing of actual and current threats. Audit processes are in great need of updating. The recent EV Guidelines documented the current model in greater detail, and established a good benchmark, but did not push for any substantial changes to be made.

Browsers alerting users to fraudulent websites

Another popular approach to fighting phishing is to maintain a list of known phishing sites and to check websites against the list. Microsoft's IE7 browser, Mozilla Firefox 2.0, Safari 3.2, and Opera all contain this type of anti-phishing measure.[69] [70] [71] [72] Firefox 2 used Google anti-phishing software. Opera 9.1 uses live blacklists from PhishTank and GeoTrust, as well as live whitelists from GeoTrust.
Some implementations of this approach send the visited URLs to a central service to be checked, which has raised concerns about privacy.[73] According to a report by Mozilla in late 2006, Firefox 2 was found to be more effective than Internet Explorer 7 at detecting fraudulent sites in a study by an independent software testing company.[74] An approach introduced in mid-2006 involves switching to a special DNS service that filters out known phishing domains: this will work with any browser,[75] and is similar in principle to using a hosts file to block web adverts. To mitigate the problem of phishing sites impersonating a victim site by embedding its images (such as logos), several site owners have altered the images to send a message to the visitor that a site may be fraudulent. The image may be moved to a new filename and the original permanently replaced, or a server can detect that the image was not requested as part of normal browsing, and instead send a warning image.[76] [77]
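The list-checking that these browser features and DNS filters perform can be approximated in a few lines: look up the URL's hostname, and each of its parent domains, in a set of known phishing domains. The blacklist entries below are placeholders for illustration, not a real feed:

```python
from urllib.parse import urlsplit

# Placeholder blacklist; real browsers pull feeds from services such as
# PhishTank or Google Safe Browsing rather than a hard-coded set.
BLACKLIST = {"phish.example.net", "bad-bank.example.org"}

def is_blacklisted(url: str) -> bool:
    """Check the URL's hostname and every parent domain against the list."""
    host = urlsplit(url).hostname or ""
    labels = host.split(".")
    # login.phish.example.net -> also check phish.example.net, example.net, net
    return any(".".join(labels[i:]) in BLACKLIST for i in range(len(labels)))

print(is_blacklisted("http://login.phish.example.net/signin"))  # True
print(is_blacklisted("http://www.example.com/"))                # False
```

Checking parent domains matters because phishers routinely add throwaway subdomains in front of a blacklisted domain.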
Augmenting password logins

Bank of America's website[78] [79] is one of several that ask users to select a personal image, and display this user-selected image with any forms that request a password. Users of the bank's online services are instructed to enter a password only when they see the image they selected. However, a recent study suggests few users refrain from entering their passwords when images are absent.[80] [81] In addition, this feature (like other forms of two-factor authentication) is susceptible to other attacks, such as those suffered by Scandinavian bank Nordea in late 2005,[82] and by Citibank in 2006.[83] A similar system, in which an automatically generated "Identity Cue" consisting of a colored word within a colored box is displayed to each website user, is in use at other financial institutions.[84] Security skins[85] [86] are a related technique that involves overlaying a user-selected image onto the login form as a visual cue that the form is legitimate. Unlike the website-based image schemes, however, the image itself is shared only between the user and the browser, and not between the user and the website. The scheme also relies on a mutual authentication protocol, which makes it less vulnerable to attacks that affect user-only authentication schemes.

Eliminating phishing mail

Specialized spam filters can reduce the number of phishing e-mails that reach their addressees' inboxes.
These approaches rely on machine learning and natural language processing to classify phishing e-mails.[87] [88]

Monitoring and takedown

Several companies offer banks and other organizations likely to suffer from phishing scams round-the-clock services to monitor, analyze and assist in shutting down phishing websites.[89] Individuals can contribute by reporting phishing to both volunteer and industry groups,[90] such as PhishTank.[91] Individuals can also contribute by reporting phone phishing attempts to Phone Phishing[92] or the Federal Trade Commission.[93]
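The filtering approach described under "Eliminating phishing mail" can be illustrated with a toy bag-of-words Naive Bayes classifier. The training messages below are invented and far too few for real use; production filters train on large corpora and use many more features than raw word counts:

```python
import math
from collections import Counter

class NaiveBayes:
    """Toy Naive Bayes text classifier with add-one (Laplace) smoothing."""

    def __init__(self):
        self.counts = {"phish": Counter(), "ham": Counter()}
        self.totals = {"phish": 0, "ham": 0}

    def train(self, text, label):
        for tok in text.lower().split():
            self.counts[label][tok] += 1
            self.totals[label] += 1

    def score(self, text, label):
        # Sum of per-token log-likelihoods under the given class.
        vocab = len(set(self.counts["phish"]) | set(self.counts["ham"]))
        return sum(
            math.log((self.counts[label][tok] + 1) / (self.totals[label] + vocab))
            for tok in text.lower().split()
        )

    def classify(self, text):
        return max(("phish", "ham"), key=lambda label: self.score(text, label))

nb = NaiveBayes()
nb.train("verify your account password urgently", "phish")
nb.train("click here to confirm your account details", "phish")
nb.train("meeting notes attached see you tomorrow", "ham")
nb.train("lunch tomorrow at noon", "ham")
print(nb.classify("please verify your password"))  # phish
```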
Legal responses
On January 26, 2004, the U.S. Federal Trade Commission filed the first lawsuit against a suspected phisher. The defendant, a Californian teenager, allegedly created a webpage designed to look like the America Online website, and used it to steal credit card information.[94] Other countries have followed this lead by tracing and arresting phishers. A phishing kingpin, Valdir Paulo de Almeida, was arrested in Brazil for leading one of the largest phishing crime rings, which in two years stole between US$18 million and US$37 million.[95] UK authorities jailed two men in June 2005 for their role in a phishing scam,[96] in a case connected to the U.S. Secret Service Operation Firewall, which targeted notorious "carder" websites.[97] In 2006 eight people were arrested by Japanese police on suspicion of phishing fraud by creating bogus Yahoo Japan Web sites, netting themselves ¥100 million (US$870,000).[98] The arrests continued in 2006 with the FBI Operation Cardkeeper detaining a gang of sixteen in the U.S. and Europe.[99] In the United States, Senator Patrick Leahy introduced the Anti-Phishing Act of 2005 in Congress on March 1, 2005. This bill, had it been enacted into law, would have subjected criminals who created fake web sites and sent bogus e-mails in order to defraud consumers to fines of up to US$250,000 and prison terms of up to five years.[100] The UK strengthened its legal arsenal against phishing with the Fraud Act 2006,[101] which introduces a general offence of fraud that can carry up to a ten-year prison sentence, and prohibits the development or possession of phishing kits with intent to commit fraud.[102] Companies have also joined the effort to crack down on phishing. On March 31, 2005, Microsoft filed 117 federal lawsuits in the U.S. District Court for the Western District of Washington. The lawsuits accuse "John Doe" defendants of obtaining passwords and confidential information.
March 2005 also saw a partnership between Microsoft and the Australian government teaching law enforcement officials how to combat various cyber crimes, including phishing.[103] Microsoft announced a planned further 100 lawsuits outside the U.S. in March 2006,[104] followed by the commencement, as of November 2006, of 129 lawsuits mixing criminal and civil actions.[105] AOL
reinforced its efforts against phishing[106] in early 2006 with three lawsuits[107] seeking a total of US$18 million under the 2005 amendments to the Virginia Computer Crimes Act,[108] [109] and Earthlink has joined in by helping to identify six men subsequently charged with phishing fraud in Connecticut.[110] In January 2007, Jeffrey Brett Goodin of California became the first defendant convicted by a jury under the provisions of the CAN-SPAM Act of 2003. He was found guilty of sending thousands of e-mails to America Online users, while posing as AOL's billing department, which prompted customers to submit personal and credit card information. Facing a possible 101 years in prison for the CAN-SPAM violation and ten other counts including wire fraud, the unauthorized use of credit cards, and the misuse of AOL's trademark, he was sentenced to serve 70 months. Goodin had been in custody since failing to appear for an earlier court hearing and began serving his prison term immediately.[111] [112] [113] [114]
See also
Advanced Persistent Threat
Anti-phishing software
Brandjacking
Certificate authority
Computer hacking
Confidence trick
E-mail spoofing
FBI
In-session phishing
Internet fraud
Pharming
SMiShing
Social engineering
Spy-phishing
Vishing
White collar crime
Wire fraud
External links
Anti-Phishing Working Group [115]
Center for Identity Management and Information Protection [116] Utica College
How the bad guys actually operate [117] Ha.ckers.org Application Security Lab
Plugging the "phishing" hole: legislation versus technology [118] Duke Law & Technology Review
Know Your Enemy: Phishing [119] Honeynet project case study
Banking Scam Revealed [120] forensic examination of a phishing attack on SecurityFocus
The Phishing Guide: Understanding and Preventing Phishing Attacks [121] TechnicalInfo.net
A Profitless Endeavor: Phishing as Tragedy of the Commons [122] Microsoft Corporation
Database for information on phishing sites reported by the public [123] - PhishTank
The Impact of Incentives on Notice and Take-down [124] Computer Laboratory, University of Cambridge (PDF, 344 kB)
One Gang Responsible For Most Phishing Attacks - InternetNews.com [125]
References
[1] Tan, Koon. "Phishing and Spamming via IM (SPIM)" (http:/ / isc. sans. org/ diary. php?storyid=1905). Internet Storm Center. . Retrieved December 5, 2006. [2] Microsoft Corporation. "What is social engineering?" (http:/ / www. microsoft. com/ protect/ yourself/ phishing/ engineering. mspx). . Retrieved August 22, 2007. [3] Jsang, Audun et al.. "Security Usability Principles for Vulnerability Analysis and Risk Assessment." (http:/ / www. unik. no/ people/ josang/ papers/ JAGAM2007-ACSAC. pdf) (PDF). Proceedings of the Annual Computer Security Applications Conference 2007 (ACSAC'07). . Retrieved 2007. [4] "Spam Slayer: Do You Speak Spam?" (http:/ / www. pcworld. com/ article/ id,113431-page,1/ article. html). PCWorld.com. . Retrieved August 16, 2006. [5] ""phishing, n." OED Online, March 2006, Oxford University Press." (http:/ / dictionary. oed. com/ cgi/ entry/ 30004304/ ). Oxford English Dictionary Online. . Retrieved August 9, 2006. [6] "Phishing" (http:/ / itre. cis. upenn. edu/ ~myl/ languagelog/ archives/ 001477. html). Language Log, September 22, 2004. . Retrieved August 9, 2006. [7] Felix, Jerry and Hauck, Chris (September 1987). "System Security: A Hacker's Perspective". 1987 Interex Proceedings 1: 6. [8] ""phish, v." OED Online, March 2006, Oxford University Press." (http:/ / dictionary. oed. com/ cgi/ entry/ 30004303/ ). Oxford English Dictionary Online. . Retrieved August 9, 2006. [9] Ollmann, Gunter. "The Phishing Guide: Understanding and Preventing Phishing Attacks" (http:/ / www. technicalinfo. net/ papers/ Phishing. html). Technical Info. . Retrieved July 10, 2006. [10] "Phishing" (http:/ / www. wordspy. com/ words/ phishing. asp). Word Spy. . Retrieved September 28, 2006. [11] Stutz, Michael (January 29, 1998). "AOL: A Cracker's Paradise?" (http:/ / wired-vig. wired. com/ news/ technology/ 0,1282,9932,00. html). Wired News. . [12] "History of AOL Warez" (http:/ / www. rajuabju. com/ warezirc/ historyofaolwarez. htm). . 
Retrieved September 28, 2006. [13] "GP4.3 - Growth and Fraud Case #3 - Phishing" (https:/ / financialcryptography. com/ mt/ archives/ 000609. html). Financial Cryptography. December 30, 2005. . [14] "In 2005, Organized Crime Will Back Phishers" (http:/ / itmanagement. earthweb. com/ secu/ article. php/ 3451501). IT Management. December 23, 2004. . [15] "The economy of phishing: A survey of the operations of the phishing market" (http:/ / www. firstmonday. org/ issues/ issue10_9/ abad/ ). First Monday. September 2005. . [16] "Suspicious e-Mails and Identity Theft" (http:/ / www. irs. gov/ newsroom/ article/ 0,,id=155682,00. html). Internal Revenue Service. . Retrieved July 5, 2006. [17] "Phishing for Clues" (http:/ / www. browser-recon. info/ ). Indiana University Bloomington. September 15, 2005. . [18] "What is spear phishing?" (http:/ / www. microsoft. com/ athome/ security/ email/ spear_phishing. mspx). Microsoft Security At Home. . Retrieved July 10, 2006. [19] Goodin, Dan (April 17, 2008). "Fake subpoenas harpoon 2,100 corporate fat cats" (http:/ / www. theregister. co. uk/ 2008/ 04/ 16/ whaling_expedition_continues/ . ). The Register. . [20] Kirk, Jeremy (June 2, 2006). "Phishing Scam Takes Aim at [[MySpace.com (http:/ / www. pcworld. com/ resource/ article/ 0,aid,125956,pg,1,RSS,RSS,00. asp)]"]. IDG Network. . [21] "Malicious Website / Malicious Code: MySpace XSS QuickTime Worm" (http:/ / www. websense. com/ securitylabs/ alerts/ alert. php?AlertID=708). Websense Security Labs. . Retrieved December 5, 2006. [22] Tom Jagatic and Nathan Johnson and Markus Jakobsson and Filippo Menczer. "Social Phishing" (http:/ / www. indiana. edu/ ~phishing/ social-network-experiment/ phishing-preprint. pdf) (PDF). To appear in the CACM (October 2007). . Retrieved June 3, 2006. [23] "1-Click Hosting at RapidTec Warning of Phishing!" (http:/ / rapidshare. de/ en/ phishing. html). . Retrieved December 21, 2008. 
[24] "Torrent of spam likely to hit 6.3 million TD Ameritrade hack victims" (http:/ / www. webcitation. org/ 5gY2R1j1g). Archived from the original (http:/ / www. sophos. com/ pressoffice/ news/ articles/ 2007/ 09/ ameritrade. html) on 2009-05-05. . [25] Shadowy Russian Firm Seen as Conduit for [[Cybercrime (http:/ / www. washingtonpost. com/ wp-dyn/ content/ story/ 2007/ 10/ 12/ ST2007101202661. html?hpid=topnews)]], by Brian Krebs, Washington post, October 13, 2007 [26] Phishsos.Blogspot.com (http:/ / phishsos. blogspot. com/ 2010/ 01/ facebook-scam. html) [27] "Millersmiles Home Page" (http:/ / www. millersmiles. co. uk). Oxford Information Services. . Retrieved 2010-01-03. [28] "FraudWatch International Home Page" (http:/ / www. fraudwatchinternational. com). FraudWatch International. . Retrieved 2010-01-03. [29] HSBCUSA.com (http:/ / www. hsbcusa. com/ security/ recognize_fraud. html) [30] Berners-Lee, Tim. "Uniform Resource Locators (URL)" (http:/ / www. w3. org/ Addressing/ rfc1738. txt). IETF Network Working Group. . Retrieved January 28, 2006. [31] Microsoft. "A security update is available that modifies the default behavior of Internet Explorer for handling user information in HTTP and in HTTPS URLs" (http:/ / support. microsoft. com/ kb/ 834489). Microsoft Knowledgebase. . Retrieved August 28, 2005. [32] Fisher, Darin. "Warn when HTTP URL auth information isn't necessary or when it's provided" (https:/ / bugzilla. mozilla. org/ show_bug. cgi?id=232567). Bugzilla. . Retrieved August 28, 2005.
[33] Johanson, Eric. "The State of Homograph Attacks Rev1.1" (http:/ / www. shmoo. com/ idn/ homograph. txt). The Shmoo Group. . Retrieved August 11, 2005. [34] Evgeniy Gabrilovich and Alex Gontmakher (February 2002). "The Homograph Attack" (http:/ / www. cs. technion. ac. il/ ~gabr/ papers/ homograph_full. pdf) (PDF). Communications of the ACM 45(2): 128. . [35] Leyden, John (August 15, 2006). "Barclays scripting SNAFU exploited by phishers" (http:/ / www. theregister. co. uk/ 2006/ 08/ 15/ barclays_phish_scam/ ). The Register. . [36] Levine, Jason. "Goin' phishing with eBay" (http:/ / q. queso. com/ archives/ 001617). Q Daily News. . Retrieved December 14, 2006. [37] Leyden, John (December 12, 2007). "Cybercrooks lurk in shadows of big-name websites" (http:/ / www. theregister. co. uk/ 2007/ 12/ 12/ phishing_redirection/ ). The Register. . [38] Mutton, Paul. "Fraudsters seek to make phishing sites undetectable by content filters" (http:/ / news. netcraft. com/ archives/ 2005/ 05/ 12/ fraudsters_seek_to_make_phishing_sites_undetectable_by_content_filters. html). Netcraft. . Retrieved July 10, 2006. [39] Mutton, Paul. "Phishing Web Site Methods" (http:/ / www. fraudwatchinternational. com/ phishing-fraud/ phishing-web-site-methods/ ). FraudWatch International. . Retrieved December 14, 2006. [40] "Phishing con hijacks browser bar" (http:/ / news. bbc. co. uk/ 1/ hi/ technology/ 3608943. stm). BBC News. April 8, 2004. . [41] Krebs, Brian. "Flaws in Financial Sites Aid Scammers" (http:/ / blog. washingtonpost. com/ securityfix/ 2006/ 06/ flaws_in_financial_sites_aid_s. html). Security Fix. . Retrieved June 28, 2006. [42] Mutton, Paul. "PayPal Security Flaw allows Identity Theft" (http:/ / news. netcraft. com/ archives/ 2006/ 06/ 16/ paypal_security_flaw_allows_identity_theft. html). Netcraft. . Retrieved June 19, 2006. [43] Hoffman, Patrick (January 10, 2007). "RSA Catches Financial Phishing Kit" (http:/ / www. eweek. com/ article2/ 0,1895,2082039,00. asp). eWeek. . 
[44] Miller, Rich. "Phishing Attacks Continue to Grow in Sophistication" (http:/ / news. netcraft. com/ archives/ 2007/ 01/ 15/ phishing_attacks_continue_to_grow_in_sophistication. html). Netcraft. . Retrieved December 19, 2007. [45] Gonsalves, Antone (April 25, 2006). "Phishers Snare Victims With VoIP" (http:/ / www. techweb. com/ wire/ security/ 186701001). Techweb. . [46] "Identity thieves take advantage of VoIP" (http:/ / www. silicon. com/ research/ specialreports/ voip/ 0,3800004463,39128854,00. htm). Silicon.com. March 21, 2005. . [47] "Internet Banking Targeted Phishing Attack" (http:/ / www. met. police. uk/ fraudalert/ docs/ internet_bank_fraud. pdf). Metropolitan Police Service. 2005-06-03. . Retrieved 2009-03-22. [48] Kerstein, Paul (July 19, 2005). "How Can We Stop Phishing and Pharming Scams?" (http:/ / www. csoonline. com/ talkback/ 071905. html). CSO. . [49] McCall, Tom (December 17, 2007). "Gartner Survey Shows Phishing Attacks Escalated in 2007; More than $3 Billion Lost to These Attacks" (http:/ / www. gartner. com/ it/ page. jsp?id=565125). Gartner. . [50] "A Profitless Endeavor: Phishing as Tragedy of the Commons" (http:/ / research. microsoft. com/ ~cormac/ Papers/ PhishingAsTragedy. pdf) (PDF). Microsoft. . Retrieved November 15, 2008. [51] "UK phishing fraud losses double" (http:/ / www. finextra. com/ fullstory. asp?id=15013). Finextra. March 7, 2006. . [52] Richardson, Tim (May 3, 2005). "Brits fall prey to phishing" (http:/ / www. theregister. co. uk/ 2005/ 05/ 03/ aol_phishing/ ). The Register. . [53] Miller, Rich. "Bank, Customers Spar Over Phishing Losses" (http:/ / news. netcraft. com/ archives/ 2006/ 09/ 13/ bank_customers_spar_over_phishing_losses. html). Netcraft. . Retrieved December 14, 2006. [54] "Latest News" (http:/ / applications. boi. com/ updates/ Article?PR_ID=1430). . [55] "Bank of Ireland agrees to phishing refunds vnunet.com" (http:/ / www. vnunet. com/ vnunet/ news/ 2163714/ bank-ireland-backtracks). . 
[56] Ponnurangam Kumaraguru, Yong Woo Rhee, Alessandro Acquisti, Lorrie Cranor, Jason Hong and Elizabeth Nunge (November 2006). "Protecting People from Phishing: The Design and Evaluation of an Embedded Training Email System" (http:/ / www. cylab. cmu. edu/ files/ cmucylab06017. pdf) (PDF). Technical Report CMU-CyLab-06-017, CyLab, Carnegie Mellon University.. . Retrieved November 14, 2006. [57] Bank, David (August 17, 2005). "'Spear Phishing' Tests Educate People About Online Scams" (http:/ / online. wsj. com/ public/ article/ 0,,SB112424042313615131-z_8jLB2WkfcVtgdAWf6LRh733sg_20060817,00. html?mod=blogs). The Wall Street Journal. . [58] "Anti-Phishing Tips You Should Not Follow" (http:/ / www. hexview. com/ sdp/ node/ 24). HexView. . Retrieved June 19, 2006. [59] "Protect Yourself from Fraudulent Emails" (https:/ / www. paypal. com/ us/ cgi-bin/ webscr?cmd=_vdc-security-spoof-outside). PayPal. . Retrieved July 7, 2006. [60] Markus Jakobsson, Alex Tsow, Ankur Shah, Eli Blevis, Youn-kyung Lim.. "What Instills Trust? A Qualitative Study of Phishing." (http:/ / www. informatics. indiana. edu/ markus/ papers/ trust_USEC. pdf) (PDF). USEC '06. . [61] Zeltser, Lenny (March 17, 2006). "Phishing Messages May Include Highly-Personalized Information" (http:/ / isc. incidents. org/ diary. php?storyid=1194). The SANS Institute. . [62] Markus Jakobsson and Jacob Ratkiewicz. "Designing Ethical Phishing Experiments" (http:/ / www2006. org/ programme/ item. php?id=3533). WWW '06. . [63] Kawamoto, Dawn (August 4, 2005). "Faced with a rise in so-called pharming and crimeware attacks, the Anti-Phishing Working Group will expand its charter to include these emerging threats." (http:/ / www. zdnetindia. com/ news/ features/ stories/ 126569. html). ZDNet India. . [64] "Social networking site teaches insecure password practices" (http:/ / blog. anta. net/ 2008/ 11/ 09/ social-networking-site-teaches-insecure-password-practices/ ). blog.anta.net. 2008-11-09. ISSN1797-1993. . 
Retrieved 2008-11-09.
[65] Brandt, Andrew. "Privacy Watch: Protect Yourself With an Antiphishing Toolbar" (http:/ / www. pcworld. com/ article/ 125739-1/ article. html). PC World Privacy Watch. . Retrieved September 25, 2006. [66] Jsangm Audun and Pope, Simon. "User Centric Identity Management" (http:/ / www. unik. no/ people/ josang/ papers/ JP2005-AusCERT. pdf) (PDF). Proceedings of AusCERT 2005. . Retrieved 2008. [67] " Phishing - What it is and How it Will Eventually be Dealt With (http:/ / www. arraydev. com/ commerce/ jibc/ 2005-02/ jibc_phishing. HTM)" by Ian Grigg 2005 [68] " Brand matters (IE7, Skype, Vonage, Mozilla) (https:/ / financialcryptography. com/ mt/ archives/ 000645. html)" Ian Grigg [69] Franco, Rob. "Better Website Identification and Extended Validation Certificates in IE7 and Other Browsers" (http:/ / blogs. msdn. com/ ie/ archive/ 2005/ 11/ 21/ 495507. aspx). IEBlog. . Retrieved May 20, 2006. [70] "Bon Echo Anti-Phishing" (http:/ / www. mozilla. org/ projects/ bonecho/ anti-phishing/ ). Mozilla. . Retrieved June 2, 2006. [71] "Safari 3.2 finally gains phishing protection" (http:/ / arstechnica. com/ journals/ apple. ars/ 2008/ 11/ 13/ safari-3-2-finally-gains-phishing-protection). Ars Technica. November 13, 2008. . Retrieved November 15, 2008. [72] "Gone Phishing: Evaluating Anti-Phishing Tools for Windows" (http:/ / www. 3sharp. com/ projects/ antiphish/ index. htm). 3Sharp. September 27, 2006. . Retrieved 2006-10-20. [73] "Two Things That Bother Me About Googles New Firefox Extension" (http:/ / www. oreillynet. com/ onlamp/ blog/ 2005/ 12/ two_things_that_bother_me_abou. html). Nitesh Dhanjani on O'Reilly ONLamp. . Retrieved July 1, 2007. [74] "Firefox 2 Phishing Protection Effectiveness Testing" (http:/ / www. mozilla. org/ security/ phishing-test. html). . Retrieved January 23, 2007. [75] Higgins, Kelly Jackson. "DNS Gets Anti-Phishing Hook" (http:/ / www. darkreading. com/ document. asp?doc_id=99089& WT. svl=news1_1). Dark Reading. . 
Retrieved October 8, 2006. [76] Krebs, Brian (August 31, 2006). "Using Images to Fight Phishing" (http:/ / blog. washingtonpost. com/ securityfix/ 2006/ 08/ using_images_to_fight_phishing. html). Security Fix. . [77] Seltzer, Larry (August 2, 2004). "Spotting Phish and Phighting Back" (http:/ / www. eweek. com/ article2/ 0,1759,1630161,00. asp). eWeek. . [78] Bank of America. "How Bank of America SiteKey Works For Online Banking Security" (http:/ / www. bankofamerica. com/ privacy/ sitekey/ ). . Retrieved January 23, 2007. [79] Brubaker, Bill (July 14, 2005). "Bank of America Personalizes Cyber-Security" (http:/ / www. washingtonpost. com/ wp-dyn/ content/ article/ 2005/ 07/ 13/ AR2005071302181. html). Washington Post. . [80] Stone, Brad (February 5, 2007). "Study Finds Web Antifraud Measure Ineffective" (http:/ / www. nytimes. com/ 2007/ 02/ 05/ technology/ 05secure. html?ex=1328331600& en=295ec5d0994b0755& ei=5090& partner=rssuserland& emc=rss). New York Times. . Retrieved February 5, 2007. [81] Stuart Schechter, Rachna Dhamija, Andy Ozment, Ian Fischer (May 2007). "The Emperor's New Security Indicators: An evaluation of website authentication and the effect of role playing on usability studies" (http:/ / www. deas. harvard. edu/ ~rachna/ papers/ emperor-security-indicators-bank-sitekey-phishing-study. pdf) (PDF). IEEE Symposium on Security and Privacy, May 2007. . Retrieved February 5, 2007. [82] "Phishers target Nordea's one-time password system" (http:/ / www. finextra. com/ fullstory. asp?id=14384). Finextra. October 12, 2005. . [83] Krebs, Brian (July 10, 2006). "Citibank Phish Spoofs 2-Factor Authentication" (http:/ / blog. washingtonpost. com/ securityfix/ 2006/ 07/ citibank_phish_spoofs_2factor_1. html). Security Fix. . [84] Graham Titterington. "More doom on phishing" (http:/ / www. ovum. com/ news/ euronews. asp?id=4166). Ovum Research, April 2006. . [85] Schneier, Bruce. "Security Skins" (http:/ / www. schneier. 
com/ blog/ archives/ 2005/ 07/ security_skins. html). Schneier on Security. . Retrieved December 3, 2006. [86] Rachna Dhamija, J.D. Tygar (July 2005). "The Battle Against Phishing: Dynamic Security Skins" (http:/ / people. deas. harvard. edu/ ~rachna/ papers/ securityskins. pdf) (PDF). Symposium On Usable Privacy and Security (SOUPS) 2005. . Retrieved February 5, 2007. [87] Madhusudhanan Chandrasekaran, Krishnan Narayanan, Shambhu Upadhyaya (March 2006). "Phishing E-mail Detection Based on Structural Properties" (http:/ / www. albany. edu/ iasymposium/ 2006/ chandrasekaran. pdf) (PDF). NYS Cyber Security Symposium. . [88] Ian Fette, Norman Sadeh, Anthony Tomasic (June 2006). "Learning to Detect Phishing Emails" (http:/ / reports-archive. adm. cs. cmu. edu/ anon/ isri2006/ CMU-ISRI-06-112. pdf) (PDF). Carnegie Mellon University Technical Report CMU-ISRI-06-112. . [89] "Anti-Phishing Working Group: Vendor Solutions" (http:/ / www. antiphishing. org/ solutions. html#takedown). Anti-Phishing Working Group. . Retrieved July 6, 2006. [90] McMillan, Robert (March 28, 2006). "New sites let users find and report phishing" (http:/ / www. linuxworld. com. au/ index. php/ id;1075406575;fp;2;fpid;1. ). LinuxWorld. . [91] Schneier, Bruce (2006-10-05). "PhishTank" (http:/ / www. schneier. com/ blog/ archives/ 2006/ 10/ phishtank. html). Schneier on Security. . Retrieved 2007-12-07. [92] "Phone Phishing" (http:/ / phonephishing. info). Phone Phishing. . Retrieved Feb 25, 2009. [93] "Federal Trade Commission" (http:/ / www. ftc. gov/ phonefraud). Federal Trade Commission. . Retrieved Mar 6, 2009. [94] Legon, Jeordan (January 26, 2004). "'Phishing' scams reel in your identity" (http:/ / www. cnn. com/ 2003/ TECH/ internet/ 07/ 21/ phishing. scam/ index. html). CNN. . [95] Leyden, John (March 21, 2005). "Brazilian cops net 'phishing kingpin'" (http:/ / www. channelregister. co. uk/ 2005/ 03/ 21/ brazil_phishing_arrest/ ). The Register. .
[96] Roberts, Paul (June 27, 2005). "UK Phishers Caught, Packed Away" (http:/ / www. eweek. com/ article2/ 0,1895,1831960,00. asp). eWEEK. . [97] "Nineteen Individuals Indicted in Internet 'Carding' Conspiracy" (http:/ / www. cybercrime. gov/ mantovaniIndict. htm). . Retrieved November 20, 2005. [98] "8 held over suspected phishing fraud". The Daily Yomiuri. May 31, 2006. [99] "Phishing gang arrested in USA and Eastern Europe after FBI investigation" (http:/ / www. sophos. com/ pressoffice/ news/ articles/ 2006/ 11/ phishing-arrests. html). . Retrieved December 14, 2006. [100] "Phishers Would Face 5 Years Under New Bill" (http:/ / informationweek. com/ story/ showArticle. jhtml?articleID=60404811). Information Week. March 2, 2005. . [101] "Fraud Act 2006" (http:/ / www. opsi. gov. uk/ ACTS/ en2006/ 2006en35. htm). . Retrieved December 14, 2006. [102] "Prison terms for phishing fraudsters" (http:/ / www. theregister. co. uk/ 2006/ 11/ 14/ fraud_act_outlaws_phishing/ ). The Register. November 14, 2006. . [103] "Microsoft Partners with Australian Law Enforcement Agencies to Combat Cyber Crime" (http:/ / www. microsoft. com/ australia/ presspass/ news/ pressreleases/ cybercrime_31_3_05. aspx). . Retrieved August 24, 2005. [104] Espiner, Tom (March 20, 2006). "Microsoft launches legal assault on phishers" (http:/ / news. zdnet. co. uk/ 0,39020330,39258528,00. htm). ZDNet. . [105] Leyden, John (November 23, 2006). "MS reels in a few stray phish" (http:/ / www. theregister. co. uk/ 2006/ 11/ 23/ ms_anti-phishing_campaign_update/ ). The Register. . [106] "A History of Leadership - 2006" (http:/ / corp. aol. com/ whoweare/ history/ 2006. shtml). . [107] "AOL Takes Fight Against Identity Theft To Court, Files Lawsuits Against Three Major Phishing Gangs" (http:/ / media. aoltimewarner. com/ media/ newmedia/ cb_press_view. cfm?release_num=55254535). . Retrieved March 8, 2006. [108] "HB 2471 Computer Crimes Act; changes in provisions, penalty." (http:/ / leg1. state. va. 
us/ cgi-bin/ legp504. exe?051+ sum+ HB2471). . Retrieved March 8, 2006. [109] Brulliard, Karin (April 10, 2005). "Va. Lawmakers Aim to Hook Cyberscammers" (http:/ / www. washingtonpost. com/ wp-dyn/ articles/ A40578-2005Apr9. html). Washington Post. . [110] "Earthlink evidence helps slam the door on phisher site spam ring" (http:/ / www. earthlink. net/ about/ press/ pr_phishersite/ ). . Retrieved December 14, 2006. [111] Prince, Brian (January 18, 2007). "Man Found Guilty of Targeting AOL Customers in Phishing Scam" (http:/ / www. pcmag. com/ article2/ 0,1895,2085183,00. asp). PCMag.com. . [112] Leyden, John (January 17, 2007). "AOL phishing fraudster found guilty" (http:/ / www. theregister. co. uk/ 2007/ 01/ 17/ aol_phishing_fraudster/ ). The Register. . [113] Leyden, John (June 13, 2007). "AOL phisher nets six years' imprisonment" (http:/ / www. theregister. co. uk/ 2007/ 06/ 13/ aol_fraudster_jailed/ ). The Register. . [114] Gaudin, Sharon (June 12, 2007). "California Man Gets 6-Year Sentence For Phishing" (http:/ / www. informationweek. com/ story/ showArticle. jhtml?articleID=199903450). InformationWeek. . [115] http:/ / www. antiphishing. org [116] http:/ / www. utica. edu/ academic/ institutes/ cimip/ [117] http:/ / ha. ckers. org/ blog/ 20060609/ how-phishing-actually-works/ [118] http:/ / www. law. duke. edu/ journals/ dltr/ articles/ 2005dltr0006. html [119] http:/ / www. honeynet. org/ papers/ phishing/ [120] http:/ / www. securityfocus. com/ infocus/ 1745 [121] http:/ / www. technicalinfo. net/ papers/ Phishing. html [122] http:/ / research. microsoft. com/ en-us/ um/ people/ cormac/ Papers/ PhishingAsTragedy. pdf [123] http:/ / www. phishtank. com/ [124] http:/ / www. cl. cam. ac. uk/ %7Ernc1/ takedown. pdf [125] http:/ / www. internetnews. com/ security/ article. php/ 3882136/ One+ Gang+ Responsible+ For+ Most+ Phishing+ Attacks. htm
6.0 Information
Data security
Data security is the means of ensuring that data is kept safe from corruption and that access to it is suitably controlled. Data security thus helps to ensure privacy, and also helps to protect personal data.
Backups
Backups are used to ensure that data which has been lost can be recovered.
Data Masking
Data masking of structured data is the process of obscuring (masking) specific data within a database table or cell to ensure that data security is maintained and sensitive information is not exposed to unauthorized personnel. This may include masking the data from users (for example, so that banking customer representatives can only see the last four digits of a customer's national identity number), developers (who need real production data to test new software releases but should not be able to see sensitive financial data), outsourcing vendors, etc.
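The last-four-digits example above can be sketched as a small masking helper. The function name and format are illustrative; commercial data-masking tools work at the database or application layer and preserve referential integrity across tables:

```python
def mask_identity_number(value: str, visible: int = 4, mask_char: str = "*") -> str:
    """Replace every digit except the last `visible` ones with `mask_char`,
    leaving separators such as dashes in place."""
    digits = [c for c in value if c.isdigit()]
    to_mask = max(len(digits) - visible, 0)
    out = []
    for c in value:
        if c.isdigit() and to_mask > 0:
            out.append(mask_char)
            to_mask -= 1
        else:
            out.append(c)
    return "".join(out)

print(mask_identity_number("123-45-6789"))  # ***-**-6789
```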
Data Erasure
Data erasure is a method of software-based overwriting that completely destroys all electronic data residing on a hard drive or other digital media to ensure that no sensitive data is leaked when an asset is retired or reused.
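A minimal single-pass version of the idea, as a sketch: overwrite a file's contents with random bytes and then delete it. This is illustrative only; real data-erasure products overwrite the whole device (including remapped sectors) and verify the result, and on SSDs or journaling filesystems a file-level overwrite like this one may not actually destroy the underlying data:

```python
import os

def overwrite_and_delete(path: str, chunk: int = 64 * 1024) -> None:
    """Overwrite `path` once with random bytes, flush to disk, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        written = 0
        while written < size:
            n = min(chunk, size - written)
            f.write(os.urandom(n))
            written += n
        f.flush()
        os.fsync(f.fileno())  # push the overwritten bytes to the device
    os.remove(path)
```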
International Standards
The International Standard ISO/IEC 17799 covers data security under the topic of information security, and one of its cardinal principles is that all stored information, i.e. data, should be owned so that it is clear whose responsibility it is to protect and control access to that data. The Trusted Computing Group is an organization that helps standardize computing security technologies.
See also
Copy Protection
Data masking
Data erasure
Data recovery
Digital inheritance
Disk encryption
Comparison of disk encryption software
Hardware Based Security for Computers, Data and Information
Pre-boot authentication
Secure USB drive
Security Breach Notification Laws
Single sign-on
Smartcard
Information security
Information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification or destruction.[1] The terms information security, computer security and information assurance are frequently used interchangeably, although incorrectly. These fields are often interrelated and share the common goals of protecting the confidentiality, integrity and availability of information; however, there are some subtle differences between them. These differences lie primarily in the approach to the subject, the methodologies used, and the areas of concentration. Information security is concerned with the confidentiality, integrity and availability of data regardless of the form the data may take: electronic, print, or other forms. Computer security can focus on ensuring the availability and correct operation of a computer system without concern for the information stored or processed by the computer.
Information security components, or qualities: confidentiality, integrity and availability (CIA). Information systems are decomposed into three main portions (hardware, software and communications) in order to identify and apply information security industry standards, as mechanisms of protection and prevention, at three levels or layers: physical, personal and organizational. Essentially, procedures or policies are implemented to tell people (administrators, users and operators) how to use products to ensure information security within organizations.
Governments, military, corporations, financial institutions, hospitals, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status. Most of this information is now collected, processed and stored on electronic computers and transmitted across networks to other computers. Should confidential information about a business' customers, finances or new product line fall into the hands of a competitor, such a breach of security could lead to lost business, lawsuits or even bankruptcy of the business. Protecting confidential information is a business requirement, and in many cases also an ethical and legal requirement. For the individual, information security has a significant effect on privacy, which is viewed very differently in different cultures. The field of information security has grown and evolved significantly in recent years. There are many ways of gaining entry into the field as a career. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning and digital forensics science, to name a few, which are carried out by information security consultants. This article presents a general overview of information security and its core concepts.
Information security
History
Since the early days of writing, heads of state and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of written correspondence and to have some means of detecting tampering. Julius Caesar is credited with the invention of the Caesar cipher c. 50 BC, which was created in order to prevent his secret messages from being read should a message fall into the wrong hands. World War II brought about many advancements in information security and marked the beginning of the professional field of information security. The end of the 20th century and early years of the 21st century saw rapid advancements in telecommunications, computing hardware and software, and data encryption. The availability of smaller, more powerful and less expensive computing equipment put electronic data processing within the reach of small businesses and home users. These computers quickly became interconnected through a network generically called the Internet or World Wide Web. The rapid growth and widespread use of electronic data processing and electronic business conducted through the Internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process and transmit. The academic disciplines of computer security, information security and information assurance emerged along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems.
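The Caesar cipher mentioned above can be sketched in a few lines of Python. This is purely illustrative; a fixed-shift substitution cipher offers no real security today, since all 25 possible shifts can be tried by hand.

```python
def caesar(text, shift):
    """Shift each letter by `shift` positions, wrapping within the alphabet."""
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return ''.join(result)

ciphertext = caesar("ATTACK AT DAWN", 3)   # -> "DWWDFN DW GDZQ"
plaintext  = caesar(ciphertext, -3)        # decryption is the inverse shift
```

Decryption is simply encryption with the negated shift, which is why anyone who knows the scheme but not the shift can still recover the message by exhaustion.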
Basic principles
Key concepts
For over twenty years, information security has held confidentiality, integrity and availability (known as the CIA triad) to be its core principles. There is continuous debate about extending this classic trio. Other principles such as accountability have sometimes been proposed for addition; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts, and as regulation of computer systems has increased (particularly amongst the Western nations), legality is becoming a key consideration for practical security installations. In 2002, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility. The merits of the Parkerian hexad are a subject of debate amongst security professionals.
Confidentiality
Confidentiality refers to preventing the disclosure of information to unauthorized individuals or systems. For example, a credit card transaction on the Internet requires the credit card number to be transmitted from the buyer to the merchant and from the merchant to a transaction processing network. The system attempts to enforce confidentiality by encrypting the card number during transmission, by limiting the places where it might appear (in databases, log files, backups, printed receipts, and so on), and by restricting access to the places where it is stored. If an unauthorized party obtains the card number in any way, a breach of confidentiality has occurred. Breaches of confidentiality take many forms. Permitting someone to look over your shoulder at your computer screen while you have confidential data displayed on it could be a breach of confidentiality. If a laptop computer containing sensitive information about a company's employees is stolen or sold, it could result in a breach of confidentiality.
Giving out confidential information over the telephone is a breach of confidentiality if the caller is not authorized to have the information.
Confidentiality is necessary (but not sufficient) for maintaining the privacy of the people whose personal information a system holds.
Integrity
In information security, integrity means that data cannot be modified without authorization. This is not the same thing as referential integrity in databases. Integrity is violated when an employee accidentally or with malicious intent deletes important data files, when a computer virus infects a computer, when an employee is able to modify his own salary in a payroll database, when an unauthorized user vandalizes a web site, when someone is able to cast a very large number of votes in an online poll, and so on. There are many ways in which integrity could be violated without malicious intent. In the simplest case, a user on a system could mis-type someone's address. On a larger scale, if an automated process is not written and tested correctly, bulk updates to a database could alter data in an incorrect way, leaving the integrity of the data compromised. Information security professionals are tasked with finding ways to implement controls that prevent errors of integrity.
Availability
For any information system to serve its purpose, the information must be available when it is needed. This means that the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks.
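A common mechanical safeguard for the integrity described above is a cryptographic message digest: record a hash of the data when it is known-good, and recompute it later to detect any modification. A minimal sketch using Python's standard library:

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of the data."""
    return hashlib.sha256(data).hexdigest()

original = b"pay $100 to Alice"
stored_digest = digest(original)   # recorded when the data is known-good

# Later, verify the data has not been modified:
tampered = b"pay $900 to Alice"
assert digest(original) == stored_digest   # intact data matches
assert digest(tampered) != stored_digest   # any change is detected
```

A plain digest detects accidental and malicious changes to the data, but an attacker who can also replace the stored digest defeats it; keyed constructions (HMACs) or digital signatures close that gap.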
Authenticity
In computing, e-business and information security, it is necessary to ensure that the data, transactions, communications or documents (electronic or physical) are genuine. It is also important for authenticity to validate that both parties involved are who they claim to be.
Non-repudiation
In law, non-repudiation implies one's intention to fulfill one's obligations under a contract. It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction. Electronic commerce uses technology such as digital signatures and encryption to establish authenticity and non-repudiation.
Risk management
A comprehensive treatment of the topic of risk management is beyond the scope of this article. However, a useful definition of risk management will be provided, as well as some basic terminology and a commonly used process for risk management. The CISA Review Manual 2006 provides the following definition of risk management: "Risk management is the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."[2] There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing iterative process. It must be repeated indefinitely. The business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected.
Risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or an act of nature) that has the potential to cause harm. The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property). It should be pointed out that it is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called residual risk. A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, the analysis may use quantitative analysis. The ISO/IEC 27002:2005 Code of practice for information security management recommends the following be examined during a risk assessment: security policy, organization of information security, asset management, human resources security, physical and environmental security, communications and operations management, access control, information systems acquisition, development and maintenance, information security incident management, business continuity management, and regulatory compliance.
In broad terms, the risk management process consists of:
1. Identification of assets and estimation of their value. Include: people, buildings, hardware, software, data (electronic, print, other), supplies.
2. Conduct a threat assessment. Include: acts of nature, acts of war, accidents, malicious acts originating from inside or outside the organization.
3. Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, physical security, quality control, technical security.
4. Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis.
5. Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness, and value of the asset.
6. Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost-effective protection without discernible loss of productivity.
For any given risk, executive management can choose to accept the risk based upon the relatively low value of the asset, the relatively low frequency of occurrence, and the relatively low impact on the business. Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business. The reality of some risks may be disputed. In such cases leadership may choose to deny the risk. This is itself a potential risk.
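The quantitative analysis mentioned in the process above is commonly based on the annualized loss expectancy (ALE) formula: single loss expectancy (asset value times exposure factor) multiplied by the annual rate of occurrence. The sketch below uses hypothetical figures, not values from any real assessment:

```python
def annualized_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    # Single Loss Expectancy: fraction of the asset's value lost in one incident
    sle = asset_value * exposure_factor
    # ALE: expected yearly loss, used to judge whether a control is cost-effective
    return sle * annual_rate_of_occurrence

# Hypothetical figures: a $100,000 server, 25% of its value lost per incident,
# with an incident expected once every two years (ARO = 0.5)
ale = annualized_loss_expectancy(100_000, 0.25, 0.5)   # -> 12500.0
```

On these numbers, a control costing less than $12,500 per year that eliminates the risk would be a proportional response; a more expensive control would cost more than the expected loss it prevents.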
Controls
When management chooses to mitigate a risk, it will do so by implementing one or more of three different types of controls.
Administrative
Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards and guidelines. Administrative controls form the framework for running the business and managing people. They inform people on how the business is to be run and how day-to-day operations are to be conducted. Laws and regulations created by government bodies are also a type of administrative control because they inform the business. Some industry sectors have policies, procedures, standards and guidelines that must be followed; the Payment Card Industry (PCI) Data Security Standard required by Visa and MasterCard is such an example. Other examples of administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary policies. Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical and physical controls are manifestations of administrative controls. Administrative controls are of paramount importance.
Logical
Logical controls (also called technical controls) use software and data to monitor and control access to information and computing systems. For example: passwords, network and host-based firewalls, network intrusion detection systems, access control lists, and data encryption are logical controls. An important logical control that is frequently overlooked is the principle of least privilege. The principle of least privilege requires that an individual, program or system process is not granted any more access privileges than are necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging into Windows as user Administrator to read email and surf the Web.
Violations of this principle can also occur when an individual collects additional access privileges over time. This happens when employees' job duties change, they are promoted to a new position, or they transfer to another department. The access privileges required by their new duties are frequently added onto their already existing access privileges, which may no longer be necessary or appropriate.
Physical
Physical controls monitor and control the environment of the work place and computing facilities. They also monitor and control access to and from such facilities. For example: doors, locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, cable locks, etc. Separating the network and work place into functional areas is also a physical control. An important physical control that is frequently overlooked is the separation of duties. Separation of duties ensures that an individual cannot complete a critical task by himself. For example: an employee who submits a request for reimbursement should not also be able to authorize payment or print the check. An applications programmer should not also be the server administrator or the database administrator; these roles and responsibilities must be separated from one another.[3]
Access control
Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected: the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication. Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe," they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information, it will be necessary to verify that the person claiming to be John Doe really is John Doe. Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe (a claim of identity). The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. There are three different types of information that can be used for authentication: something you know, something you have, or something you are. Examples of something you know include such things as a PIN, a password, or your mother's maiden name. Examples of something you have include a driver's license or a magnetic swipe card. Something you are refers to biometrics.
Examples of biometrics include palm prints, fingerprints, voice prints and retina (eye) scans. Strong authentication requires providing information from two of the three different types of authentication information. For example, something you know plus something you have. This is called two-factor authentication. On computer systems in use today, the username is the most common form of identification and the password is the most common form of authentication. Usernames and passwords have served their purpose, but in our modern world they are no longer adequate. Usernames and passwords are slowly being replaced with more sophisticated authentication mechanisms. After a person, program or computer has successfully been identified and authenticated, it must then be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies. Different computing systems are equipped with different kinds of access control mechanisms; some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three approaches. The non-discretionary approach consolidates all access control under a centralized administration. Access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform. The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource.
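Password authentication, the most common mechanism noted above, is typically implemented by storing a salted, deliberately slow hash rather than the password itself, so that a stolen credential database does not directly reveal passwords. A minimal sketch using Python's standard library (the passwords are illustrative):

```python
import hashlib, hmac, os

def hash_password(password: str, salt=None):
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    if salt is None:
        salt = os.urandom(16)  # a unique salt per user defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

The high iteration count makes each guess expensive for an attacker who obtains the stored hashes, and the constant-time comparison avoids leaking information through timing differences.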
Examples of common access control mechanisms in use today include Role-based access control available in many advanced Database Management Systems, simple file permissions provided in the UNIX and Windows operating systems, Group Policy Objects provided in Windows network systems, Kerberos, RADIUS, TACACS, and the simple access lists used in many firewalls and routers. To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions. All failed and successful authentication attempts must be logged, and all access to information must leave some type of audit trail.
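The role-based (non-discretionary) approach described above can be sketched as a lookup from roles to permitted actions on each resource; the users, roles and resources below are hypothetical:

```python
# Role -> permitted actions per resource. Access follows the user's role
# in the organization, not individually granted privileges.
ROLE_PERMISSIONS = {
    "clerk":   {"payroll": {"view"}},
    "manager": {"payroll": {"view", "change"}},
}
USER_ROLES = {"alice": "manager", "bob": "clerk"}

def is_authorized(user: str, resource: str, action: str) -> bool:
    role = USER_ROLES.get(user)
    allowed = ROLE_PERMISSIONS.get(role, {}).get(resource, set())
    return action in allowed

assert is_authorized("alice", "payroll", "change")
assert not is_authorized("bob", "payroll", "change")    # least privilege: clerks only view
assert not is_authorized("mallory", "payroll", "view")  # unknown users get nothing
```

Defaulting to an empty permission set means any user, role or resource missing from the tables is denied, which is the fail-safe behavior an access control mechanism should have.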
Cryptography
Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user, who possesses the cryptographic key, through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage. Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as telnet and ftp are slowly being replaced with more secure applications such as ssh that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU-T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and email. Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to be implemented using industry-accepted solutions that have undergone rigorous peer review by independent experts in cryptography. The length and strength of the encryption key is also an important consideration. A key that is weak or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the
same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction, and they must be available when needed. PKI solutions address many of the problems that surround key management.
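The encryption/decryption relationship described above can be illustrated with a deliberately insecure toy cipher: XOR against a repeating key. This is only a sketch of the mechanics; real systems must use peer-reviewed algorithms such as AES, exactly as the paragraph above warns.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Applying it twice with the same key recovers the original.
    NOT secure; for illustration of the encrypt/decrypt symmetry only."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"not-a-real-key"
message = b"a confidential message"
ciphertext = xor_cipher(message, key)
assert ciphertext != message                  # unreadable without the key
assert xor_cipher(ciphertext, key) == message # the key holder can decrypt
```

The weakness of this toy also motivates the key-length warning above: a short repeating key leaks patterns that make the "encryption" trivially breakable by frequency analysis.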
Defense in depth
Information security must protect information throughout its life span, from its initial creation through to its final disposal. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems and through many different parts of information processing systems. There are many different ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own protection mechanisms. The building up, layering on and overlapping of security measures is called defense in depth. The strength of any system is no greater than its weakest link. Using a defense-in-depth strategy, should one defensive measure fail, there are other defensive measures in place that continue to provide protection. Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of controls can be used to form the basis upon which to build a defense-in-depth strategy. With this approach, defense in depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people as the outer layer of the onion, and network security, host-based security and application security forming the inner layers of the onion. Both perspectives are equally valid and each provides valuable insight into the implementation of a good defense-in-depth strategy.
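The layering idea above can be sketched as a series of independent checks, where a request is admitted only if every layer passes; the layers below are simplified placeholders for a firewall, an intrusion detection system, and application-level authentication:

```python
def firewall(packet):        return packet.get("port") == 443
def ids(packet):             return "exploit" not in packet.get("payload", "")
def app_auth(packet):        return packet.get("authenticated", False)

LAYERS = [firewall, ids, app_auth]   # network, host, and application layers

def admit(packet) -> bool:
    # Defense in depth: every layer must pass; if one control is bypassed
    # or fails, the remaining layers still block the request.
    return all(layer(packet) for layer in LAYERS)

assert admit({"port": 443, "payload": "GET /", "authenticated": True})
assert not admit({"port": 443, "payload": "exploit", "authenticated": True})
assert not admit({"port": 443, "payload": "GET /", "authenticated": False})
```

Because the checks are conjunctive, defeating the system requires defeating every layer at once, which is the point of overlapping measures rather than a single perimeter.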
Process
The terms reasonable and prudent person, due care and due diligence have been used in the fields of finance, securities, and law for many years. In recent years these terms have found their way into the fields of computing and information security. U.S. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems. In the business world, stockholders, customers, business partners and governments expect corporate officers to run the business in accordance with accepted business practices and in compliance with laws and other regulatory requirements. This is often described as the "reasonable and prudent person" rule. A prudent person takes due care to ensure that everything necessary is done to operate the business by sound business principles and in a legal, ethical manner. A prudent person is also diligent (mindful, attentive, and ongoing) in their due care of the business. In the field of information security, Harris[4] offers the following definitions of due care and due diligence: "Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees." And, [Due diligence are the] "continual activities that make sure the protection mechanisms are continually maintained and operational." Attention should be paid to two important points in these definitions. First, in due care, steps are taken to "show"; this means that the steps can be verified, measured, or even produce tangible artifacts. Second, in due diligence, there
are continual activities; this means that people are actually doing things to monitor and maintain the protection mechanisms, and these activities are ongoing.
Security governance
The Software Engineering Institute at Carnegie Mellon University, in a publication titled "Governing for Enterprise Security (GES)", defines characteristics of effective security governance. These include:
An enterprise-wide issue
Leaders are accountable
Viewed as a business requirement
Risk-based
Roles, responsibilities, and segregation of duties defined
Addressed and enforced in policy
Adequate resources committed
Staff aware and trained
A development life cycle requirement
Planned, managed, measurable, and measured
Reviewed and audited
Change management
Change management is a formal process for directing and controlling alterations to the information processing environment. This includes alterations to desktop computers, the network, servers and software. The objectives of change management are to reduce the risks posed by changes to the information processing environment and to improve the stability and reliability of the processing environment as changes are made. It is not the objective of change management to prevent or hinder necessary changes from being implemented. Any change to the information processing environment introduces an element of risk. Even apparently simple changes can have unexpected effects. One of management's many responsibilities is the management of risk. Change management is a tool for managing the risks introduced by changes to the information processing environment. Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented. Not every change needs to be managed. Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment. Creating a new user account or deploying a new desktop computer are examples of changes that do not
generally require change management. However, relocating user file shares or upgrading the email server pose a much higher level of risk to the processing environment and are not a normal everyday activity. The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system. Change management is usually overseen by a change review board composed of representatives from key business areas, security, networking, systems administration, database administration, applications development, desktop support and the help desk. The tasks of the change review board can be facilitated with the use of an automated workflow application. The responsibility of the change review board is to ensure that the organization's documented change management procedures are followed. The change management process is as follows:
Requested: Anyone can request a change. The person making the change request may or may not be the same person that performs the analysis or implements the change. When a request for change is received, it may undergo a preliminary review to determine if the requested change is compatible with the organization's business model and practices, and to determine the amount of resources needed to implement the change.
Approved: Management runs the business and controls the allocation of resources; therefore, management must approve requests for changes and assign a priority for every change. Management might choose to reject a change request if the change is not compatible with the business model, industry standards or best practices. Management might also choose to reject a change request if the change requires more resources than can be allocated for the change.
Planned: Planning a change involves discovering the scope and impact of the proposed change; analyzing the complexity of the change; allocating resources; and developing, testing and documenting both implementation and backout plans. The criteria on which a decision to back out will be made also need to be defined.
Tested: Every change must be tested in a safe test environment, which closely reflects the actual production environment, before the change is applied to the production environment. The backout plan must also be tested.
Scheduled: Part of the change review board's responsibility is to assist in the scheduling of changes by reviewing the proposed implementation date for potential conflicts with other scheduled changes or critical business activities.
Communicated: Once a change has been scheduled it must be communicated. The communication gives others the opportunity to remind the change review board about other changes or critical business activities that might have been overlooked when scheduling the change. The communication also serves to make the help desk and users aware that a change is about to occur. Another responsibility of the change review board is to ensure that scheduled changes have been properly communicated to those who will be affected by the change or otherwise have an interest in the change.
Implemented: At the appointed date and time, the changes must be implemented. Part of the planning process was to develop an implementation plan, testing plan and backout plan. If the implementation of the change fails, or the post-implementation testing fails, or other "drop dead" criteria have been met, the backout plan should be implemented.
Documented: All changes must be documented.
The documentation includes the initial request for change, its approval, the priority assigned to it, the implementation, testing and backout plans, the results of the change review board critique, the date/time the change was implemented, who implemented it, and whether the change was implemented successfully, failed or was postponed.
Post-change review: The change review board should hold a post-implementation review of changes. It is particularly important to review failed and backed-out changes. The review board should try to understand the problems that were encountered, and look for areas for improvement.
Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment. Good change management procedures improve
the overall quality and success of changes as they are implemented. This is accomplished through planning, peer review, documentation and communication. ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps [5] (Full book summary [6]), and the Information Technology Infrastructure Library all provide valuable guidance on implementing an efficient and effective change management program.
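The change workflow described above can be sketched as a small state machine that only permits the documented transitions. This is an illustrative simplification; real processes add rejection, rescheduling and backout paths at several stages:

```python
# Allowed transitions in a change workflow (a sketch, not a complete model)
TRANSITIONS = {
    "requested":    {"approved", "rejected"},
    "approved":     {"planned"},
    "planned":      {"tested"},
    "tested":       {"scheduled"},
    "scheduled":    {"communicated"},
    "communicated": {"implemented"},
    "implemented":  {"documented", "backed_out"},
    "documented":   {"reviewed"},
}

def advance(state: str, new_state: str) -> str:
    """Move a change to a new state, enforcing the documented procedure."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot move from {state} to {new_state}")
    return new_state

state = "requested"
for step in ["approved", "planned", "tested", "scheduled",
             "communicated", "implemented", "documented", "reviewed"]:
    state = advance(state, step)   # the happy path ends in "reviewed"
```

Encoding the procedure as data makes violations explicit: attempting to implement a change that was never tested or scheduled raises an error rather than silently bypassing the review board.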
Business continuity
Business continuity is the mechanism by which an organization continues to operate its critical business units, during planned or unplanned disruptions that affect normal business operations, by invoking planned and managed procedures. Contrary to what many people think, business continuity is not necessarily an IT system or process; it is about the business as a whole. Today, disasters and disruptions to business are a reality. Whether the disaster is natural or man-made (TIME magazine has a website on the top 10), it affects normal life and therefore business. So why is planning so important? Let us face the reality that all businesses recover, whether they planned for recovery or not, simply because business is about earning money for survival. Planning is merely getting better prepared to face disruption, knowing full well that the best plans may fail. Planning helps reduce the cost of recovery and operational overheads and, most importantly, lets the business sail through smaller disruptions effortlessly. To create effective plans, businesses need to focus on the following key questions. Most of these are common knowledge, and anyone can do a BCP.
1. Should a disaster strike, what are the first few things I should do? Should I call people to find out if they are OK, or call the bank to make sure my money is safe? This is Emergency Response. Emergency Response services help take the first hit when the disaster strikes and, if the disaster is serious enough, the Emergency Response teams need to quickly get a Crisis Management team in place.
2. What parts of my business should I recover first? The one that brings in the most money, the one where I spend the most, or the one that will ensure sustained future growth? The identified sections are the critical business units. There is no magic bullet here, and no one answer satisfies all. Businesses need to find answers that meet their business requirements.
3. How soon should I target to recover my critical business units?
In BCP technical jargon this is called Recovery Time Objective, or RTO. This objective will define what costs the business will need to spend to recover from a disruption. For example, it is cheaper to recover a business in 1 day than in 1 hour. 4. What all do I need to recover the business? IT, machinery, records...food, water, people...So many aspects to dwell upon. The cost factor becomes clearer now...Business leaders need to drive business continuity. Hold on. My IT manager spent $200000 last month and created a DRP (Disaster Recovery Plan), whatever happened to that? a DRP is about continuing an IT system, and is one of the sections of a comprehensive Business Continuity Plan. Look below for more on this. 5. And where do I recover my business from... Will the business center give me space to work, or would it be flooded by many people queuing up for the same reasons that I am. 6. But once I do recover from the disaster and work in reduced production capacity, since my main operational sites are unavailable, how long can this go on. How long can I do without my original sites, systems, people? this defines the amount of business resilience a business may have. 7. Now that I know how to recover my business. How do I make sure my plan works? Most BCP pundits would recommend testing the plan at least once a year, reviewing it for adequacy and rewriting or updating the plans either annually or when businesses change.
State Security Breach Notification Laws (California and many others) require businesses, nonprofits, and state institutions to notify consumers when unencrypted "personal information" may have been compromised, lost, or stolen. The Personal Information Protection and Electronic Documents Act (PIPEDA) is an Act to support and promote electronic commerce by protecting personal information that is collected, used or disclosed in certain circumstances, by providing for the use of electronic means to communicate or record information or transactions, and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act.
Sources of standards
The International Organization for Standardization (ISO) is a consortium of national standards institutes from 157 countries, with a Central Secretariat in Geneva, Switzerland, that coordinates the system. The ISO is the world's largest developer of standards. ISO 15443 ("Information technology - Security techniques - A framework for IT security assurance"), ISO 27002 (previously ISO 17799; "Information technology - Security techniques - Code of practice for information security management"), ISO 20000 ("Information technology - Service management"), and ISO 27001 ("Information technology - Security techniques - Information security management systems") are of particular interest to information security professionals. The USA National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S. Department of Commerce. The NIST Computer Security Division develops standards, metrics, tests and validation programs, and publishes standards and guidelines to increase secure IT planning, implementation, management and operation. NIST is also the custodian of the USA Federal Information Processing Standards publications (FIPS). The Internet Society is a professional membership society with more than 100 organizational and over 20,000 individual members in over 180 countries. It provides leadership in addressing issues that confront the future of the Internet, and it is the organizational home for the groups responsible for Internet infrastructure standards, including the Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB). The ISOC hosts the Requests for Comments (RFCs), which include the Official Internet Protocol Standards and RFC 2196, the Site Security Handbook. The Information Security Forum is a global nonprofit organization of several hundred leading organizations in financial services, manufacturing, telecommunications, consumer goods, government, and other areas.
It provides research into best practice, summarized in its biannual Standard of Good Practice, which incorporates detailed specifications across many areas. The IT Baseline Protection Catalogs, or IT-Grundschutz Catalogs ("IT Baseline Protection Manual" before 2005), are a collection of documents from the German Federal Office for Information Security (BSI), useful for detecting and combating security-relevant weak points in an IT environment (IT cluster). The collection encompasses over 3000 pages, including the introduction and catalogs.
Professionalism
In 1989, Carnegie Mellon University established the Information Networking Institute, the United States' first research and education center devoted to information networking. The academic disciplines of computer security, information security and information assurance emerged, along with numerous professional organizations, during the later years of the 20th century and the early years of the 21st century. Entry into the field can be accomplished through self-study, college or university schooling in the field, or week-long focused training camps. Many colleges, universities and training companies offer many of their programs online. The GIAC-GSEC and Security+ certifications are both entry-level security certifications. Membership of the Institute of Information Security Professionals (IISP) is gaining traction in the U.K. as the professional standard for information security professionals.
The Certified Information Systems Security Professional (CISSP) is a mid- to senior-level information security certification. The Information Systems Security Architecture Professional (ISSAP), Information Systems Security Engineering Professional (ISSEP), Information Systems Security Management Professional (ISSMP), and Certified Information Security Manager (CISM) certifications are well-respected advanced certifications in information security architecture, engineering, and management. Within the UK, a recognised senior-level information security certification is provided by CESG. CLAS is the CESG Listed Adviser Scheme: a partnership linking the unique Information Assurance knowledge of CESG with the expertise and resources of the private sector. CESG recognises that there is an increasing demand for authoritative Information Assurance advice and guidance. This demand has come as a result of an increasing awareness of the threats and vulnerabilities that information systems are likely to face in an ever-changing world. The Scheme aims to satisfy this demand by creating a pool of high-quality consultants approved by CESG to provide Information Assurance advice to government departments and other organisations that provide vital services for the United Kingdom. CLAS consultants are approved to provide Information Assurance advice on systems processing protectively marked information up to, and including, SECRET. Potential customers of the CLAS Scheme should also note that if the information is not protectively marked then they do not need to specify membership of CLAS in their invitations to tender, and they may be challenged if equally competent non-scheme members are prevented from bidding. The profession of information security has seen an increased demand for security professionals who are experienced in network security auditing, penetration testing, and digital forensics investigation.
In addition, many smaller companies have cropped up as the result of this increased demand in information security training and consulting.
Conclusion
Information security is the ongoing process of exercising due care and due diligence to protect information, and information systems, from unauthorized access, use, disclosure, destruction, modification, disruption, or distribution. The never-ending process of information security involves ongoing training, assessment, protection, monitoring and detection, incident response and repair, documentation, and review. This makes information security consultants an indispensable part of business operations across many domains.
See also
Computer insecurity
Enterprise information security architecture
Data erasure
Data loss prevention products
Disk encryption
Information assurance
Information security audit
Information Security Forum
Information security governance
Information security management
Information security management system
Information security policies
Information security standards
Information technology security audit
ISO/IEC 27001
ITIL security management
Network Security Services
Physical information security
Privacy enhancing technologies
Parkerian Hexad
Security breach notification laws
Security information management
Security of Information Act
Security level management
Security bug
Single sign-on
Standard of Good Practice
Verification and validation
Further reading
Anderson, K., "IT Security Professionals Must Evolve for Changing Market [9]", SC Magazine, October 12, 2006.
Aceituno, V., "On Information Security Paradigms [10]", ISSA Journal, September 2005.
Dhillon, G., Principles of Information Systems Security: Text and Cases, John Wiley & Sons, 2007.
Lambo, T., "ISO/IEC 27001: The future of infosec certification [11]", ISSA Journal, November 2006.
p. 85. ISBN 1-933284-15-3.
[3] "Segregation of Duties Control matrix" (http://www.isaca.org/AMTemplate.cfm?Section=CISA1&Template=/ContentManagement/ContentDisplay.cfm&ContentID=40835). ISACA. 2008. Retrieved 2008-09-30.
[4] Harris, Shon (2003). All-in-one CISSP Certification Exam Guide (2nd ed.). Emeryville, CA: McGraw-Hill/Osborne. ISBN 0-07-222966-7.
[5] http://www.itpi.org/home/visibleops2.php
[6] http://wikisummaries.org/Visible_Ops
[7] Harris, Shon (2008). All-in-one CISSP Certification Exam Guide (4th ed.). New York, NY: McGraw-Hill. ISBN 978-0-07-149786-2.
[8] http://www.law.cornell.edu/uscode/20/1232.html
[9] http://www.scmagazineus.com/IT-security-professionals-must-evolve-for-changing-market/article/33990/
[10] http://www.issa.org/Library/Journals/2005/September/Aceituno%20Canal%20-%20On%20Information%20Security%20Paradigms.pdf
[11] https://www.issa.org/Library/Journals/2006/November/Lambo-ISO-IEC%2027001-The%20future%20of%20infosec%20certification.pdf
External links
InfoSecNews.us (http://www.infosecnews.us/) Information Security News
DoD IA Policy Chart (http://iac.dtic.mil/iatac/ia_policychart.html) on the DoD Information Assurance Technology Analysis Center web site
patterns & practices Security Engineering Explained (http://msdn2.microsoft.com/en-us/library/ms998382.aspx)
Open Security Architecture: controls and patterns to secure IT systems (http://www.opensecurityarchitecture.org)
Introduction to Security Governance (http://www.logicalsecurity.com/resources/resources_articles.html)
COE Security: Information Security Articles (http://www.coesecurity.com/services/resources.asp)
An Introduction to Information Security (http://security.practitioner.com/introduction/)
Example Security Policy (http://www.davidstclair.co.uk/example-security-templates/example-internet-e-mail-usage-policy-2.html)
IWS Information Security Chapter (http://www.iwar.org.uk/comsec/)
Bibliography
Allen, Julia H. (2001). The CERT Guide to System and Network Security Practices. Boston, MA: Addison-Wesley. ISBN 0-201-73723-X.
Krutz, Ronald L.; Russell Dean Vines (2003). The CISSP Prep Guide (Gold ed.). Indianapolis, IN: Wiley. ISBN 0-471-26802-X.
Layton, Timothy P. (2007). Information Security: Design, Implementation, Measurement, and Compliance. Boca Raton, FL: Auerbach Publications. ISBN 978-0-8493-7087-8.
McNab, Chris (2004). Network Security Assessment. Sebastopol, CA: O'Reilly. ISBN 0-596-00611-X.
Peltier, Thomas R. (2001). Information Security Risk Analysis. Boca Raton, FL: Auerbach Publications. ISBN 0-8493-0880-1.
Peltier, Thomas R. (2002). Information Security Policies, Procedures, and Standards: Guidelines for Effective Information Security Management. Boca Raton, FL: Auerbach Publications. ISBN 0-8493-1137-3.
White, Gregory (2003). All-in-one Security+ Certification Exam Guide. Emeryville, CA: McGraw-Hill/Osborne. ISBN 0-07-222633-1.
Dhillon, Gurpreet (2007). Principles of Information Systems Security: Text and Cases. NY: John Wiley & Sons. ISBN 978-0471450566.
Encryption
In cryptography, encryption is the process of transforming information (referred to as plaintext) using an algorithm (called a cipher) to make it unreadable to anyone except those possessing special knowledge, usually referred to as a key. The result of the process is encrypted information (in cryptography, referred to as ciphertext). In many contexts, the word encryption also implicitly refers to the reverse process, decryption (e.g. software for encryption can typically also perform decryption), which makes the encrypted information readable again (i.e. makes it unencrypted). Encryption has long been used by militaries and governments to facilitate secret communication. Encryption is now commonly used to protect information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed utilized encryption for some of their data in transit, and 53% utilized encryption for some of their data in storage.[1] Encryption can be used to protect data "at rest", such as files on computers and storage devices (e.g. USB flash drives). In recent years there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives. Encrypting such files at rest helps protect them should physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), are another, somewhat different, example of using encryption on data at rest. Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines.
There have been numerous reports of data in transit being intercepted in recent years.[2] Encrypting data in transit also helps to secure it as it is often difficult to physically secure all access to networks. Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature. Standards and cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single slip-up in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See, e.g., traffic analysis, TEMPEST, or Trojan horse.
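The point that encryption alone does not protect integrity can be illustrated with a toy XOR "stream cipher" (a deliberately insecure sketch; the key, message, and function names below are mine, not from the article). An attacker who flips a ciphertext bit flips the corresponding plaintext bit without knowing the key, which is exactly the kind of tampering a MAC is meant to detect:

```python
import hmac
import hashlib

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy XOR cipher for illustration only -- NOT secure in practice.
    # XOR is its own inverse, so the same function decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"example-key"
msg = b"PAY 100"
ct = xor_encrypt(key, msg)

# An attacker flips bits in the ciphertext without knowing the key...
tampered = bytearray(ct)
tampered[4] ^= ord("1") ^ ord("9")  # turn the '1' into a '9'

# ...and decryption silently yields a different, plausible message.
assert xor_encrypt(key, bytes(tampered)) == b"PAY 900"

# A MAC computed over the ciphertext detects the modification.
mac = hmac.new(key, ct, hashlib.sha256).digest()
mac_tampered = hmac.new(key, bytes(tampered), hashlib.sha256).digest()
assert not hmac.compare_digest(mac, mac_tampered)
```

Real systems use an authenticated cipher or an encrypt-then-MAC construction for the same reason.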
One of the earliest public key encryption applications was called Pretty Good Privacy (PGP). It was written in 1991 by Phil Zimmermann and was purchased by Network Associates (now PGP Corporation) in 1997. There are a number of reasons why an encryption product may not be suitable in all cases. First, e-mail must be digitally signed at the point it was created to provide non-repudiation for some legal purposes; otherwise the sender could argue that it was tampered with after it left their computer but before it was encrypted at a gateway. An encryption product may also not be practical when mobile users need to send e-mail from outside the corporate network.[3]
See also
Cryptography
Cold boot attack
Cyberspace Electronic Security Act (in the US)
Encryption software
Cipher
Key
Famous ciphertexts
Rip van Winkle cipher
Strong secrecy
Disk encryption
Secure USB drive
Secure Network Communications
Security and Freedom Through Encryption Act
References
Helen Fouché Gaines, Cryptanalysis, 1939, Dover. ISBN 0-486-20097-3
David Kahn, The Codebreakers: The Story of Secret Writing, 1967. ISBN 0-684-83130-9
Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1966. ISBN 0-88385-622-0
External links
PGP's Universal Server Provides Unobtrusive Encryption (http://www.enterprisenetworkingplanet.com/_featured/article.php/3792771/PGPs-Universal-Server-Provides-Unobtrusive-Encryption.htm)
SecurityDocs resource for encryption whitepapers [4]
A guide to encryption for beginners [5]
Accumulative archive of various cryptography mailing lists [6], including the Cryptography list at metzdowd and the SecurityFocus Crypto list
References
[1] Robert Richardson, 2008 CSI Computer Crime and Security Survey, at 19. Online at http://i.cmpnet.com/v2.gocsi.com/pdf/CSIsurvey2008.pdf
[2] Fiber Optic Networks Vulnerable to Attack, Information Security Magazine, November 15, 2006, Sandra Kay Miller
[3] http://www.enterprisenetworkingplanet.com/_featured/article.php/3792771/PGPs-Universal-Server-Provides-Unobtrusive-Encryption.htm
[4] http://www.securitydocs.com/Encryption
[5] http://rexor.codeplex.com/documentation
[6] http://www.xml-dev.com/lurker/list/crypto.en.html
Cryptography
Cryptography (or cryptology; from Greek kryptós, "hidden, secret", and gráphō, "I write", or -logia, "study", respectively)[1] is the practice and study of hiding information. Modern cryptography intersects the disciplines of mathematics, computer science, and engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. Cryptology prior to the modern age was almost synonymous with encryption, the conversion of information from a readable state to apparent nonsense. The sender retained the ability to decrypt the information and therefore avoid unwanted persons being able to read it. (Figure: German Lorenz cipher machine, used in World War II to encrypt very-high-level general staff messages.) Since WWI and the advent of the computer, the methods used to carry out cryptology have become increasingly complex and its application more widespread. Alongside the advancement in cryptology-related technology, the practice has raised a number of legal issues, some of which remain unresolved.
Terminology
Until modern times cryptography referred almost exclusively to encryption, which is the process of converting ordinary information (plaintext) into unintelligible gibberish (i.e., ciphertext).[2] Decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that perform the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a key. This is a secret parameter (ideally known only to the communicants) for a specific message exchange context. Keys are important, as ciphers without variable keys can be trivially broken with only the knowledge of the cipher used and are therefore useless (or even counter-productive) for most purposes. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks. In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning. It means the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, wallaby replaces attack at dawn). Codes are no longer used in serious cryptography, except incidentally for such things as unit designations (e.g., Bronco Flight or Operation Overlord), since properly chosen ciphers are both more practical and more secure than even the best codes and also are better adapted to computers. Cryptanalysis is the term used for the study of methods for obtaining the meaning of encrypted information without access to the key normally required to do so; i.e., it is the study of how to crack encryption algorithms or their implementations.
Some use the terms cryptography and cryptology interchangeably in English, while others (including US military practice generally) use cryptography to refer specifically to the use and practice of cryptographic techniques and cryptology to refer to the combined study of cryptography and cryptanalysis.[3] [4] English is more flexible than several other languages in which cryptology (done by cryptologists) is always used in the second sense above. In the English Wikipedia the general term used for the entire field is cryptography (done by cryptographers).
The study of characteristics of languages which have some application in cryptography (or cryptology), i.e. frequency data, letter combinations, universal patterns, etc., is called cryptolinguistics.
Classic cryptography
The earliest forms of secret writing required little more than local pen and paper analogs, as most people could not read. More literacy, or literate opponents, required actual cryptography. The main classical cipher types are transposition ciphers, which rearrange the order of letters in a message (e.g., 'hello world' becomes 'ehlol owrdl' in a trivially simple rearrangement scheme), and substitution ciphers, which systematically replace letters or groups of letters with other letters or groups of letters (e.g., 'fly at once' becomes 'gmz bu podf' by replacing each letter with the one following it in the Latin alphabet). Simple versions of either offered little confidentiality from enterprising opponents, and still do. (Figure: Reconstructed ancient Greek scytale (rhymes with "Italy"), an early cipher device.) An early substitution cipher was the Caesar cipher, in which each letter in the plaintext was replaced by a letter some fixed number of positions further down the alphabet. It was named after Julius Caesar, who is reported to have used it, with a shift of 3, to communicate with his generals during his military campaigns, much like the excess-3 code in Boolean algebra. There are records of several early Hebrew ciphers as well. The earliest known use of cryptography is some carved ciphertext on stone in Egypt (ca. 1900 BC), but this may have been done for the amusement of literate observers. The next oldest is bakery recipes from Mesopotamia. Cryptography is recommended in the Kama Sutra as a way for lovers to communicate without inconvenient discovery.[5] Steganography (i.e., hiding even the existence of a message so as to keep it confidential) was also first developed in ancient times. An early example, from Herodotus, concealed a message (a tattoo on a slave's shaved head) under the regrown hair.[2] More modern examples of steganography include the use of invisible ink, microdots, and digital watermarks to conceal information.
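The shift ciphers described above are short to sketch in code. The function below (an illustrative sketch; the name is mine, not from the article) shifts each letter a fixed number of positions down the alphabet, reproducing both the shift-by-1 example and Caesar's reported shift of 3:

```python
def shift_cipher(text: str, shift: int) -> str:
    """Encrypt by shifting each letter `shift` positions down the alphabet."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr(base + (ord(ch) - base + shift) % 26))
        else:
            out.append(ch)  # leave spaces and punctuation unchanged
    return "".join(out)

print(shift_cipher("fly at once", 1))  # gmz bu podf
print(shift_cipher("attack at dawn", 3))  # Caesar's reported shift of 3
```

Decryption is just a shift in the opposite direction, e.g. `shift_cipher(ciphertext, -3)`.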
Ciphertexts produced by a classical cipher (and some modern ciphers) always reveal statistical information about the plaintext, which can often be used to break them. After the discovery of frequency analysis, perhaps by the Arab mathematician and polymath Al-Kindi (also known as Alkindus) in the 9th century, nearly all such ciphers became more or less readily breakable by any informed attacker. Such classical ciphers still enjoy popularity today, though mostly as puzzles (see cryptogram). Al-Kindi wrote a book on cryptography entitled Risalah fi Istikhraj al-Mu'amma (Manuscript for the Deciphering of Cryptographic Messages), in which he described the first cryptanalysis techniques, including some for polyalphabetic ciphers.[6] [7]
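Frequency analysis as described here can be sketched in a few lines: count how often each letter occurs in a ciphertext and compare against the known letter frequencies of the suspected plaintext language. The sample ciphertext below is my own (the phrase "secret messages were seen here" shifted by 3), chosen only to illustrate the technique:

```python
from collections import Counter

def letter_frequencies(ciphertext: str) -> list[tuple[str, int]]:
    """Return the ciphertext's letters ordered by frequency, most common first."""
    letters = [ch for ch in ciphertext.lower() if ch.isalpha()]
    return Counter(letters).most_common()

# In a simple substitution cipher, the most frequent ciphertext letter
# likely stands for 'e', the most common letter in English text.
ct = "vhfuhw phvvdjhv zhuh vhhq khuh"
print(letter_frequencies(ct))
# 'h' dominates; mapping h -> e suggests a Caesar shift of 3.
```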
Essentially all ciphers remained vulnerable to cryptanalysis using the frequency analysis technique until the development of the polyalphabetic cipher, most clearly by Leon Battista Alberti around the year 1467, though there is some indication that it was already known to Al-Kindi.[7] Alberti's innovation was to use different ciphers (i.e., substitution alphabets) for various parts of a message (perhaps for each successive plaintext letter at the limit). He also invented what was probably the first automatic cipher device, a wheel which implemented a partial realization of his invention. In the polyalphabetic Vigenère cipher, encryption uses a key word, which controls letter substitution depending on which letter of the key word is used. In the mid-1800s Charles Babbage showed that polyalphabetic ciphers of this type remained partially vulnerable to extended frequency analysis techniques.[2]
Although frequency analysis is a powerful and general technique against many ciphers, encryption has still been often effective in practice; many a would-be cryptanalyst was unaware of the technique. Breaking a message without using frequency analysis essentially required knowledge of the cipher used and perhaps of the key involved, thus making espionage, bribery, burglary, defection, etc., more attractive approaches to the cryptanalytically uninformed. (Figure: Enciphered letter from Gabriel de Luetz d'Aramon, French Ambassador to the Ottoman Empire, after 1546, with partial decipherment.) It was finally explicitly recognized in the 19th century that secrecy of a cipher's algorithm is not a sensible nor practical safeguard of message security; in fact, it was further realized that any adequate cryptographic scheme (including ciphers) should remain secure even if the adversary fully understands the cipher algorithm itself. Security of the key used should alone be sufficient for a good cipher to maintain confidentiality under an attack. This fundamental principle was first explicitly stated in 1883 by Auguste Kerckhoffs and is generally called Kerckhoffs' principle; alternatively and more bluntly, it was restated by Claude Shannon, the inventor of information theory and the fundamentals of theoretical cryptography, as Shannon's Maxim: 'the enemy knows the system'. Different physical devices and aids have been used to assist with ciphers. One of the earliest may have been the scytale of ancient Greece, a rod supposedly used by the Spartans as an aid for a transposition cipher (see image above). In medieval times, other aids were invented, such as the cipher grille, which was also used for a kind of steganography. With the invention of polyalphabetic ciphers came more sophisticated aids such as Alberti's own cipher disk, Johannes Trithemius' tabula recta scheme, and Thomas Jefferson's multi-cylinder (not publicly known, and reinvented independently by Bazeries around 1900).
Many mechanical encryption/decryption devices were invented early in the 20th century, and several were patented, among them rotor machines, famously including the Enigma machine used by the German government and military from the late 1920s and during World War II.[8] The ciphers implemented by better quality examples of these machine designs brought about a substantial increase in cryptanalytic difficulty after WWI.[9]
Use of a quality cipher is very efficient (i.e., fast and requiring few resources, such as memory or CPU capability), while breaking it requires an effort many orders of magnitude larger, and vastly larger than that required for any classical cipher, making cryptanalysis so inefficient and impractical as to be effectively impossible. Alternate methods of attack (bribery, burglary, threat, torture, ...) have become more attractive in consequence. Extensive open academic research into cryptography is relatively recent; it began only in the mid-1970s. In recent times, IBM personnel designed the algorithm that became the Federal (i.e., US) Data Encryption Standard; Whitfield Diffie and Martin Hellman published their key agreement algorithm;[10] and the RSA algorithm was published in Martin Gardner's Scientific American column. Since then, cryptography has become a widely used tool in communications, computer networks, and computer security generally. Some modern cryptographic techniques can only keep their keys secret if certain mathematical problems are intractable, such as the integer factorization or the discrete logarithm problems, so there are deep connections with abstract mathematics. (Figure: Credit card with smart-card capabilities; the 3-by-5-mm chip embedded in the card is shown, enlarged. Smart cards combine low cost and portability with the power to compute cryptographic algorithms.) There are no absolute proofs that a cryptographic technique is secure (but see one-time pad); at best, there are proofs that some techniques are secure if some computational problem is difficult to solve, or this or that assumption about implementation or practical use is met. As well as being aware of cryptographic history, cryptographic algorithm and system designers must also sensibly consider probable future developments while working on their designs.
For instance, continuous improvements in computer processing power have increased the scope of brute-force attacks, so required key lengths are similarly advancing. The potential effects of quantum computing are already being considered by some cryptographic system designers; the announced imminence of small implementations of these machines may be making the need for this preemptive caution rather more than merely speculative.[11] Essentially, prior to the early 20th century, cryptography was chiefly concerned with linguistic and lexicographic patterns. Since then the emphasis has shifted, and cryptography now makes extensive use of mathematics, including aspects of information theory, computational complexity, statistics, combinatorics, abstract algebra, number theory, and finite mathematics generally. Cryptography is also a branch of engineering, but an unusual one, since it deals with active, intelligent, and malevolent opposition (see cryptographic engineering and security engineering); other kinds of engineering (e.g., civil or chemical engineering) need deal only with neutral natural forces. There is also active research examining the relationship between cryptographic problems and quantum physics (see quantum cryptography and quantum computing).
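The key-length arithmetic behind brute-force attacks is easy to check: each extra key bit doubles the search space. The sketch below compares DES and AES-128 keyspaces; the attacker's testing rate of one billion keys per second is an assumed figure for illustration, not from the article:

```python
# Number of possible keys for two common key lengths.
des_keys = 2 ** 56    # single DES
aes_keys = 2 ** 128   # AES-128

# Hypothetical attacker testing one billion keys per second.
rate = 10 ** 9
seconds_per_year = 60 * 60 * 24 * 365

print(f"DES keyspace: {des_keys:.3e} keys "
      f"(~{des_keys / rate / seconds_per_year:.1f} years at 1e9 keys/s)")
print(f"AES keyspace: {aes_keys:.3e} keys "
      f"(~{aes_keys / rate / seconds_per_year:.3e} years at 1e9 keys/s)")
```

The 72 extra bits in AES-128 multiply the search effort by 2^72, which is why modest increases in key length outpace even large improvements in processing power.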
Modern cryptography
The modern field of cryptography can be divided into several areas of study. The chief ones are discussed here; see Topics in Cryptography for more.
Symmetric-key cryptography
Symmetric-key cryptography refers to encryption methods in which both the sender and receiver share the same key (or, less commonly, in which their keys are different, but related in an easily computable way). This was the only kind of encryption publicly known until June 1976.[10]
The modern study of symmetric-key ciphers relates mainly to the study of block ciphers and stream ciphers and to their applications. A block cipher is, in a sense, a modern embodiment of Alberti's polyalphabetic cipher: block ciphers take as input a block of plaintext and a key, and output a block of ciphertext of the same size. Since messages are almost always longer than a single block, some method of knitting together successive blocks is required. Several have been developed, some with better security in one aspect or another than others. These are the modes of operation and must be carefully considered when using a block cipher in a cryptosystem. (Figure: One round (out of 8.5) of the patented IDEA cipher, used in some versions of PGP for high-speed encryption of, for instance, e-mail.) The Data Encryption Standard (DES) and the Advanced Encryption Standard (AES) are block cipher designs which have been designated cryptography standards by the US government (though DES's designation was finally withdrawn after the AES was adopted).[12] Despite its deprecation as an official standard, DES (especially its still-approved and much more secure triple-DES variant) remains quite popular; it is used across a wide range of applications, from ATM encryption[13] to e-mail privacy[14] and secure remote access.[15] Many other block ciphers have been designed and released, with considerable variation in quality. Many have been thoroughly broken; see Category:Block ciphers.[11] [16] Stream ciphers, in contrast to the 'block' type, create an arbitrarily long stream of key material, which is combined with the plaintext bit-by-bit or character-by-character, somewhat like the one-time pad. In a stream cipher, the output stream is created based on a hidden internal state which changes as the cipher operates. That internal state is initially set up using the secret key material.
RC4 is a widely used stream cipher; see Category:Stream ciphers.[11] Block ciphers can be used as stream ciphers; see Block cipher modes of operation.

Cryptographic hash functions are a third type of cryptographic algorithm. They take a message of any length as input, and output a short, fixed-length hash which can be used in (for example) a digital signature. For good hash functions, an attacker cannot find two messages that produce the same hash. MD4 is a long-used hash function which is now broken; MD5, a strengthened variant of MD4, is also widely used but broken in practice. The U.S. National Security Agency developed the Secure Hash Algorithm series of MD5-like hash functions: SHA-0 was a flawed algorithm that the agency withdrew; SHA-1 is widely deployed and more secure than MD5, but cryptanalysts have identified attacks against it; the SHA-2 family improves on SHA-1, but it isn't yet widely deployed, and the U.S. standards authority thought it "prudent" from a security perspective to develop a new standard to "significantly improve the robustness of NIST's overall hash algorithm toolkit."[17] Thus, a hash function design competition is underway and meant to select a new U.S. national standard, to be called SHA-3, by 2012.

Message authentication codes (MACs) are much like cryptographic hash functions, except that a secret key can be used to authenticate the hash value[11] upon receipt.
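The difference between a plain hash and a keyed MAC can be shown with Python's standard library (hashlib and hmac are standard modules; the key and messages are illustrative):

```python
import hashlib
import hmac

# A cryptographic hash: a fixed-length digest of arbitrary-length input.
# Anyone can compute it; there is no key involved.
digest = hashlib.sha256(b"abc").hexdigest()
print(len(digest))  # 64 hex characters = 256 bits, regardless of input size

# A MAC binds the digest to a secret key, so only holders of the key can
# produce or verify the tag (HMAC-SHA256 here).
tag = hmac.new(b"shared-secret", b"abc", hashlib.sha256).digest()
recomputed = hmac.new(b"shared-secret", b"abc", hashlib.sha256).digest()
ok = hmac.compare_digest(tag, recomputed)  # constant-time verification
```

A receiver who shares the secret key recomputes the tag over the received message and compares; a bare hash would let anyone forge a matching value.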
Public-key cryptography
Symmetric-key cryptosystems use the same key for encryption and decryption of a message, though a message or group of messages may have a different key than others. A significant disadvantage of symmetric ciphers is the key management necessary to use them securely. Each distinct pair of communicating parties must, ideally, share a different key, and perhaps for each ciphertext exchanged as well. The number of keys required increases as the square of the number of network members, which very quickly requires complex key management schemes to keep them all straight and secret. The difficulty of securely establishing a secret key between two communicating parties, when a secure channel does not already exist between them, also presents a chicken-and-egg problem which is a
considerable practical obstacle for cryptography users in the real world. In a groundbreaking 1976 paper, Whitfield Diffie and Martin Hellman proposed the notion of public-key (also, more generally, called asymmetric-key) cryptography, in which two different but mathematically related keys are used: a public key and a private key.[18] A public-key system is so constructed that calculation of one key (the 'private key') is computationally infeasible from the other (the 'public key'), even though they are necessarily related. Instead, both keys are generated secretly, as an interrelated pair.[19] The historian David Kahn described public-key cryptography as "the most revolutionary new concept in the field since polyalphabetic substitution emerged in the Renaissance".[20]
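Two points above lend themselves to a short sketch: the quadratic growth in pairwise keys, and the Diffie-Hellman exchange from the 1976 paper, which sidesteps the key-establishment problem. All numeric parameters here are illustrative toys:

```python
import secrets

def pairwise_keys(n: int) -> int:
    """Symmetric keys needed if every pair of n parties shares its own key."""
    return n * (n - 1) // 2

assert pairwise_keys(10) == 45
assert pairwise_keys(1000) == 499500  # grows as the square of n

# Toy Diffie-Hellman exchange over a small prime (illustration only; real
# deployments use moduli of 2048+ bits or elliptic-curve groups).
p = 4294967291  # a small prime (2**32 - 5), far too small for real security
g = 5

a = secrets.randbelow(p - 2) + 1  # Alice's private exponent
b = secrets.randbelow(p - 2) + 1  # Bob's private exponent
A = pow(g, a, p)                  # Alice's public value, sent in the clear
B = pow(g, b, p)                  # Bob's public value, sent in the clear

# Each party combines its own secret with the other's public value;
# both arrive at g**(a*b) mod p without the key ever being transmitted.
assert pow(B, a, p) == pow(A, b, p)
```

An eavesdropper sees only p, g, A, and B; recovering the shared value requires solving the discrete logarithm problem, believed infeasible at realistic sizes.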
Whitfield Diffie and Martin Hellman, authors of the first published paper on public-key cryptography
In public-key cryptosystems, the public key may be freely distributed, while its paired private key must remain secret. The public key is typically used for encryption, while the private or secret key is used for decryption. Diffie and Hellman showed that public-key cryptography was possible by presenting the Diffie-Hellman key exchange protocol.[10] In 1978, Ronald Rivest, Adi Shamir, and Len Adleman invented RSA, another public-key system.[21] In 1997, it finally became publicly known that asymmetric-key cryptography had been invented by James H. Ellis at GCHQ, a British intelligence organization, and that, in the early 1970s, both the Diffie-Hellman and RSA algorithms had been previously developed (by Malcolm J. Williamson and Clifford Cocks, respectively).[22]

The Diffie-Hellman and RSA algorithms, in addition to being the first publicly known examples of high-quality public-key algorithms, have been among the most widely used. Others include the Cramer-Shoup cryptosystem, ElGamal encryption, and various elliptic curve techniques. See Category:Asymmetric-key cryptosystems.

Padlock icon from the Firefox Web browser, meant to indicate a page has been sent in SSL- or TLS-encrypted protected form. However, such an icon is not a guarantee of security; any subverted browser might mislead a user by displaying such an icon when a transmission is not actually being protected by SSL or TLS.

In addition to encryption, public-key cryptography can be used to implement digital signature schemes. A digital signature is reminiscent of an ordinary signature; they both have the characteristic that they are easy for a user to produce, but difficult for anyone else to forge. Digital signatures can also be permanently tied to the content of the message being signed; they cannot then be 'moved' from one document to another, for any attempt will be detectable. In digital signature schemes, there are two algorithms: one for signing, in which a secret key is used to process the message (or a hash of the message, or both), and one for verification, in which the matching public key is used with the message to check the validity of the signature. RSA and DSA are two of the most popular digital signature schemes. Digital signatures are central to the operation of public key infrastructures and many network security schemes (e.g., SSL/TLS, many VPNs, etc.).[16]

Public-key algorithms are most often based on the computational complexity of "hard" problems, often from number theory. For example, the hardness of RSA is related to the integer factorization problem, while Diffie-Hellman and DSA are related to the discrete logarithm problem. More recently, elliptic curve cryptography has developed, in which security is based on number-theoretic problems involving elliptic curves. Because of the difficulty of the underlying problems, most public-key algorithms involve operations such as modular multiplication and exponentiation, which are much more computationally expensive than the techniques used in most block ciphers, especially with typical key sizes. As a result, public-key cryptosystems are commonly hybrid cryptosystems, in which a fast high-quality symmetric-key encryption algorithm is used for the message itself, while the relevant symmetric key is sent with the message, but encrypted using a public-key algorithm. Similarly, hybrid signature schemes are often used, in which a cryptographic hash function is computed, and only the resulting hash is digitally signed.[11]
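The hash-then-sign idea can be sketched with textbook RSA. The tiny primes and the absence of padding make this a toy for exposition only, not a usable signature scheme:

```python
import hashlib

# Textbook RSA hash-then-sign with tiny illustrative primes. Real signatures
# use large keys plus a padding scheme such as RSASSA-PSS.
p, q = 61, 53
n = p * q   # 3233
e = 17      # public exponent
d = 2753    # private exponent: (e * d) % lcm(p - 1, q - 1) == 1

def sign(message: bytes) -> int:
    """Hash the message, then apply the private key to the (reduced) hash."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int) -> bool:
    """Recover the hash with the public key and compare."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

sig = sign(b"pay 100")
assert verify(b"pay 100", sig)            # genuine signature checks out
assert not verify(b"pay 100", (sig + 1) % n)  # tampered signature fails
```

Signing only the short hash, rather than the whole message, is exactly the hybrid scheme described above: the expensive public-key operation runs once on a fixed-size value.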
Cryptanalysis
The goal of cryptanalysis is to find some weakness or insecurity in a cryptographic scheme, thus permitting its subversion or evasion. It is a common misconception that every encryption method can be broken. In connection with his WWII work at Bell Labs, Claude Shannon proved that the one-time pad cipher is unbreakable, provided the key material is truly random, never reused, kept secret from all possible attackers, and of equal or greater length than the message.[23] Most ciphers, apart from the one-time pad, can be broken with enough computational effort by brute force attack, but the amount of effort needed may be exponentially dependent on the key size, as compared to the effort needed to use the cipher. In such cases, effective security could be achieved if it is proven that the effort required (i.e., "work factor", in Shannon's terms) is beyond the ability of any adversary. This means it must be shown that no efficient method (as opposed to the time-consuming brute force method) can be found to break the cipher. Since no such showing can currently be made, the one-time pad remains the only theoretically unbreakable cipher.
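Shannon's conditions can be illustrated with a minimal one-time pad in Python. This is illustrative only; the unbreakability guarantee holds only while every key byte is truly random, kept secret, and never reused:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """One-time pad: XOR with a fresh random key as long as the message."""
    key = secrets.token_bytes(len(plaintext))  # truly random, same length
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """XOR with the same key recovers the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"meet at noon")
assert otp_decrypt(key, ct) == b"meet at noon"
```

Because every possible plaintext of the same length corresponds to some key, the ciphertext alone carries no information about the message; reusing the key destroys this property immediately.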
There are a wide variety of cryptanalytic attacks, and they can be classified in any of several ways. A common distinction turns on what an attacker knows and what capabilities are available. In a ciphertext-only attack, the cryptanalyst has access only to the ciphertext (good modern cryptosystems are usually effectively immune to ciphertext-only attacks). In a known-plaintext attack, the cryptanalyst has access to a ciphertext and its corresponding plaintext (or to many such pairs). In a chosen-plaintext attack, the cryptanalyst may choose a plaintext and learn its corresponding ciphertext (perhaps many times); an example is gardening, used by the British during WWII. Finally, in a chosen-ciphertext attack, the cryptanalyst may be able to choose ciphertexts and learn their corresponding plaintexts.[11] Also important, often overwhelmingly so, are mistakes (generally in the design or use of one of the protocols involved; see Cryptanalysis of the Enigma for some historical examples of this).
Variants of the Enigma machine, used by Germany's military and civil authorities from the late 1920s through World War II, implemented a complex electro-mechanical polyalphabetic cipher. Breaking and reading of the Enigma cipher at Poland's Cipher Bureau, for 7 years before the war, and subsequent decryption at Bletchley Park, was important to Allied victory.[2]
Cryptanalysis of symmetric-key ciphers typically involves looking for attacks against the block ciphers or stream ciphers that are more efficient than any attack that could be mounted against a perfect cipher. For example, a simple brute force attack against DES requires one known plaintext and 2^55 decryptions, trying approximately half of the possible keys, to reach a point at which chances are better than even that the key sought will have been found. But this may not be enough assurance; a linear cryptanalysis attack against DES requires 2^43 known plaintexts and approximately 2^43 DES operations.[24] This is a considerable improvement on brute force attacks.
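The work factors just quoted can be compared directly; the operations-per-second rate below is a hypothetical figure chosen only to make the scale concrete:

```python
# Work factors from the DES discussion: brute force needs about 2**55 trial
# decryptions on average, linear cryptanalysis about 2**43 DES operations
# (plus 2**43 known plaintexts).
brute_force = 2 ** 55
linear = 2 ** 43
assert brute_force // linear == 4096  # roughly a 4000x reduction in work

# At a hypothetical 10**9 DES operations per second:
seconds_brute = brute_force / 10 ** 9   # about 36 million seconds (~417 days)
seconds_linear = linear / 10 ** 9       # under 3 hours
```

The comparison also shows why "broken" in cryptanalysis is relative: both attacks are enormous, but one is dramatically cheaper than the other.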
Poznań monument (center) to Polish cryptologists whose breaking of Germany's Enigma machine ciphers, beginning in 1932, altered the course of World War II
Public-key algorithms are based on the computational difficulty of various problems. The most famous of these is integer factorization (e.g., the RSA algorithm is based on a problem related to integer factoring), but the discrete logarithm problem is also important. Much public-key cryptanalysis concerns numerical algorithms for solving these computational problems, or some of them, efficiently (i.e., in a practical time). For instance, the best known algorithms for solving the elliptic curve-based version of discrete logarithm are much more time-consuming than the best known algorithms for factoring, at least for problems of more or less equivalent size. Thus, other things being equal, to achieve an equivalent strength of attack resistance, factoring-based encryption techniques must use larger keys than elliptic curve techniques. For this reason, public-key cryptosystems based on elliptic curves have become popular since their invention in the mid-1990s.

While pure cryptanalysis uses weaknesses in the algorithms themselves, other attacks on cryptosystems are based on actual use of the algorithms in real devices, and are called side-channel attacks. If a cryptanalyst has access to, for example, the amount of time the device took to encrypt a number of plaintexts or report an error in a password or PIN character, he may be able to use a timing attack to break a cipher that is otherwise resistant to analysis. An attacker might also study the pattern and length of messages to derive valuable information; this is known as traffic analysis,[25] and can be quite useful to an alert adversary.

Poor administration of a cryptosystem, such as permitting too short keys, will make any system vulnerable, regardless of other virtues. And, of course, social engineering, and other attacks against the personnel who work with cryptosystems or the messages they handle (e.g., bribery, extortion, blackmail, espionage, torture, ...) may be the most productive attacks of all.
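As a concrete illustration of the timing-attack point, compare a naive secret comparison with a constant-time one. hmac.compare_digest is a real standard-library function; the stored tag is an illustrative value:

```python
import hmac

# A naive equality check on secret data can leak timing information: `==`
# may return as soon as the first differing byte is found, so response time
# correlates with how many leading bytes an attacker has guessed correctly.
stored_tag = b"\x12\x34\x56\x78"

def check_naive(candidate: bytes) -> bool:
    return candidate == stored_tag  # timing may depend on the data

def check_safe(candidate: bytes) -> bool:
    # Examines every byte regardless of where a mismatch occurs.
    return hmac.compare_digest(candidate, stored_tag)

assert check_safe(b"\x12\x34\x56\x78")
assert not check_safe(b"\x12\x34\x56\x00")
```

Both functions return the same answers; the defense lies entirely in the side channel, which is why such bugs survive ordinary functional testing.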
Cryptographic primitives
Much of the theoretical work in cryptography concerns cryptographic primitives (algorithms with basic cryptographic properties) and their relationship to other cryptographic problems. More complicated cryptographic tools are then built from these basic primitives. These primitives provide fundamental properties, which are used to develop more complex tools called cryptosystems or cryptographic protocols, which guarantee one or more high-level security properties. Note, however, that the distinction between cryptographic primitives and cryptosystems is quite arbitrary; for example, the RSA algorithm is sometimes considered a cryptosystem, and sometimes a primitive. Typical examples of cryptographic primitives include pseudorandom functions, one-way functions, etc.
Cryptosystems
One or more cryptographic primitives are often used to develop a more complex algorithm, called a cryptographic system, or cryptosystem. Cryptosystems (e.g. El-Gamal encryption) are designed to provide particular functionality (e.g. public key encryption) while guaranteeing certain security properties (e.g. CPA security in the random oracle model). Cryptosystems use the properties of the underlying cryptographic primitives to support the system's security properties. Of course, as the distinction between primitives and cryptosystems is somewhat arbitrary, a sophisticated cryptosystem can be derived from a combination of several more primitive cryptosystems.

In many cases, the cryptosystem's structure involves back and forth communication among two or more parties in space (e.g., between the sender of a secure message and its receiver) or across time (e.g., cryptographically protected backup data). Such cryptosystems are sometimes called cryptographic protocols. Some widely known cryptosystems include RSA encryption, Schnorr signature, El-Gamal encryption, PGP, etc. More complex cryptosystems include electronic cash[26] systems, signcryption systems, etc. Some more 'theoretical' cryptosystems include interactive proof systems[27] (like zero-knowledge proofs[28]), systems for secret sharing,[29] [30] etc.

Until recently, most security properties of most cryptosystems were demonstrated using empirical techniques, or using ad hoc reasoning. Recently, there has been considerable effort to develop formal techniques for establishing the security of cryptosystems; this has been generally called provable security. The general idea of provable security is to give arguments about the computational difficulty needed to compromise some security aspect of the cryptosystem (i.e., by any adversary).

The study of how best to implement and integrate cryptography in software applications is itself a distinct field; see Cryptographic engineering and Security engineering.
Legal issues
Prohibitions
Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous; those whose communications are open to inspection may be less likely to be either. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high-quality cryptography possible.

In some countries, even the domestic use of cryptography is, or has been, restricted. Until 1999, France significantly restricted the use of cryptography domestically, though it has since relaxed many of these rules. In China, a license is still required to use cryptography. Many countries have tight restrictions on the use of cryptography. Among the more restrictive are laws in Belarus, Kazakhstan, Mongolia, Pakistan, Singapore, Tunisia, and Vietnam.[31]

In the United States, cryptography is legal for domestic use, but there has been much conflict over legal issues related to cryptography. One particularly important issue has been the export of cryptography and cryptographic software and hardware. Probably because of the importance of cryptanalysis in World War II and an expectation that cryptography would continue to be important for national security, many Western governments have, at some point, strictly regulated export of cryptography. After World War II, it was illegal in the US to sell or distribute encryption technology overseas; in fact, encryption was designated as auxiliary military equipment and put on the United States Munitions List.[32] Until the development of the personal computer, asymmetric key algorithms (i.e., public key techniques), and the Internet, this was not especially problematic.
However, as the Internet grew and computers became more widely available, high-quality encryption techniques became well known around the globe. As a result, export controls came to be seen as an impediment to commerce and to research.
Export controls
In the 1990s, there were several challenges to US export regulations of cryptography. One involved Philip Zimmermann's Pretty Good Privacy (PGP) encryption program; it was released in the US, together with its source code, and found its way onto the Internet in June 1991. After a complaint by RSA Security (then called RSA Data Security, Inc., or RSADSI), Zimmermann was criminally investigated by the Customs Service and the FBI for several years. No charges were ever filed, however.[33] [34] Also, Daniel Bernstein, then a graduate student at UC Berkeley, brought a lawsuit against the US government challenging some aspects of the restrictions based on free speech grounds. The 1995 case Bernstein v. United States ultimately resulted in a 1999 decision that printed source code for cryptographic algorithms and systems was protected as free speech by the United States Constitution.[35]

In 1996, thirty-nine countries signed the Wassenaar Arrangement, an arms control treaty that deals with the export of arms and "dual-use" technologies such as cryptography. The treaty stipulated that the use of cryptography with short key-lengths (56-bit for symmetric encryption, 512-bit for RSA) would no longer be export-controlled.[36] Cryptography exports from the US are now much less strictly regulated than in the past as a consequence of a major relaxation in 2000;[31] there are no longer very many restrictions on key sizes in US-exported mass-market software. In practice today, since the relaxation in US export restrictions, and because almost every personal computer connected to the Internet, everywhere in the world, includes US-sourced web browsers such as Mozilla Firefox or Microsoft Internet Explorer, almost every Internet user worldwide has access to quality cryptography in their browsers (i.e., when using sufficiently long keys with properly operating and unsubverted software, etc.); examples are the Transport Layer Security (TLS) and SSL protocol stacks.
The Mozilla Thunderbird and Microsoft Outlook E-mail client programs similarly can connect to IMAP or POP servers via TLS, and can send and receive email encrypted with S/MIME. Many Internet users don't realize that their basic application software contains such extensive cryptosystems. These browsers and email programs are so ubiquitous that even governments whose intent is to regulate civilian use of cryptography generally don't find it practical to do much to control distribution or use of cryptography of this quality, so even when such laws are in force, actual enforcement is often effectively impossible.
NSA involvement
Another contentious issue connected to cryptography in the United States is the influence of the National Security Agency on cipher development and policy. NSA was involved with the design of DES during its development at IBM and its consideration by the National Bureau of Standards as a possible Federal Standard for cryptography.[37] DES was designed to be resistant to differential cryptanalysis,[38] a powerful and general cryptanalytic technique known to NSA and IBM, that became publicly known only when it was rediscovered in the late 1980s.[39] According to Steven Levy, IBM rediscovered differential cryptanalysis,[40] but kept the technique secret at NSA's request. The technique became publicly known only when Biham and Shamir re-rediscovered and announced it some years later. The entire affair illustrates the difficulty of determining what resources and knowledge an attacker might actually have.

Another instance of NSA's involvement was the 1993 Clipper chip affair, an encryption microchip intended to be part of the Capstone cryptography-control initiative. Clipper was widely criticized by cryptographers for two reasons. First, the cipher algorithm (called Skipjack) was classified at the time, though it was declassified in 1998, long after the Clipper initiative lapsed; the secret cipher caused concerns that NSA had deliberately made the cipher weak in order to assist its intelligence efforts. Second, the whole initiative was criticized for its violation of Kerckhoffs' principle, as the scheme included a special escrow key held by the government for use by law enforcement, for example in wiretaps.[34]
See also
Books on cryptography
Watermarking
Watermark detection
Category:Cryptographers
Encyclopedia of Cryptography and Security
List of cryptographers
List of important publications in computer science#Cryptography
Topics in cryptography
Cipher System Identification
Unsolved problems in computer science
CrypTool: the most widespread e-learning program about cryptography and cryptanalysis; open source
List of multiple discoveries (see "RSA")
Flexiprovider: open-source Java Cryptographic Provider
Strong secrecy, a term used in cryptography
Further reading
Richard J. Aldrich, GCHQ: The Uncensored Story of Britain's Most Secret Intelligence Agency, HarperCollins, July 2010.
Becket, B (1988). Introduction to Cryptology. Blackwell Scientific Publications. ISBN 0-632-01836-4. OCLC 16832704. Excellent coverage of many classical ciphers and cryptography concepts and of the "modern" DES and RSA systems.
Cryptography and Mathematics by Bernhard Esslinger, 200 pages, part of the free open-source package CrypTool, PDF download [43].
Cryptography In Code: A Mathematical Journey by Sarah Flannery (with David Flannery). Popular account of Sarah's award-winning project on public-key cryptography, co-written with her father.
James Gannon, Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century, Washington, D.C., Brassey's, 2001, ISBN 1-57488-367-4.
Oded Goldreich, Foundations of Cryptography [44], in two volumes, Cambridge University Press, 2001 and 2004.
Introduction to Modern Cryptography [45] by Jonathan Katz and Yehuda Lindell.
Alvin's Secret Code by Clifford B. Hicks (children's novel that introduces some basic cryptography and cryptanalysis).
Ibrahim A. Al-Kadi, "The Origins of Cryptology: the Arab Contributions," Cryptologia, vol. 16, no. 2 (April 1992), pp. 97-126.
Handbook of Applied Cryptography [46] by A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone, CRC Press (PDF download available); somewhat more mathematical than Schneier's Applied Cryptography.
Christof Paar [47], Jan Pelzl, Understanding Cryptography: A Textbook for Students and Practitioners [48], Springer, 2009. (Slides and other information available on the web site.) Very accessible introduction to practical cryptography for non-mathematicians.
Introduction to Modern Cryptography by Phillip Rogaway and Mihir Bellare, a mathematical introduction to theoretical cryptography including reduction-based security proofs. PDF download [49].
Cryptonomicon by Neal Stephenson (novel; WW2 Enigma cryptanalysis figures into the story, though not always realistically).
Johann-Christoph Woltag, 'Coded Communications (Encryption)' in Rüdiger Wolfrum (ed), Max Planck Encyclopedia of Public International Law (Oxford University Press 2009). "Max Planck Encyclopedia of Public International Law" [50], giving an overview of international law issues regarding cryptography.
External links
"DNA computing and cryptology: the future for Basel in Switzerland?" [51]
Crypto Glossary and Dictionary of Technical Cryptography [52]
Attack/Prevention [53]: resource for cryptography whitepapers, tools, videos, and podcasts
Cryptography: The Ancient Art of Secret Messages [54] by Monica Pawlan, February 1998
Handbook of Applied Cryptography [46] by A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone (PDF download available); somewhat more mathematical than Schneier's book
NSA's CryptoKids [55]
Overview and Applications of Cryptology [56] by the CrypTool Team; PDF; 3.8 MB; July 2008
RSA Laboratories' frequently asked questions about today's cryptography [57]
sci.crypt mini-FAQ [58]
Slides of a two-semester course Introduction to Cryptography [59] by Prof. Christof Paar, University of Bochum (slides are in English; the site also contains videos in German)
Early Cryptology, Cryptographic Shakespeare [60]
GCHQ: Britain's Most Secret Intelligence Agency [61]
Cryptix, complete cryptography solution for Mac OS X [62]
References
[1] Liddell and Scott's Greek-English Lexicon. Oxford University Press. (1984)
[2] David Kahn, The Codebreakers, 1967, ISBN 0-684-83130-9.
[3] Oded Goldreich, Foundations of Cryptography, Volume 1: Basic Tools, Cambridge University Press, 2001, ISBN 0-521-79172-3
[4] "Cryptology (definition)" (http://www.merriam-webster.com/dictionary/cryptology). Merriam-Webster's Collegiate Dictionary (11th edition). Merriam-Webster. Retrieved 2008-02-01.
[5] Kama Sutra, Sir Richard F. Burton, translator, Part I, Chapter III, 44th and 45th arts.
[6] Simon Singh, The Code Book, pp. 14-20
[7] Ibrahim A. Al-Kadi (April 1992), "The origins of cryptology: The Arab contributions", Cryptologia 16 (2): 97-126
[8] Hakim, Joy (1995). A History of Us: War, Peace and all that Jazz. New York: Oxford University Press. ISBN 0-19-509514-6.
[9] James Gannon, Stealing Secrets, Telling Lies: How Spies and Codebreakers Helped Shape the Twentieth Century, Washington, D.C., Brassey's, 2001, ISBN 1-57488-367-4.
[10] Whitfield Diffie and Martin Hellman, "New Directions in Cryptography", IEEE Transactions on Information Theory, vol. IT-22, Nov. 1976, pp. 644-654. (pdf (http://citeseer.ist.psu.edu/rd/86197922,340126,1,0.25,Download/http://citeseer.ist.psu.edu/cache/papers/cs/16749/http:zSzzSzwww.cs.rutgers.eduzSz~tdnguyenzSzclasseszSzcs671zSzpresentationszSzArvind-NEWDIRS.pdf/diffie76new.pdf))
[11] AJ Menezes, PC van Oorschot, and SA Vanstone, Handbook of Applied Cryptography (http://web.archive.org/web/20050307081354/www.cacr.math.uwaterloo.ca/hac/), ISBN 0-8493-8523-7.
[12] FIPS PUB 197: The official Advanced Encryption Standard (http://www.csrc.nist.gov/publications/fips/fips197/fips-197.pdf).
[13] NCUA letter to credit unions (http://www.ncua.gov/letters/2004/04-CU-09.pdf), July 2004
[14] RFC 2440 - Open PGP Message Format
[15] SSH at windowsecurity.com (http://www.windowsecurity.com/articles/SSH.html) by Pawel Golen, July 2004
[16] Bruce Schneier, Applied Cryptography, 2nd edition, Wiley, 1996, ISBN 0-471-11709-9.
[17] National Institute of Standards and Technology (http://csrc.nist.gov/groups/ST/hash/documents/FR_Notice_Nov07.pdf)
[18] Whitfield Diffie and Martin Hellman, "Multi-user cryptographic techniques" [Diffie and Hellman, AFIPS Proceedings 45, pp. 109-112, June 8, 1976].
[19] Ralph Merkle was working on similar ideas at the time and encountered publication delays, and Hellman has suggested that the term used should be Diffie-Hellman-Merkle asymmetric key cryptography.
[20] David Kahn, "Cryptology Goes Public", 58 Foreign Affairs 141, 151 (fall 1979), p. 153.
[21] R. Rivest, A. Shamir, L. Adleman. A Method for Obtaining Digital Signatures and Public-Key Cryptosystems (http://theory.lcs.mit.edu/~rivest/rsapaper.pdf). Communications of the ACM, Vol. 21 (2), pp. 120-126. 1978. Previously released as an MIT "Technical Memo" in April 1977, and published in Martin Gardner's Scientific American Mathematical Recreations column
[22] Clifford Cocks. A Note on 'Non-Secret Encryption', CESG Research Report, 20 November 1973 (http://www.fi.muni.cz/usr/matyas/lecture/paper2.pdf).
[23] "Shannon": Claude Shannon and Warren Weaver, The Mathematical Theory of Communication, University of Illinois Press, 1963, ISBN 0-252-72548-4
[24] Pascal Junod, "On the Complexity of Matsui's Attack" (http://citeseer.ist.psu.edu/cache/papers/cs/22094/http:zSzzSzeprint.iacr.orgzSz2001zSz056.pdf/junod01complexity.pdf), SAC 2001.
[25] Dawn Song, David Wagner, and Xuqing Tian, "Timing Analysis of Keystrokes and Timing Attacks on SSH" (http://citeseer.ist.psu.edu/cache/papers/cs/22094/http:zSzzSzeprint.iacr.orgzSz2001zSz056.pdf/junod01complexity.pdf), In Tenth USENIX Security Symposium, 2001.
[26] S. Brands, "Untraceable Off-line Cash in Wallets with Observers" (http://scholar.google.com/url?sa=U&q=http://ftp.se.kde.org/pub/security/docs/ecash/crypto93.ps.gz), In Advances in Cryptology: Proceedings of CRYPTO, Springer-Verlag, 1994.
[27] László Babai. "Trading group theory for randomness" (http://portal.acm.org/citation.cfm?id=22192). Proceedings of the Seventeenth Annual Symposium on the Theory of Computing, ACM, 1985.
[28] S. Goldwasser, S. Micali, and C. Rackoff, "The Knowledge Complexity of Interactive Proof Systems", SIAM J. Computing, vol. 18, num. 1, pp. 186-208, 1989.
[29] G. Blakley. "Safeguarding cryptographic keys." In Proceedings of AFIPS 1979, volume 48, pp. 313-317, June 1979.
[30] A. Shamir. "How to share a secret." In Communications of the ACM, volume 22, pp. 612-613, ACM, 1979.
[31] RSA Laboratories' Frequently Asked Questions About Today's Cryptography (http://www.rsasecurity.com/rsalabs/node.asp?id=2152)
[32] Cryptography & Speech (http://web.archive.org/web/20051201184530/http://www.cyberlaw.com/cylw1095.html) from Cyberlaw
[33] "Case Closed on Zimmermann PGP Investigation" (http://www.ieee-security.org/Cipher/Newsbriefs/1996/960214.zimmerman.html), press note from the IEEE.
[34] Levy, Steven (2001). Crypto: How the Code Rebels Beat the Government, Saving Privacy in the Digital Age. Penguin Books. p. 56. ISBN 0-14-024432-8. OCLC 244148644, 48066852, 48846639.
[35] Bernstein v USDOJ (http://www.epic.org/crypto/export_controls/bernstein_decision_9_cir.html), 9th Circuit Court of Appeals decision.
[36] The Wassenaar Arrangement on Export Controls for Conventional Arms and Dual-Use Goods and Technologies (http://www.wassenaar.org/guidelines/index.html)
[37] "The Data Encryption Standard (DES)" (http://www.schneier.com/crypto-gram-0006.html#DES) from Bruce Schneier's CryptoGram newsletter, June 15, 2000
[38] Coppersmith, D. (May 1994). "The Data Encryption Standard (DES) and its strength against attacks" (http://www.research.ibm.com/journal/rd/383/coppersmith.pdf) (PDF). IBM Journal of Research and Development 38 (3): 243. doi:10.1147/rd.383.0243.
[39] E. Biham and A. Shamir, "Differential cryptanalysis of DES-like cryptosystems" (http://scholar.google.com/url?sa=U&q=http://www.springerlink.com/index/K54H077NP8714058.pdf), Journal of Cryptology, vol. 4, num. 1, pp. 3-72, Springer-Verlag, 1991.
[40] Levy, p. 56
[41] Digital Millennium Copyright Act (http://www.copyright.gov/legislation/dmca.pdf)
[42] http://www.macfergus.com/niels/dmca/cia.html
[43] https://www.cryptool.org/download/CrypToolScript-en.pdf
[44] http://www.wisdom.weizmann.ac.il/~oded/foc-book.html
[45] http://www.cs.umd.edu/~jkatz/imc.html
[46] http://www.cacr.math.uwaterloo.ca/hac/
[47] http://www.crypto.rub.de/en_paar.html
[48] http://www.cryptography-textbook.com
[49] http://www.cs.ucdavis.edu/~rogaway/classes/227/spring05/book/main.pdf
[50] http://www.mpepil.com
[51] http://www.basel-research.eu.com
[52] http://ciphersbyritter.com/GLOSSARY.HTM
[53] http://www.attackprevention.com/Cryptology/
[54] http://www.pawlan.com/Monica/crypto/
[55] http://www.nsa.gov/kids/
[56] http://www.cryptool.org/download/CrypToolPresentation-en.pdf
[57] http://www.rsasecurity.com/rsalabs/node.asp?id=2152
[58] http://www.spinstop.com/schlafly/crypto/faq.htm
[59] http://wiki.crypto.rub.de/Buch/slides_movies.php
[60] http://www.baconscipher.com/EarlyCryptology.html
[61] http://www2.warwick.ac.uk/fac/soc/pais/staff/aldrich/vigilant/lectures/gchq
[62] http://www.rbcafe.com/Cryptix
201
Bruce Schneier
Fields: Computer science
Institutions: Counterpane Internet Security, Bell Labs, United States Department of Defense, BT Group
Alma mater: American University, University of Rochester
Known for: Cryptography, security
Bruce Schneier (born January 15, 1963,[1] pronounced /ˈʃnaɪ.ər/) is an American cryptographer, computer security specialist, and writer. He is the author of several books on computer security and cryptography, and is the founder and chief technology officer of BT Counterpane, formerly Counterpane Internet Security, Inc. He received his master's degree in computer science from the American University in Washington, D.C., in 1988.[2]
The same paper subsequently plagiarized another article, by Ville Hallivuori, on "Real-time Transport Protocol (RTP) security" as well.[6] Schneier complained to the editors of the periodical, which generated a minor controversy.[7] The editor of the SIGCSE Bulletin removed the paper from their website and demanded official letters of admission and apology. Schneier noted on his blog that International Islamic University personnel had requested him "to close comments in this blog entry"; Schneier refused to close comments on the blog, but he did delete posts which he deemed "incoherent or hostile".[6]
Other writing
Schneier and Karen Cooper were nominated in 2000 for the Hugo Award, in the category of Best Related Book, for their Minicon 34 Restaurant Guide, a work originally published for the Minneapolis science fiction convention Minicon which gained a readership internationally in science fiction fandom for its wit and good humor.[8]
Cryptographic algorithms
Schneier has been involved in the creation of many cryptographic algorithms.
Hash functions: Skein
Stream ciphers: Solitaire, Phelix, Helix
Pseudo-random number generators: Fortuna, Yarrow
Block ciphers: Twofish, Blowfish, Threefish, MacGuffin
Publications
Schneier, Bruce. Applied Cryptography, John Wiley & Sons, 1994. ISBN 0-471-59756-2
Schneier, Bruce. Protect Your Macintosh, Peachpit Press, 1994. ISBN 1-56609-101-2
Schneier, Bruce. E-Mail Security, John Wiley & Sons, 1995. ISBN 0-471-05318-X
Schneier, Bruce. Applied Cryptography, Second Edition, John Wiley & Sons, 1996. ISBN 0-471-11709-9
Schneier, Bruce; Kelsey, John; Whiting, Doug; Wagner, David; Hall, Chris; Ferguson, Niels. The Twofish Encryption Algorithm, John Wiley & Sons, 1996. ISBN 0-471-35381-7
Schneier, Bruce; Banisar, David. The Electronic Privacy Papers, John Wiley & Sons, 1997. ISBN 0-471-12297-1
Schneier, Bruce. Secrets and Lies: Digital Security in a Networked World, John Wiley & Sons, 2000. ISBN 0-471-25311-1
Schneier, Bruce. Beyond Fear: Thinking Sensibly about Security in an Uncertain World, Copernicus Books, 2003. ISBN 0-387-02620-7
Ferguson, Niels; Schneier, Bruce. Practical Cryptography, John Wiley & Sons, 2003. ISBN 0-471-22357-3
Schneier, Bruce. Schneier on Security, John Wiley & Sons, 2008. ISBN 978-0-470-39535-6
Ferguson, Niels; Schneier, Bruce; Kohno, Tadayoshi. Cryptography Engineering, John Wiley & Sons, 2010. ISBN 978-0-470-47424-2
See also
Attack tree
Failing badly
Security theater
Snake oil (cryptography)
Schneier's Law
External links
Personal website, Schneier.com [9]
Talking security with Bruce Almighty [10]
Schneier at the 2009 RSA conference [11], video with Schneier participating on the Cryptographer's Panel, April 21, 2009, Moscone Center, San Francisco
Bruce Schneier Facts [12] (Parody)
References
[1] http://www.facebook.com/bruce.schneier
[2] Charles C. Mann, "Homeland Insecurity" (http://www.theatlantic.com/doc/200209/mann), www.theatlantic.com
[3] Blood, Rebecca (January 2007). "Bruce Schneier" (http://www.rebeccablood.net/bloggerson/bruceschneier.html). Bloggers on Blogging. Retrieved 2007-04-19.
[4] Schneier, Bruce. "Security Matters" (http://www.wired.com/commentary/securitymatters). Wired Magazine. Retrieved 2008-03-10.
[5] "Homeland Insecurity" (http://charlesmann.org/articles/Homeland-Insecurity-Atlantic.pdf), Atlantic Monthly, September 2002
[6] "Schneier on Security: Plagiarism and Academia: Personal Experience" (http://www.schneier.com/blog/archives/2005/08/plagiarism_and.html). Schneier.com. Retrieved 2009-06-09.
[7] "ONLINE - International News Network" (http://www.onlinenews.com.pk/details.php?id=85519). Onlinenews.com.pk. 2007-06-09. Retrieved 2009-06-09.
[8] "Hugo Awards Nominations" (http://www.locusmag.com/2000/News/News04d.html). Locus Magazine. 2000-04-21.
[9] http://www.schneier.com/
[10] http://www.itwire.com/content/view/16422/1090/1/0/
[11] http://media.omediaweb.com/rsa2009/preview/webcast.htm?id=1_5
[12] http://geekz.co.uk/schneierfacts/
7.0 Application
Application security
Application security encompasses measures taken throughout the application's life-cycle to prevent gaps in the security policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, deployment, upgrade, or maintenance of the application. Applications control only the use of resources granted to them, not which resources are granted to them; application security, in turn, determines how users of the application may use those resources. The Open Web Application Security Project (OWASP) and the Web Application Security Consortium (WASC) publish updates on the latest threats that impair web-based applications, helping developers, security testers and architects focus on better design and mitigation strategies. The OWASP Top 10 has become an industry norm for assessing web applications.
Methodology
According to the patterns & practices Improving Web Application Security book, a principle-based approach for application security includes:[1]
Know your threats.
Secure the network, host and application.
Incorporate security into your application life cycle.
Note that this approach is technology and platform independent; it is focused on principles, patterns, and practices. For more information on a principle-based approach to application security, see the patterns & practices Application Security Methodology.[2]
Threats / Attacks
Input validation: buffer overflow; cross-site scripting; SQL injection; canonicalization
Authentication: network eavesdropping; brute force attack; dictionary attacks; cookie replay; credential theft
Authorization: elevation of privilege; disclosure of confidential data; data tampering; luring attacks
Configuration management: unauthorized access to administration interfaces; unauthorized access to configuration stores; retrieval of clear text configuration data; lack of individual accountability; over-privileged process and service accounts
Sensitive information: access to sensitive data in storage; network eavesdropping; data tampering
Session management: session hijacking; session replay; man in the middle
Cryptography: poor key generation or key management; weak or custom encryption
Parameter manipulation: query string manipulation; form field manipulation; cookie manipulation; HTTP header manipulation
Exception management: information disclosure; denial of service
Auditing and logging: user denies performing an operation; attacker exploits an application without trace; attacker covers his or her tracks
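Several of the injection-style threats above share one mitigation: treat user input strictly as data, never as code. A minimal sketch of this idea, using Python's built-in sqlite3 module (the table and column names are illustrative, not from any particular application):

```python
import sqlite3

# In-memory demo database (schema is illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

def find_user(conn, username):
    # The ? placeholder makes the driver treat `username` purely as data,
    # so a payload like "x' OR '1'='1" cannot rewrite the query.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

print(find_user(conn, "alice"))         # the single matching row
print(find_user(conn, "x' OR '1'='1"))  # empty list: the payload stays inert
```

Had the query been built by string concatenation, the second call would have matched every row; with the placeholder it matches none.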
(through the acquisition of SPI Dynamics[6]), Nikto (open source). Tools in the static code analysis arena include Veracode[7], PreEmptive Solutions[8], and Parasoft[9]. Banking and large e-commerce corporations have been the early-adopter customer profile for these types of tools, and it is commonly held within these firms that both black box testing and white box testing tools are needed in the pursuit of application security. Typically, black box testing tools (penetration testing tools) are ethical hacking tools used to attack the application surface and expose vulnerabilities hidden within the source code hierarchy; they are executed against the already deployed application. White box testing tools (source code analysis tools) are used by either application security groups or application development groups. Typically introduced into a company through the application security organization, white box tools complement black box tools by giving specific visibility into the root vulnerabilities within the source code before it is deployed. Vulnerabilities identified by white box and black box testing are typically classified in accordance with the OWASP taxonomy for software coding errors. White box vendors have recently introduced dynamic versions of their source code analysis methods, which operate on deployed applications. Since white box tools now have dynamic versions similar to the black box tools, findings from both can be correlated in the same software error detection paradigm, giving fuller application protection for the client company. The advance of professional malware targeted at the Internet customers of online organizations has driven a change in web application design requirements since 2007.
It is generally assumed that a sizable percentage of Internet users will be compromised through malware, and that any data coming from their infected hosts may be tainted. Application security has therefore begun to manifest more advanced anti-fraud and heuristic detection systems in the back office, rather than within the client-side or web server code.[10]
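A back-office heuristic detector of the kind described above can be sketched as a simple rule-scoring function. The rules, field names, and thresholds below are invented for illustration; a production system would derive them from real fraud data:

```python
def fraud_score(txn):
    """Score a transaction dict against simple heuristic rules.

    All rules and weights here are invented for illustration.
    """
    score = 0
    if txn.get("amount", 0) > 1000:                       # unusually large order
        score += 2
    if txn.get("ip_country") != txn.get("card_country"):  # geography mismatch
        score += 3
    if txn.get("attempts_last_hour", 0) > 3:              # rapid retries suggest automation
        score += 3
    return score

def is_suspicious(txn, threshold=5):
    """Flag the transaction for manual review when the score crosses a threshold."""
    return fraud_score(txn) >= threshold

flagged = is_suspicious({"amount": 5000, "ip_country": "RO",
                         "card_country": "US", "attempts_last_hour": 6})
```

The point of placing such logic server-side is that a malware-controlled client can forge any data it sends, but it cannot rewrite checks it never sees.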
ISO/IEC 9798-6:2005 Information technology -- Security techniques -- Entity authentication -- Part 6: Mechanisms using manual data transfer
ISO/IEC 14888-1:1998 Information technology -- Security techniques -- Digital signatures with appendix -- Part 1: General
ISO/IEC 14888-2:1999 Information technology -- Security techniques -- Digital signatures with appendix -- Part 2: Identity-based mechanisms
ISO/IEC 14888-3:2006 Information technology -- Security techniques -- Digital signatures with appendix -- Part 3: Discrete logarithm based mechanisms
ISO/IEC 17799:2005 Information technology -- Security techniques -- Code of practice for information security management
ISO/IEC 24762:2008 Information technology -- Security techniques -- Guidelines for information and communications technology disaster recovery services
ISO/IEC 27006:2007 Information technology -- Security techniques -- Requirements for bodies providing audit and certification of information security management systems
Gramm-Leach-Bliley Act
PCI Data Security Standard (PCI DSS)
See also
Countermeasure
Data security
Database security
Information security
Trustworthy Computing Security Development Lifecycle
Web application
Web application framework
XACML
HERAS-AF
External links
Open Web Application Security Project [11]
The Web Application Security Consortium [12]
The Microsoft Security Development Lifecycle (SDL) [13]
patterns & practices Security Guidance for Applications [14]
QuietMove Web Application Security Testing Plug-in Collection for FireFox [15]
Advantages of an integrated security solution for HTML and XML [16]
References
[1] Improving Web Application Security: Threats and Countermeasures (http://msdn2.microsoft.com/en-us/library/ms994920.aspx#), published by Microsoft Corporation.
[2] http://channel9.msdn.com/wiki/default.aspx/SecurityWiki.ApplicationSecurityMethodology
[3] "Platform Security Concepts" (http://developer.symbian.com/main/documentation/books/books_files/sops/plat_sec_chap.pdf), Simon Higginson.
[4] Application Security Framework (https://www.omtp.org/Publications/Display.aspx?Id=c4ee46b6-36ae-46ae-95e2-cfb164b758b5), Open Mobile Terminal Platform
[5] Application security: Find web application security vulnerabilities during every phase of the software development lifecycle (https://h10078.www1.hp.com/cda/hpms/display/main/hpms_content.jsp?zn=bto&cp=1-11-201_4000_100__), HP center
[6] HP acquires SPI Dynamics (http://news.cnet.com/8301-10784_3-9731312-7.html), CNET news.com
[7] http://www.veracode.com/solutions, Veracode Security Static Analysis Solutions
[8] http://www.preemptive.com/application-protection.html, Application Protection
[9] http://www.parasoft.com/parasoft_security, Parasoft Application Security Solution
[10] "Continuing Business with Malware Infected Customers" (http://www.technicalinfo.net/papers/MalwareInfectedCustomers.html). Gunter Ollmann. October 2008.
[11] http://www.owasp.org
[12] http://www.webappsec.org
[13] http://msdn.microsoft.com/en-us/security/cc420639.aspx
[14] http://msdn.microsoft.com/en-gb/library/ms998408.aspx
[15] https://addons.mozilla.org/en-US/firefox/collection/webappsec
[16] http://community.citrix.com/blogs/citrite/sridharg/2008/11/17/Advantages+of+an+integrated+security+solution+for+HTML+and+XML
Application software
Application software, also known as an application, is computer software designed to help the user to perform singular or multiple related specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user. A simple, if imperfect, analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system): the power plant merely generates electricity, which is not itself of any real use until harnessed to an application like the electric light that performs a service benefiting the user.
[Figure: OpenOffice.org Writer word processor. OpenOffice.org is a popular example of open source application software.]
Terminology
In computer science, an application is a computer program designed to help people perform an activity. An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general-purpose chores), and a programming language (with which computer programs are created). Depending on the activity for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements. Some application packages offer considerable computing power by focusing on a single task, such as
word processing; others, called integrated software, offer somewhat less power but include several applications.[1] User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, and graphics and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. The delineation between system software such as operating systems and application software is not exact, however, and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separable piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player or microwave oven. The above definitions may exclude some applications that may exist on some computers in large organizations. For an alternative definition of an application, see Application Portfolio Management.
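As a tiny illustration of user-written software, an email filter of the kind just mentioned amounts to a few user-defined rules applied to each incoming message. The rule set and folder names below are hypothetical:

```python
def filter_message(subject, rules):
    """Route a message by subject keyword; the first matching rule wins."""
    for keyword, folder in rules:
        if keyword.lower() in subject.lower():
            return folder
    return "inbox"  # default when no rule matches

# A user's personal rule set (hypothetical).
rules = [("invoice", "finance"), ("newsletter", "reading")]

print(filter_message("Your invoice for March", rules))  # finance
print(filter_message("Lunch on Friday?", rules))        # inbox
```

Real mail clients express the same idea through their own rule dialogs or sieve scripts; the user supplies the rules, not the program logic.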
Entertainment software
Digital pets
Screen savers
Video games: arcade games, emulators for console games, personal computer games, console games, mobile games
Educational software
Classroom management
Entertainment software
Learning/training management software
Reference software
Sales readiness software
Survey management
List of numerical software
Physics software
Science software
List of statistical software
Neural network software
Collaborative software: e-mail, blog, wiki, reservation systems
Financial software: day trading software, banking systems, clearing systems
Simulation software
Computer simulators
Scientific simulators
Social simulators
Battlefield simulators
Emergency simulators
Vehicle simulators: flight simulators, driving simulators
Simulation games: vehicle simulation games
References
[1] Ceruzzi, Paul E. (1998). A History of Modern Computing. Cambridge, Mass.: MIT Press. ISBN 0262032554. [2] Campbell-Kelly, Martin; Aspray, William (1996). Computer: A History of the Information Machine. New York: Basic Books. ISBN 0465029906.
Software cracking
Software cracking is the modification of software to remove or disable features which are considered undesirable by the person cracking the software, usually related to protection methods: copy protection, trial/demo versions, serial numbers, hardware keys, date checks, CD checks, or software annoyances such as nag screens and adware. The distribution and use of cracked copies is illegal in almost every developed country, and there have been many lawsuits over cracking software.
History
The first software copy protection was on early Apple II, Atari 800 and Commodore 64 software. Software publishers, particularly of gaming software, have over time resorted to increasingly complex measures to try to stop unauthorized copying of their software. On the Apple II, unlike modern computers that use standardized device drivers to manage device communications, the operating system directly controlled the step motor that moves the floppy drive head, and also directly interpreted the raw data (called nibbles) read from each track to find the data sectors. This allowed complex disk-based software copy protection, by storing data on half tracks (0, 1, 2.5, 3.5, 5, 6...), quarter tracks (0, 1, 2.25, 3.75, 5, 6...), and any combination thereof. In addition, tracks did not need to be perfect rings, but could be sectioned so that sectors could be staggered across overlapping offset tracks, the most extreme version being known as spiral tracking. It was also discovered that many floppy drives did not have a fixed upper limit to head movement, and it was sometimes possible to write an additional 36th track above the normal 35 tracks. The standard Apple II copy programs could not read such protected floppy disks, since the standard DOS assumed that all disks had a uniform 35-track, 13- or 16-sector layout. Special nibble-copy programs such as Locksmith and Copy II Plus could sometimes duplicate these disks by using a reference library of known protection methods; when protected programs were cracked they would be completely stripped of the copy protection system and transferred onto a standard format disk that any normal Apple II copy program could read. One of the primary routes to hacking these early copy protections was to run a program that simulates the normal CPU operation.
The CPU simulator provides a number of extra features to the hacker, such as the ability to single-step through each processor instruction and to examine the CPU registers and modified memory spaces as the simulation runs. The Apple II provided a built-in opcode disassembler, allowing raw memory to be decoded into CPU opcodes, and this would be utilized to examine what the copy-protection was about to do next. Generally there was little to no defense available to the copy protection system, since all its secrets are made visible through the simulation. But because the simulation itself must run on the original CPU, in addition to the software being hacked, the simulation would often run extremely slowly even at maximum speed. On Atari 8-bit computers, the most common protection method was via "bad sectors". These were sectors on the disk that were intentionally unreadable by the disk drive. The software would look for these sectors when the program was loading and would stop loading if an error code was not returned when accessing these sectors. Special copy programs were available that would copy the disk and remember any bad sectors. The user could then use an application to spin the drive by constantly reading a single sector and display the drive RPM. With the disk drive top removed a small screwdriver could be used to slow the drive RPM below a certain point. Once the drive was slowed down the application could then go and write "bad sectors" where needed. When done the drive RPM was sped up back to normal and an uncracked copy was made. Of course cracking the software to expect good sectors made for readily copied disks without the need to meddle with the disk drive. As time went on more sophisticated methods were developed, but almost all involved some form of malformed disk data, such as a sector that might return different data on separate accesses due to bad data alignment. 
Products became available (from companies such as Happy Computers) which replaced the controller BIOS in Atari's "smart" drives. These upgraded drives allowed the user to make exact copies of the original program with copy protections in place on the new disk.
On the Commodore 64, several methods were used to protect software. For software distributed on ROM cartridges, subroutines were included which attempted to write over the program code. If the software was on ROM, nothing would happen, but if the software had been moved to RAM, the software would be disabled. Because of the operation of Commodore floppy drives, some write protection schemes would cause the floppy drive head to bang against the end of its rail, which could cause the drive head to become misaligned. In some cases, cracked versions of software were desirable to avoid this result. Most of the early software crackers were computer hobbyists who often formed groups that competed against each other in the cracking and spreading of software. Breaking a new copy protection scheme as quickly as possible was often regarded as an opportunity to demonstrate one's technical superiority rather than as a way to make money. Some low-skilled hobbyists would take already cracked software and edit various unencrypted strings of text in it to change the messages a game would show its player, often to something not suitable for children, and then pass the altered copy along in the pirate networks, mainly for laughs among adult users. The cracker groups of the 1980s started to advertise themselves and their skills by attaching animated screens known as crack intros to the software programs they cracked and released. Once the technical competition had expanded from the challenges of cracking to the challenges of creating visually stunning intros, the foundations for a new subculture known as the demoscene were established. The demoscene started to separate itself from the illegal "warez scene" during the 1990s and is now regarded as a completely different subculture.
Many software crackers have later grown into extremely capable software reverse engineers; the deep knowledge of assembly required to crack protections enables them to reverse engineer drivers, for instance porting binary-only drivers for Windows to drivers with source code for Linux and other free operating systems. With the rise of the Internet, software crackers developed secretive online organizations. In the latter half of the nineties, one of the most respected sources of information about "software protection reversing" was Fravia's website. Most of the well-known or "elite" cracking groups make software cracks entirely for respect in "The Scene", not profit. From there, the cracks are eventually leaked onto public Internet sites by people or crackers who use well-protected, secure FTP release archives, and are made into pirated copies and sometimes sold illegally by other parties. The Scene today is formed of small groups of very talented people, who informally compete to have the best crackers, methods of cracking, and reverse engineering.
Methods
The most common software crack is the modification of an application's binary to cause or prevent a specific key branch in the program's execution. This is accomplished by reverse engineering the compiled program code using a debugger such as SoftICE, OllyDbg, GDB, or MacsBug until the software cracker reaches the subroutine that contains the primary method of protecting the software (or by disassembling an executable file with a program such as IDA). The binary is then modified using the debugger or a hex editor in a manner that replaces a prior branching opcode with its complement or a NOP opcode so the key branch will either always execute a specific subroutine or skip over it. Almost all common software cracks are a variation of this type. Proprietary software developers are constantly developing techniques such as code obfuscation, encryption, and self-modifying code to make this modification increasingly difficult. A specific example of this technique is a crack that removes the expiration period from a time-limited trial of an application. These cracks are usually programs that patch the program executable and sometimes the .dll or .so linked to the application. Similar cracks are available for software that requires a hardware dongle. A company can also break the copy protection of programs that they have legally purchased but that are licensed to particular hardware, so that there is no risk of downtime due to hardware failure (and, of course, no need to restrict oneself to running the software on bought hardware only).
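The branch-patching technique described above can be illustrated with a toy example: locating a conditional-jump opcode in a binary image and overwriting it so the protected branch is always taken. The offsets and surrounding bytes below are invented for the sketch; on x86, 0x74 is JE (jump if equal) and 0xEB is an unconditional short JMP:

```python
def patch_branch(image, offset, original, replacement):
    """Overwrite bytes at `offset`, refusing if the image doesn't match.

    The verification step guards against patching the wrong build of the
    binary, which would corrupt an unrelated instruction.
    """
    if image[offset:offset + len(original)] != original:
        raise ValueError("unexpected bytes at offset; wrong binary version?")
    return image[:offset] + replacement + image[offset + len(original):]

# Toy "binary": three NOPs (0x90), then JE +0x10 (0x74 0x10) guarding a check.
binary = bytes([0x90, 0x90, 0x90, 0x74, 0x10, 0x90])

# Turn the conditional jump into an unconditional one (JE -> JMP), so the
# "check passed" path is taken regardless of what the check computed.
patched = patch_branch(binary, 3, b"\x74", b"\xeb")
```

Replacing the jump with NOPs instead (so the branch is never taken) is the complementary edit mentioned in the text; which one defeats the protection depends on which path the original branch guarded.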
Another method is the use of special software such as CloneCD to scan for the use of a commercial copy protection application. After discovering the software used to protect the application, another tool may be used to remove the copy protection from the software on the CD or DVD. This may enable another program such as Alcohol 120%, CloneDVD, Game Jackal, or Daemon Tools to copy the protected software to a user's hard disk. Popular commercial copy protection applications which may be scanned for include SafeDisc and StarForce.[1] In other cases, it might be possible to decompile a program in order to get access to the original source code or code on a level higher than machine code. This is often possible with scripting languages and languages utilizing JIT compilation. An example is cracking (or debugging) on the .NET platform, where one might consider manipulating CIL to achieve one's needs. Java's bytecode works in a similar fashion: there is an intermediate language before the program is compiled to run on the platform-dependent machine code. Advanced reverse engineering of protections such as SecuROM, SafeDisc or StarForce requires a cracker, or many crackers, to spend much time studying the protection, eventually finding every flaw within the protection code, and then coding their own tools to "unwrap" the protection automatically from executable (.EXE) and library (.DLL) files. There are a number of sites on the Internet that let users download cracks for popular games and applications (although at the danger of acquiring malicious software that is sometimes distributed via such sites). Although these cracks are used by legal buyers of software, they can also be used by people who have downloaded or otherwise obtained pirated software (often through P2P networks).
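The point about bytecode languages being easier to reverse than machine code can be demonstrated with Python's standard dis module: disassembling a compiled function exposes its constant pool, so a hard-coded secret is plainly visible even without the source. The check_serial function below is a contrived example of such a naive check:

```python
import dis
import io

def check_serial(serial):
    # A naive trial check: the secret is baked into the bytecode as a constant.
    return serial == "SECRET-1234"

# Disassemble the compiled function; LOAD_CONST lines leak the secret string.
listing = io.StringIO()
dis.dis(check_serial, file=listing)
print("SECRET-1234" in listing.getvalue())  # True: the serial is plainly visible
```

CIL and Java bytecode tools (ildasm, javap) give the same kind of visibility, which is why protections for these platforms lean on obfuscation rather than on the compiled form being opaque.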
Effects
The most visible and controversial effect of software cracking is the releasing of fully operable proprietary software without any copy protection. Software companies represented by the Business Software Alliance estimate and claim losses due to piracy.
Industry response
Apple Computer has begun incorporating a Trusted Platform Module into their Apple Macintosh line of computers, and making use of it in such applications as Rosetta. Parts of the operating system not fully x86-native run through the Rosetta PowerPC binary translator, which in turn requires the Trusted Platform Module for proper operation. (This description applies to the developer preview version, but the mechanism differs in the release version.) Recently, the OSx86 project has been releasing patches to circumvent this mechanism. There are also industrial solutions available, such as the Matrix Software License Protection System. Microsoft has sought to reduce common Windows-based software cracking through its Next-Generation Secure Computing Base initiative, planned for future versions of its operating system.[2]
References
[1] Gamecopyworld Howto (http://m0001.gamecopyworld.com/games/gcw_cd-backup.shtml)
[2] Evers, Joris (2005-08-30). "Microsoft's leaner approach to Vista security" (http://www.builderau.com.au/news/soa/Microsoft-s-leaner-approach-to-Vista-security/0,339028227,339205781,00.htm?feed=pt_windows_7). BuilderAU. Retrieved 2008-12-31.
License
Creative Commons Attribution-Share Alike 3.0 Unported: http://creativecommons.org/licenses/by-sa/3.0/