Version 1.8
CAESARS
September 2010
Table of Contents
1. Introduction ..............................................................................................................................1
1.1 Objective ..............................................................................................................................1
1.2 Intended Audience ...............................................................................................................1
1.3 References ............................................................................................................................2
1.4 Review of FISMA Controls and Continuous Monitoring....................................................2
1.5 CAESARS Reference Architecture Concept of Operations ................................................4
1.5.1 Definition .......................................................................................................................4
1.5.2 Operating Principles.......................................................................................................4
1.5.3 Relationship of CAESARS to CyberScope ...................................................................5
1.5.4 Cautionary Note – What Risk Scoring Can and Cannot Do ..........................................6
1.5.5 CAESARS and Risk Management.................................................................................7
1.5.6 Risk Management Process .............................................................................................8
1.6 The CAESARS Subsystems ................................................................................................9
1.7 Document Structure: The Architecture of CAESARS.......................................................10
1.7.1 CAESARS Sensor Subsystem .....................................................................................11
1.7.2 CAESARS Database/Repository Subsystem ...............................................................12
1.7.3 CAESARS Analysis/Risk Scoring Subsystem ............................................................13
1.7.4 CAESARS Presentation and Reporting Subsystem .....................................................13
2. The Sensor Subsystem ...........................................................................................................14
2.1 Goals ..................................................................................................................................14
2.1.1 Definitions....................................................................................................................14
2.1.2 Operating Environment Assumptions for the Sensor Subsystem ................................15
2.2 Solution Concept for the Sensor Subsystem ......................................................................16
2.2.1 Tools for Assessing Security Configuration Compliance ............................................19
2.2.2 Security Assessment Tools for Assessing Patch-Level Compliance ...........................23
2.2.3 Tools for Discovering and Identifying Security Vulnerabilities..................................25
2.2.4 Tools for Providing Virus Definition Identification ....................................................29
2.2.5 Other Sensors ...............................................................................................................30
2.2.6 Sensor Controller .........................................................................................................32
2.3 Recommended Technology in the Sensor Subsystem .......................................................33
5.3.1 Raw Local Data Is Always Directly Checkable by Local Administrators ..................55
5.3.2 Different Processor Types or Zones Can Be Analyzed Separately .............................56
5.3.3 Region- and Site-Specific Factors Can Be Analyzed by Local Administrators ..........56
5.3.4 Users Can Create and Store Their Own Analysis Tools for Local Use .......................56
5.4 Description of the iPost Implementation of the Analysis and Scoring Engines ................56
5.4.1 Synopsis of the iPost Scoring Methodology ................................................................57
5.4.2 Using iPost for Centralized Analyses ..........................................................................58
5.4.3 Using iPost for Decentralized Analyses ......................................................................58
5.4.4 The Scoring Methodologies for iPost Risk Components .............................................59
6. CAESARS Presentation and Reporting Subsystem ...........................................................61
6.1 Goals ..................................................................................................................................61
6.1.1 Modular Presentation, Independent of Presentation and Reporting Subsystem
Technologies ..........................................................................................................................61
6.1.2 Allow Either Convenient Dashboard Displays or Direct, Detailed View of Data ...61
6.2 Consistent Display of Enterprise-Wide Scores ..................................................................62
6.2.1 Device-Level Reporting ...............................................................................................63
6.2.2 Site-, Subsystem-, or Organizational-Level Reporting ................................................63
6.2.3 Enterprise-Level Reporting ..........................................................................................64
6.2.4 Risk Exception Reporting ............................................................................................64
6.2.5 Time-Based Reporting .................................................................................................64
7. Areas for Further Study ........................................................................................................65
8. Conclusions and Recommendations .....................................................................................67
Appendix A. NIST-Specified Security Content Automation Protocol (SCAP) ......................68
Appendix B. Addressing NIST SP 800-53 Security Control Families ....................................69
Appendix C. Addressing the Automatable Controls in the Consensus Audit Guidelines ....71
Appendix D. List of Applicable Tools ........................................................................................74
Appendix E. Sample List of SCAP Security Configuration Checklists ..................................82
Appendix F. Sample Risk Scoring Formulas ............................................................................84
Acronyms .................................................................................................................................90
List of Figures
Figure ES-1. Contextual Description of the CAESARS System .................................................... x
Figure 1. Continuous Monitoring of a System's Security Posture in the NIST-Defined System
Life Cycle and Risk Management Framework ............................................................................... 4
Figure 2. Contextual Description of the CAESARS System ........................................................ 11
Figure 3. Relationships Between Security Configuration Benchmarks, Baseline, and RBDs ..... 15
Figure 4. Contextual Description of the Sensor Subsystem ......................................................... 17
Figure 5. Contextual Description of Interfaces Between an FDCC Scanner Tool and the
Database/Repository Subsystem ................................................................................................... 21
Figure 6. Contextual Description of Interfaces Between an Authenticated Security Configuration
Scanner Tool and the Database/Repository Subsystem ................................................................ 23
Figure 7. Contextual Description of Interfaces Between an Authenticated Vulnerability and Patch
Scanner Tool and the Database/Repository Subsystem ................................................................ 24
Figure 8: Contextual Description of Interfaces Between an Unauthenticated Vulnerability
Scanner Tool and the Database/Repository Subsystem ................................................................ 25
Figure 9. Contextual Description of Interfaces Between a Web Vulnerability Scanner Tool and
the Database/Repository Subsystem ............................................................................................. 28
Figure 10. Contextual Description of Interfaces Between a Database Vulnerability Scanner Tool
and the Database/Repository Subsystem ...................................................................................... 29
Figure 11. Contextual Description of Interfaces Between an Authenticated Security
Configuration Scanner Tool and the Database Subsystem ........................................................... 30
Figure 12. Contextual Description of Sensor Controller to Control Security Assessment Tools. 32
Figure 13. Agent-Based Deployment Configuration .................................................................... 33
Figure 14. Agentless Deployment Configuration ......................................................................... 35
Figure 15. Proxy-Hybrid Deployment Configuration – Agentless ............................................... 37
Figure 16. NAC-Remote Deployment Configuration – Agent-Based .......................................... 37
Figure 17. Contextual Description of Database/Repository Subsystem ....................................... 40
Figure 18. Contextual Description of Interfaces Between the Database Subsystem and an FDCC
Scanner Tool ................................................................................................................................. 43
Figure 19. Contextual Description of Interfaces Between the Database Subsystem and an
Authenticated Security Configuration Scanner Tool .................................................................... 44
Figure 20. Contextual Description of Interfaces Between the Database Subsystem and an
Authenticated Vulnerability and Patch Scanner Tool ................................................................... 45
Figure 21. Contextual Description of Interfaces Between the Database Subsystem and an
Unauthenticated Vulnerability Scanner Tool................................................................................ 46
Figure 22. Contextual Description of Interfaces Between the Database Subsystem and a Web
Vulnerability Scanner Tool ........................................................................................................... 47
Figure 23. Contextual Description of Interfaces Between the Database Subsystem and a Database
Vulnerability Scanner Tool ........................................................................................................... 48
List of Tables
Table 1. Recommended Security Tools for Providing Data to Support Risk Scoring ................. 18
Table 2. Currently Scored iPost Components ............................................................................... 57
Table 3. Components Under Consideration for iPost Scoring ...................................................... 57
Table 4. Reportable Scoring Elements (Sample) .......................................................................... 63
Executive Summary
"Continuous monitoring is the backbone of true security."
– Vivek Kundra, Federal Chief Information Officer, Office of Management and Budget
A target-state reference architecture is proposed for security posture monitoring and risk scoring,
based on the work of three leading federal agencies: the Department of State (DOS) Security
Risk Scoring System, the Department of Treasury, Internal Revenue Service (IRS) Security
Compliance Posture Monitoring and Reporting (SCPMaR) System, and the Department of
Justice (DOJ) use of BigFix and the Cyber Security Assessment and Management (CSAM) tool
along with related security posture monitoring tools for asset discovery and management of
configuration, vulnerabilities, and patches. The target reference architecture presented in this
document – the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring (CAESARS) reference architecture – represents the essential functional components of a security
risk scoring system, independent of specific technologies, products, or vendors, and using the
combined elements of the DOS, IRS, and DOJ approaches. The objective of the CAESARS
reference architecture is to provide an abstraction of the various posture monitoring and risk
scoring systems that can be applied by other agencies seeking to use risk scoring principles in
their information security program. The reference architecture is intended to support managers
and security administrators of federal information technology (IT) systems. It may be used to
develop detailed technical and functional requirements and build a detailed design for tools that
perform similar functions of automated asset monitoring and situational awareness.
The CAESARS reference architecture and the information security governance processes that it
supports differ from those in most federal agencies in key respects. Many agencies have
automated tools to monitor and assess information security risk from factors like missing
patches, vulnerabilities, variance from approved configurations, or violations of security control
policies. Some have automated tools for remediating vulnerabilities, either automatically or
through some user action. These tools can provide current security status to network operations
centers and security operations centers, but they typically do not support prioritized remediation
actions and do not provide direct incentive for improvements in risk posture. Remedial actions
can be captured in Plans of Action and Milestones, but plans are not based on quantitative and
objective assessment of the benefits of measurably reducing risk, because the potential risk
reduction is not measured in a consistent way.
What makes CAESARS different is its integrated approach and end-to-end processes for:
Assessing the actual state of each IT asset under management
Determining the gaps between the current state and accepted security baselines
Expressing in clear, quantitative measures the relative risk of each gap or deviation
Providing simple letter grades that reflect the aggregate risk of every site and system
Ensuring that the responsibility for every system and site is correctly assigned
Providing targeted information for security and system managers to use in taking the
actions to make the most critical changes needed to reduce risk and improve their grades
completion. This assignment of responsibility is key to initiating and motivating actions that
measurably improve the security posture of the enterprise.
The subsystems interact with the database and through it with each other, as depicted in Figure
ES-1.
Figure ES-1. Contextual Description of the CAESARS System
Using an implementation based on this reference architecture, risk scoring can complement and
enhance the effectiveness of security controls that are susceptible to automated monitoring and
reporting, comparing asset configurations with expected results from an approved security
baseline. It can provide direct visualization of the effect of various scored risk elements on the
overall posture of a site, system, or organization.
Risk scoring is not a substitute for other essential operational and management controls, such as
incident response, contingency planning, and personnel security. It cannot determine which IT
systems have the most impact on agency operations, nor can it determine how various kinds of
security failures – loss of confidentiality, integrity, and availability – will affect the functions and
mission of the organization. In other words, risk scoring cannot score risks about which it has no
information. However, when used in conjunction with other sources of information, such as the
FIPS-199 security categorization and automated asset data repository and configuration
management tools, risk scoring can be an important contributor to an overall risk management
strategy. Such strategies will be considered in future versions of CAESARS.
Neither is it a substitute for the underlying governance and management processes that assign
responsibility and accountability for agency processes and results; but it does help make explicit
what those responsibilities are for the security of IT systems, and by extension it helps identify
when there are overlaps or gaps in responsibility so that they can be addressed.
The reference architecture presented here abstracts the design and implementation of various
configuration benchmarking and reporting tools to a model that can be implemented with a
variety of tools and products, depending on the existing infrastructure and system management
technologies available to federal IT managers. The reference architecture also enables IT
managers to add capabilities beyond those implemented in existing implementations, to extend
risk scoring and continuous monitoring to all IT components (network, database, server,
workstations, and other elements) in a modular, interoperable, standards-based implementation
that is scalable, flexible, and tailorable to each agency's organizational and technical
environment.
In a memorandum dated April 5, 2010, the OMB Chief Information Officer restated the
commitment to continuous monitoring:
Continuous monitoring is the backbone of true security – security that moves beyond
compliance and paperwork. The threats to our nation's information security continue to
evolve and therefore our approach to cybersecurity must confront these new realities on a
real time basis. The Department of State (DOS), the Department of Justice (DOJ), and the
Department of the Treasury (Treasury) have each developed systems that allow
monitoring in real time of certain aspects of their security enterprises. To evaluate best
practices and scale them across the government, the Office of Management and Budget
(OMB) is requesting that DOS, DOJ, and Treasury coordinate with the Department of
Homeland Security (DHS) on a comprehensive assessment of their monitoring systems.
This reference architecture summarizes the conclusions of that assessment. DHS will engage
with federal stakeholders to refine this reference architecture based on the experience of other
federal agencies with similar capabilities and to establish a federal forum for sharing capabilities
and knowledge that support the goals of risk scoring and continuous monitoring.
Acknowledgements
The information published in the reference architecture is based on the compilation of work in
support of the continuous monitoring of computing and network assets at the Department of
State, the Department of Justice, and the Department of the Treasury. The Federal Network
Security Branch of the National Cyber Security Division of the Department of Homeland
Security especially acknowledges the contributions, dedication, and assistance of the following
individuals in making this reference architecture a reality:
John Streufert, Department of State
George Moore, Department of State
Ed Roback, Department of Treasury
Duncan Hays, Internal Revenue Service
Gregg Bryant, Internal Revenue Service
LaTonya Gutrick, Internal Revenue Service
Andrew Hartridge, Internal Revenue Service
Kevin Deeley, Department of Justice
Holly Ridgeway, Department of Justice
Marty Burkhouse, Department of Justice
1. Introduction
The Federal Network Security (FNS) Branch of the National Cyber Security Division (NCSD) of
the Department of Homeland Security (DHS) is chartered with leading, directing, and supporting
the day-to-day operations for improving the effectiveness and consistency of information
systems security across government networks. FNS is also the Program Management Office for
the Information Systems Security Line of Business (ISSLOB). On April 5, 2010, the Office of
Management and Budget (OMB) tasked the DHS with assessing solutions for the continuous
monitoring of computing and network assets of the Department of State (DOS), the Department
of Justice (DOJ), and the Department of the Treasury. The results of the assessment gave rise to
a reference architecture that represents the essential architectural components of a risk scoring
system. The reference architecture is independent of specific technologies, products, or vendors.
On April 21, 2010, the Office of Management and Budget (OMB) released memorandum M-10-15, providing guidelines to the federal departments and agencies (D/A) for FY2010 Federal
Information Security Management Act (FISMA) reporting. The OMB memorandum urges D/As
to continuously monitor security-related information from across the enterprise in a manageable
and actionable way. The reference architecture defined in this document the Continuous Asset
Evaluation, Situational Awareness, and Risk Scoring (CAESARS) reference architecture is
provided to the federal D/As to help develop this important capability. Continuous monitoring of
computing and network assets requires up-to-date knowledge of the security posture of every
workstation, server, and network device, including operating system, software, patches,
vulnerabilities, and antivirus signatures. Information Security managers will use the summary
and detailed information to manage and report the security posture of their respective D/A.
1.1 Objective
The objective of this document is to describe a reference architecture that is an abstraction of a
security posture monitoring and risk scoring system, informed by the sources noted above, and
that can be applied to other agencies seeking to apply risk scoring principles to their information
security program. This reference architecture is to be vendor-neutral and product-neutral and will
incorporate the key elements of the DOS, Internal Revenue Service (IRS), and DOJ
implementations: targeted, timely, prioritized risk scoring based on frequent monitoring of
objective measures of IT system risk.
1.3 References
National Institute of Standards and Technology (NIST) Special Publication (SP) 800-37,
Revision 1, Guide for Applying the Risk Management Framework to Federal Information
Systems, February 2010.
NIST SP 800-39, DRAFT Managing Risk from Information Systems: An Organizational
Perspective, April 2008.
NIST SP 800-53, Revision 3, Recommended Security Controls for Federal Information
Systems and Organizations, August 2009.
NIST SP 800-64, Revision 2, Security Considerations in the System Development Life Cycle,
October 2008.
NIST SP 800-126, Revision 1 (Second Public Draft), The Technical Specification for the
Security Content Automation Protocol (SCAP): SCAP Version 1.1, May 2010.
NIST, Frequently Asked Questions, Continuous Monitoring, June 1, 2010.1
OMB Memorandum M-07-18, Ensuring New Acquisitions Include Common Security
Configuration, June 1, 2007.
OMB Memorandum M-08-22, Guidance on the Federal Desktop Core Configuration
(FDCC), August 11, 2008.
OMB Chief Information Officer Memorandum, Subject: Security Testing for Agency
Systems, April 5, 2010.
OMB Memorandum M-10-15, FY2010 Reporting Instructions for the Federal Information
Security Management Act and Agency Privacy Management, April 21, 2010.
Department of State, Enterprise Network Management, iPost: Implementing Continuous Risk
Monitoring at the Department of State, Version 1.4, November 2009.
The MITRE Corporation, Security Risk Scoring System (iPost) Architecture Study, Version
1.1, February 2009.
The MITRE Corporation, Security Compliance Posture Monitoring and Reporting
(SCPMaR) System: The Internal Revenue Service Solution Concept and Architecture for
Continuous Risk Monitoring, Version 1.0, February 1, 2009.
http://csrc.nist.gov/groups/SMA/fisma/documents/faq-continuous-monitoring.pdf
Commercially available automated tools, such as those described in the NIST RMF, support
situational awareness – what NIST refers to as "maintaining awareness of the security state of information systems on an ongoing basis through enhanced monitoring processes" – of the state of the security of IT networks and systems. Tools are available to monitor and assess the
information security risk from numerous factors such as missing patches, known vulnerabilities,
lack of compliance with approved configurations, or violations of security control policies. Many
if not all of these tools can provide current security status to network operations centers and
security operations centers.
What is generally lacking are tools and processes that provide information in a form and at a
level of detail that support prioritized remediation actions and that recognize improvements
commensurate with the timely, targeted reduction in risk. As a result, system security assessment
and authorization is usually based on infrequently conducted system vulnerability scans that test
security controls at the time of initial assessment but do not reflect the real state of system risk
between security control test cycles.
Faced with a broad range of residual risks, security managers and system administrators have no
reliable, objective way to prioritize actions to address these risks. Remedial actions are often
embodied in Plans of Action and Milestones (POA&M), but assigning resources to take action
is not based on rational assessment of the benefits of actually reducing risk, because the potential
risk reduction is not measurable in a consistent way.
The CAESARS reference architecture and the information security governance processes that it
supports, combined, provide support different from that available to most federal agencies in key
respects. Many agencies have automated tools to monitor and assess information security risk
from factors like missing patches, vulnerabilities, variance from approved configurations, or
violations of security control policies. Some have automated tools for remediating
vulnerabilities, either automatically or through some user action. These tools can provide current
security status to network operations centers and security operations centers, but they typically
do not support prioritized remediation actions and do not provide direct incentive for
improvements in risk posture.
What makes CAESARS different is its integrated approach and end-to-end process for:
Assessing the actual state of each information technology (IT) asset under management
Determining the gaps between the current state and accepted security baselines
Expressing in clear, quantitative measures the relative risk of each gap or deviation
Providing simple letter grades that reflect the aggregate risk of every site and system
Ensuring that the responsibility for every system and site is correctly assigned
Providing targeted information for security and system managers to take the actions to
make the most critical changes needed to reduce risk and improve their grades
Making these assessments on a continuous or nearly continuous basis is a prerequisite for
moving IT security management from isolated assessments, supporting infrequent authorization
decisions, to continuous risk management as described in current NIST guidance and OMB mandates.
The CAESARS approach provides a means of monitoring the security controls in place and
focusing staff efforts on those most likely to enhance the agency's information security posture.
The system consolidates and scores data from multiple network and computer security
monitoring applications into a single point and presents the data in an easy-to-comprehend
manner. The system allows a highly distributed workforce with varying levels of skill and
authority to recognize security issues within their scope of control. Once issues are recognized, the staff can then focus their efforts on the actions that remediate the highest-risk vulnerabilities.
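The scoring methodologies actually used are summarized later in this document (Section 5.4 and Appendix F). Purely as an illustration of the idea, the sketch below aggregates per-device findings into a site-level letter grade; the component weights and grade cut-offs are hypothetical and are not taken from iPost, SCPMaR, or any agency implementation.

```python
# Illustrative only: weights and grade thresholds are hypothetical,
# not the iPost or SCPMaR formulas (see Appendix F for sample formulas).

# Per-finding risk points by scored component (hypothetical values).
COMPONENT_WEIGHTS = {
    "vulnerability": 6.0,        # per unpatched vulnerability
    "patch": 3.0,                # per missing patch
    "configuration": 1.0,        # per setting deviating from the approved baseline
    "antivirus_signature": 2.0,  # per day the signature file is out of date
}

def device_score(findings):
    """Sum risk points for one device from its {component: count} findings."""
    return sum(COMPONENT_WEIGHTS.get(component, 0.0) * count
               for component, count in findings.items())

def site_grade(device_findings):
    """Average per-device score across a site and map it to a letter grade."""
    scores = [device_score(f) for f in device_findings]
    average = sum(scores) / len(scores) if scores else 0.0
    # Hypothetical cut-offs: lower average risk earns a better grade.
    for grade, threshold in [("A", 10), ("B", 20), ("C", 40), ("D", 70)]:
        if average <= threshold:
            return grade, average
    return "F", average

# Example: two devices at a site, one mostly clean, one with many gaps.
site = [
    {"vulnerability": 0, "patch": 1, "configuration": 3},
    {"vulnerability": 4, "patch": 6, "configuration": 20, "antivirus_signature": 5},
]
print(site_grade(site))  # ('C', 39.0) under these hypothetical weights
```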
[Figure: NIST Risk Management Framework security activities – Categorize, Select, Implement, Assess, Authorize, Monitor]
NIST SP 800-64, Rev. 2, Security Considerations in the System Development Life Cycle, October 2008.
NIST SP 800-37, Rev. 1, Guide for Applying the Risk Management Framework to Federal Information Systems,
February 2010.
Information Security Management Act and Agency Privacy Management, dated April 21, 2010.
The OMB memo includes the following:
For FY 2010, FISMA reporting for agencies through CyberScope, due November 15,
2010, will follow a three-tiered approach:
1. Data feeds directly from security management tools
2. Government-wide benchmarking on security posture
3. Agency-specific interviews
Further guidance on item 1, direct data feeds, says,
Agencies should not build separate systems for reporting. Any reporting should be a by-product of agencies' continuous monitoring programs and security management tools.
Beginning January 1, 2011, agencies will be required to report on this new information
monthly.
And it provides additional details:
The new data feeds will include summary information, not detailed information, in the
following areas for CIOs:
Inventory
Systems and Services
Hardware
Software
External Connections
Security Training
Identity Management and Access
The types of information that OMB requires to be reported through CyberScope are broader in
scope than the status of individual assets, which are the focus of the CAESARS reference
architecture. Nevertheless, the CAESARS reference architecture can directly support the
achievement of some of the OMB objectives by ensuring that the inventory, configuration, and
vulnerabilities of systems, services, hardware, and software are consistent, accurate, and
complete. A fundamental underpinning of both the CAESARS reference architecture and the
OMB reporting objectives is full situational awareness of all agency IT assets.
information. However, when used in conjunction with other sources of information, such as the
FIPS-199 security categorization and automated asset data repository and configuration
management tools, risk scoring can be an important contributor to an overall risk management
strategy. Such strategies will be considered in future versions of CAESARS.
NIST SP 800-37, Rev. 1, Guide for Applying the Risk Management Framework to Federal Information Systems: A
System Life Cycle Approach, February 2010.
5
DHS/National Cyber Security Division (NCSD)/US-CERT SwA Processes and Practices Working Group.
(https://buildsecurityin.us-cert.gov/swa/procwg.html)
6
A system-under-development is a system that has not been formally authorized by an Authorizing Official (AO).
In this step, an agency Authorizing Official (AO) reviews the security configuration assessment
report (prepared by the security analyst or ISSO) and
Formally approves the new security configuration baseline with a risk-based decision (RBD)
Step 6: Monitor Security Controls
Once the system-under-development has formally transitioned into a system-in-operation,
the CAESARS system shall perform automated assessments periodically to maintain the baseline
security posture. However, existing processes for doing this must still be followed:
If a software patch is required, the formally approved security configuration baseline
must be updated through a change control process.
If a software upgrade or configuration change is significant, then the ISSO must re-baseline the new system configuration by initiating Step 2 in the RMF process.
The CAESARS reference architecture is intended to be tailored to fit within this six-step
framework, the requirements of the NIST SP 800-53, the agencys information security program
plan, and the agencys enterprise security architecture. It can help implement the use of common
controls, as it functions as a common control for risk assessment and configuration management
across the scope of the enterprise that it covers. Future versions of CAESARS will address even
more comprehensive integration of the RMF and SwA with CAESARS operations.
within the procurement process, such as requirements development (for COTS products) or
creation and execution of SOWs and PWSs (for system development).
The modular architecture allows for multiple adjacent subsystems. A single CAESARS
Database/Repository Subsystem could interface with multiple CAESARS Analysis/Risk Scoring
Subsystems (e.g., one a COTS analysis product and another internally developed) and even offer
differing services to each. This feature would allow a single CAESARS Database/Repository
Subsystem to interface with multiple CAESARS Analysis/Risk Scoring Subsystems, for
example, at both the local (site or region) level and the enterprise-wide level.
Similarly, a single CAESARS Analysis/Risk Scoring Subsystem could provide data to multiple
CAESARS Presentation and Reporting Subsystem components.
The use of trade names and references to specific products in this document do not constitute an endorsement of
those products. Omission of other products does not imply that they are either inferior or superior to products
mentioned herein for any particular purpose.
A primary design goal of CAESARS is to minimize the need for client platforms themselves to
contain or require any specific executable components of CAESARS. The data to be gathered
and reported to CAESARS is collected by systems that are already in place on the client
platforms or that will be provided by the enterprise. The platforms of the Sensor Subsystem are
assumed to have already installed the tools that will gather the configuration and vulnerability
data that will be reported to CAESARS. For example, those platforms that run the Microsoft
Windows operating system are assumed to have already in place the native Windows security
auditing system, and the server platforms are assumed to have their own similar tools already in
place. Enterprise tools such as Active Directory likewise have their own native auditing
mechanisms. Similarly, client platforms may already have installed such tools as anti-virus, anti-spam, and anti-malware controls, either as enterprise-wide policy or through local (region- or
site-specific) selection.
CAESARS, per se, does not supply these data collection tools, nor does it require or prohibit any
specific tools. The data that they collect, however, must be transferred from the client platforms
to the CAESARS Database/Repository Subsystem on an ongoing, periodic basis. The tools for
transferring this data are specified in the CAESARS Sensor-to-Database protocol. The transfer
can follow either a push or pull process, or both, depending upon enterprise policy and local
considerations.
In the data push process, the scheduling and execution of the transfer is controlled by the local
organization, possibly by the platform itself. This allows maximum flexibility at the local level
and minimizes the possibility of interference with ongoing operations. But the push process is
also more likely to require that specific CAESARS functionalities be present on client platforms.
CAESARS components required for data push operations are part of the CAESARS Sensor-to-Database protocol, and are described in Section 2 as part of the Sensor Subsystem.
In the data pull process, the scheduling and execution of the data transfer is controlled by the
CAESARS Database/Repository Subsystem. CAESARS interrogates the platforms on an
ongoing, periodic basis, and stores the resulting data in the CAESARS Database/Repository. The
pull paradigm minimizes the need for any CAESARS components on individual client platforms,
but also provides less scheduling flexibility at the local level and may also interfere with existing
operations. The pull paradigm may also involve directly integrating the CAESARS
Database/Repository Subsystem with numerous and disparate individual platform sensors,
negating the benefits of subsystem modularity and independence. CAESARS components
required for data pull operations are part of the CAESARS Sensor-to-Database protocol, and are
described in Section 3 as part of the CAESARS Database/Repository Subsystem.
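As a concrete illustration of the two transfer paradigms, the sketch below shows a push initiated by the local platform and a pull cycle driven by the Database/Repository Subsystem. The function names, payload fields, and scheduling interval are hypothetical; the actual Sensor-to-Database exchange uses whatever transport and data formats the agency's tools and ESB provide.

```python
# Illustrative sketch only: names, payload fields, and scheduling are hypothetical
# and are not part of any defined CAESARS Sensor-to-Database protocol.
import json
import time
from datetime import datetime, timezone

def push_results(repository_url, asset_id, scan_results, send):
    """Data push: the local platform or sensor decides when to transmit its findings."""
    payload = {
        "asset_id": asset_id,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "results": scan_results,              # e.g., XCCDF/OVAL result summaries
    }
    # Transport is left to the enterprise (e.g., a secured web service on the ESB).
    send(repository_url, json.dumps(payload))

def pull_cycle(sensors, store, interval_seconds=3600, cycles=1):
    """Data pull: the Database/Repository Subsystem interrogates each sensor on a schedule."""
    for _ in range(cycles):
        for sensor in sensors:
            findings = sensor.query()          # a sensor-specific adapter hides tool differences
            store(sensor.asset_id, findings)   # persist into the Database of Findings
        time.sleep(interval_seconds)
```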
does include tools for accessing the data in order to allow routine maintenance and manual error
correction.
CAESARS also does not impose any specific requirements on the connectivity from the Sensor
Subsystem platforms to the CAESARS Database/Repository Subsystem. Such connectivity is
assumed to exist already, with sufficient capacity to handle the anticipated CAESARS data
traffic. However, CAESARS data is presumed to be sensitive, and the security of the monitoring
process is discussed in Section 3. The connectivity must allow for the confidentiality, integrity,
and availability of the data transfer.
2.1.1 Definitions
A security configuration benchmark is a set of recommended security configuration settings
specified by the Federal CIO Council in the United States Government Configuration Baseline
(USGCB)8 or by the NIST National Checklist Program, which originates from the National
Vulnerability Database (NVD)9 and is sponsored by the DHS National Cyber Security Division/US-CERT.
A risk-based decision (RBD) is an explicit policy decision by an organization to accept the risk
posed by a specific vulnerability and to forego implementing any further countermeasures
against it. In essence, an RBD is a decision to accept a specific vulnerability as part of a
benchmark. Most commonly, an organization adopts an RBD in cases where the countermeasure
either is not available at all, is available but ineffective, or is available only at a cost (in money,
time, or operational impact) that exceeds the expected damage that the vulnerability itself would
cause. Clearly, such a crucial policy decision should be made only at the Enterprise or
Mission/Business level.
A security configuration baseline is a set of security configuration benchmarks with agency-specified software patches and RBDs.
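To make these definitions concrete, the following is a minimal data-model sketch of benchmarks, agency-specified patches, RBDs, and the resulting baseline. The class and field names are illustrative only and are not drawn from SCAP or any agency schema.

```python
# Illustrative data model only; names are not taken from SCAP or any agency schema.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Setting:
    name: str            # e.g., "password_min_length"
    required: str        # required value from the benchmark

@dataclass
class Benchmark:
    source: str                                  # e.g., "USGCB" or an NCP checklist
    settings: list[Setting] = field(default_factory=list)

@dataclass
class RiskBasedDecision:
    setting_name: str    # the benchmark setting whose deviation is accepted
    rationale: str       # why the residual risk is accepted

@dataclass
class Baseline:
    benchmarks: list[Benchmark]
    required_patches: set[str]                   # agency-specified patch identifiers
    rbds: list[RiskBasedDecision]

    def non_conformances(self, observed: dict[str, str], installed_patches: set[str]):
        """Deviations from the baseline, excluding settings covered by an RBD."""
        waived = {r.setting_name for r in self.rbds}
        wrong_settings = [s for b in self.benchmarks for s in b.settings
                          if s.name not in waived and observed.get(s.name) != s.required]
        missing_patches = self.required_patches - installed_patches
        return wrong_settings, missing_patches
```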
Figure 3 illustrates the relationships between security configuration benchmarks, baseline, and
RBDs. A baseline is composed of multiple security configuration benchmarks and agency-specified software patches. Non-conformance is a deviation from the agency-specified baseline.
An RBD is a formal acceptance of deviation from the approved benchmark. It should be noted
that an RBD is made within the Analysis/Risk Scoring Subsystem. Sensors only assess the state
of security configuration settings in a computing/networking asset against a prescribed security
configuration baseline.
Figure 3. Relationships Between Security Configuration Benchmarks, Baseline, and RBDs
The National Checklist Program is available at the National Vulnerability Database website: http://nvd.nist.gov/
Diverse network domains. The enterprise may be composed of multiple network domains
that may or may not have trusted relationships.
Geographically diverse networks. A geographically diverse enterprise may be
interconnected through networks that may or may not have sufficient bandwidth to
support monitoring in continuous real time. That is, it may be necessary for some
portions of the enterprise to collect and store sensor data, then forward that data at a later
time as transmission bandwidth becomes available. Clearly, this will necessitate a time
lag between data collection and reporting.
Disconnected computing assets. An enterprise may have computing assets that are not
continuously connected to an agency's enterprise but for which the agency must account
nonetheless. Again, periodic connections must be used to transmit the data, necessitating
a time lag between collection and reporting.
Remote office. Some computing assets may be connected to the agency's network only
remotely (e.g., over public channels such as the Internet). The data being reported is
sufficiently sensitive that its confidentiality, integrity and availability must be ensured
using explicit security measures, such as a virtual private network (VPN).
The CAESARS Sensor Subsystem must be capable of assessing the security configuration
settings specified by the USGCB. The USGCB is a federal government-wide initiative that
provides guidance to agencies on what should be done to improve and maintain effective
configuration settings, focusing primarily on security. The USGCB security configuration baseline is evaluated by the Federal CIO Council's Architecture and Infrastructure Committee
(AIC) Technology Infrastructure Subcommittee (TIS) for federal civilian agency use.10 The
USGCB security configuration settings originate from the NIST National Checklist Program (NCP), to which government and industry contribute their best practices in the form of guidelines (such
as Defense Information Systems Agency [DISA] Security Technical Implementation Guides
[STIG] and Center for Internet Security [CIS] Benchmarks) and the Federal Desktop Core
Configuration (FDCC).
The CAESARS Sensor Subsystem shall be capable of assessing the security configuration
baseline in one or more of the aforementioned operating environments: a single unified network domain; diverse network domains; geographically diverse networks; disconnected computing assets; and remote offices.
The CAESARS sensor subsystem shall provide the ability to perform periodic and pre-scheduled
automated security assessments.
The CAESARS sensor subsystem shall provide the ability to perform on-demand security
assessment on a specified target asset.
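A minimal sketch of these two modes of operation is shown below. The class, method names, and scheduling mechanism are hypothetical, since a real Sensor Controller would drive the deployed assessment tools through their own interfaces.

```python
# Illustrative sensor-controller sketch; names and scheduling are hypothetical.
import sched
import time

class SensorController:
    def __init__(self, tools):
        # tools maps a name to an assessment tool adapter, e.g.,
        # {"config_scanner": ..., "patch_scanner": ...}
        self.tools = tools
        self.scheduler = sched.scheduler(time.time, time.sleep)

    def schedule_periodic(self, tool_name, targets, interval_seconds):
        """Pre-scheduled, periodic automated assessment of a set of targets."""
        def run():
            self.tools[tool_name].assess(targets)
            self.scheduler.enter(interval_seconds, 1, run)   # re-schedule the next cycle
        self.scheduler.enter(interval_seconds, 1, run)

    def assess_now(self, tool_name, target):
        """On-demand assessment of a single specified target asset."""
        return self.tools[tool_name].assess([target])

    def run(self):
        """Block and execute the scheduled assessments."""
        self.scheduler.run()
```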
components assess and collect the security configuration settings of target computing and
networking assets and then compare them to a set of configuration baselines.11 The findings are
aggregated at the CAESARS Database/Repository Subsystem to support the risk assessment and
scoring process.
Figure 4 is a contextual description of the Sensor Subsystem. In general, an enterprise has many
types of point solutions to assist in monitoring, measuring, and managing the security posture of
computing and networking assets. In Figure 4, the Targets region refers to the totality of the
organizations IT assets upon which CAESARS is expected to monitor and report. The targets
set can include platforms (desktops, laptops, palm and mobile devices, etc.), servers (web and
communications, file, database, security appliances, etc.), and communications links as well as
their associated hardware, software, and configuration components. Platforms and
communications links can be physical or virtual. In fact, the CAESARS architecture is expressly
intended to support, as much as possible, the entire range of IT assets in use throughout the
federal government.
Figure 4. Contextual Description of the Sensor Subsystem
11
In this case, "target" means target-of-evaluation (TOE). It is the agency's computing and networking assets.
Referencing the DOS's iPost risk scoring elements and the IRS SCPMaR system, the recommended security point solutions are:
For Security Configuration Compliance
These sensors are designed to verify and validate the implemented security configuration settings in a computing asset or in networking equipment.
FDCC Scanner – to measure the level of compliance to the FDCC security configuration baseline on desktop workstations and laptops
Authenticated Security Configuration Scanner – to measure the level of compliance to the agency-defined set of security configuration baselines on computing and network assets
For Patch-level Compliance
Authenticated Vulnerability and Patch Scanner – to measure the level of compliance to the NIST-defined or agency-defined set of patches and to identify and enumerate vulnerabilities associated with computing and network assets
For Vulnerability Assessment
Unauthenticated Vulnerability Scanner – to identify and enumerate vulnerabilities associated with computing and network assets
Network Vulnerability Scanner – to identify and enumerate vulnerabilities associated with computing and network assets
Database Vulnerability Scanner – to identify and enumerate vulnerabilities associated with database systems
Web Vulnerability Scanner – to identify and enumerate vulnerabilities associated with web applications
Network Configuration Management Tool – to provide configuration settings on network equipment for the security configuration scanner
Anti-virus Tool – to provide identification data on the latest virus definitions for the security configuration scanner
Table 1 is a summary of the aforementioned tools for providing security measurement data to
the CAESARS Database/Repository Subsystem.
Table 1. Recommended Security Tools for Providing Data to Support Risk Scoring
Reference Risk Scoring Elements:
Security Configuration Compliance – to measure the degree of compliance to agency-specified security configuration baselines
Patch-level Compliance – to measure the degree of compliance to agency-specified patch-level baselines
Vulnerability – to discover and identify potential vulnerabilities that may affect the agency's security posture
Virus Definition Status – to ensure that the latest virus definition has been implemented in all computing assets
Source: NIST SP 800-126. See Appendix A for more details.
12
CVE: Common Vulnerabilities and Exposures. CVE is a dictionary of publicly known information security vulnerabilities and exposures. CVE's common identifiers enable data exchange between security products and provide a baseline index point for evaluating coverage of tools and services.
13
Miuccio, B., CIS Research Report Summary: CIS Benchmark Security Configuration Eliminate 80-90% of Known
Operating System Vulnerabilities, Center for Internet Security, 2002.
14
NIST SP 800-126, The Technical Specification for the Security Content Automation Protocol (SCAP): SCAP
Version 1.0, November 2009.
15
OMB M-08-22, Guidance on the Federal Desktop Core Configuration (FDCC), August 11, 2008.
16
Assessment Results Format (ARF) and Assessment Summary Results (ASR) are emerging standards, not formally approved by NIST.
17
As of April 23, 2010. The United States Government Configuration Baseline can be downloaded from http://usgcb.nist.gov/.
18
Per OMB M-09-29, agencies are to report compliance status of FDCC using the XCCDF Results and reporting spreadsheet as defined by NIST in FDCC Compliance Reporting FAQs 2008.03.04 (http://nvd.nist.gov/fdcc/fdcc_reporting_faq_20080328.cfm).
Figure 5. Contextual Description of Interfaces Between an FDCC Scanner Tool and the Database/Repository Subsystem
UNIX-based: Sun Solaris 10, HP-UX 11, IBM AIX 6, RedHat EL 5, Debian Linux 5,
and Apple Mac OS X Snow Leopard Server Edition
Network Operating System: Cisco IOS 12
Workstation Applications:
Web Browsers: Microsoft IE 7 and 8 and Mozilla Firefox
Office Productivity Applications: Microsoft Office 2003 and 2007
Anti-Virus Systems: Symantec Norton AntiVirus 10.0 and 9.0 and McAfee
VirusScan 7.0
Virtualized Servers:
Windows-based: Microsoft Windows Server 2003 (32- and 64-bit platforms) and
Windows Server 2008 (32- and 64-bit platforms)
UNIX-based: Sun Solaris 10, HP-UX 11, IBM AIX 6, RedHat EL 5, and Debian 5
Required Inputs:
Asset inventory baseline
Agency-defined security configuration baselines (described using NIST-specified
SCAP)20
Required Outputs:
XCCDF Results format using NIST-specified or agency-specified XML schema
definition
Required Interface:
Database/Repository Subsystem ESB-provided secured WS
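As an illustration of consuming the required XCCDF Results output described above, the sketch below extracts per-rule results and records failed rules as findings. The namespace shown is the standard XCCDF 1.1 namespace; the load_finding call and the handling of results are hypothetical placeholders for the Database/Repository Subsystem's actual loading interface.

```python
# Illustrative XCCDF results parsing; assumes the XCCDF 1.1 namespace and a
# hypothetical load_finding() call on the Database/Repository side.
import xml.etree.ElementTree as ET

XCCDF_URI = "http://checklists.nist.gov/xccdf/1.1"   # a 1.2 results file would use .../xccdf/1.2
NS = {"xccdf": XCCDF_URI}

def extract_rule_results(xccdf_results_path):
    """Yield (rule_id, result) pairs from each TestResult in an XCCDF results file."""
    root = ET.parse(xccdf_results_path).getroot()
    for test_result in root.iter(f"{{{XCCDF_URI}}}TestResult"):
        for rule_result in test_result.findall("xccdf:rule-result", NS):
            result = rule_result.findtext("xccdf:result", default="unknown", namespaces=NS)
            yield rule_result.get("idref"), result

def load_non_conformances(xccdf_results_path, load_finding):
    """Record only failed rules as findings; load_finding is a hypothetical repository call."""
    for rule_id, result in extract_rule_results(xccdf_results_path):
        if result == "fail":
            load_finding(rule_id=rule_id, result=result)
```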
20
Currently, there are over 130 security configuration benchmarks available in NVD; 18 security configuration
benchmarks are written in SCAP. Agencies such as NSA and IRS are in the process of generating additional SCAP
benchmarks. In addition, SCAP-validated product vendors such as CIS, McAfee, and NetIQ are providing their own
SCAP benchmarks for agencies to tailor.
Figure 6. Contextual Description of Interfaces Between an Authenticated Security Configuration Scanner Tool and the
Database/Repository Subsystem
UNIX-based: Sun Solaris 10, HP-UX 11, IBM AIX 6, RedHat EL 5, Debian Linux 5,
and Apple Mac OS X Snow Leopard Server Edition
Network Operating System: Cisco IOS 12
Workstation Applications:
Web Browsers: Microsoft IE 7 and 8 and Mozilla Firefox
Office Productivity Applications: Microsoft Office 2003 and 2007
Anti-Virus Systems: Symantec Norton AntiVirus 10.0 and 9.0 and McAfee
VirusScan 7.0
Virtualized Servers:
Windows-based: Microsoft Windows Server 2003 (32- and 64-bit platforms) and
Windows Server 2008 (32- and 64-bit platforms)
UNIX-based: Sun Solaris 10, HP-UX 11, IBM AIX 6, RedHat EL 5, and Debian 5
Required Inputs:
Asset inventory baseline.
Agency-defined software patch baselines (described using OVAL).21
Required Outputs:
OVAL Results format using MITRE or agency-specified XML schema definition
Required Interface:
Database/Repository Subsystem ESB-provided secured WS
Figure 7. Contextual Description of Interfaces Between an Authenticated Vulnerability and Patch Scanner Tool and the
Database/Repository Subsystem
21
Currently, vendors are to describe software patches in OVAL. NIST NVD maintains a repository of software
patches in OVAL, so agencies and tools vendors can download the latest patch list.
22
Unlike the Authenticated Security Configuration Scanners and Authenticated Vulnerability and Patch Scanner,
network-based vulnerability scanners do not need defined security configuration and patch-level baselines.
23
Common Weakness Enumeration (CWE) provides a common nomenclature and identification for software security
control implementation weaknesses.
Required Interface:
Database Subsystem ESB-provided secured WS or WS Adapter
Figure 9. Contextual Description of Interfaces Between a Web Vulnerability Scanner Tool and the Database/Repository
Subsystem
Required Interface:
Database Subsystem ESB-provided secured WS or WS Adapter
Figure 10. Contextual Description of Interfaces Between a Database Vulnerability Scanner Tool and the
Database/Repository Subsystem
Figure 11. Contextual Description of Interfaces Between an Authenticated Security Configuration Scanner Tool and the
Database Subsystem
Nonetheless, the current set of sensor types highlights some general properties that continuous
monitoring components have in common, and which other component types will share,
regardless of their specific function or level of sophistication. The most obvious common
property is that all of the continuous monitoring components consist essentially of three elements that can be represented in the CAESARS reference architecture (a minimal sketch follows the list below):
1. A pre-established standard that is stored in the CAESARS Database/Repository
Subsystem in the Repository of System Configuration Baselines; the standard may be
established by the sensor itself (e.g., an anti-virus signature file and its associated update
schedule) or by an outside source (e.g., FDCC or NVD)
2. A means of interrogating each relevant platform (whether by agent, proxy, or hybrid) to
determine whether the standard is being correctly applied/maintained
3. A means of reporting discrepancies (in SCAP-compatible format) into the CAESARS
Database of Findings, plus the means to analyze, score, and report the findings to the
appropriate CAESARS users
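A minimal sketch of these three elements follows; the names are illustrative and are not part of any CAESARS specification.

```python
# Illustrative sketch of the three common elements; names are not from any CAESARS specification.
class MonitoringComponent:
    def __init__(self, standard, interrogate, report):
        self.standard = standard        # 1. pre-established standard held in the Repository of
                                        #    System Configuration Baselines (e.g., FDCC/NVD content)
        self.interrogate = interrogate  # 2. agent, proxy, or hybrid query of a platform
        self.report = report            # 3. SCAP-compatible reporting into the Database of Findings

    def check(self, platform):
        """Interrogate one platform and report any deviation from the standard."""
        observed = self.interrogate(platform)
        discrepancies = [(key, expected) for key, expected in self.standard.items()
                         if observed.get(key) != expected]
        if discrepancies:
            self.report(platform, discrepancies)
        return discrepancies
```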
With this framework in mind, it is possible to gain some insight into which other controls (such
as those in NIST SP 800-53 or other sources) might, in the future, lend themselves to
implementations of continuous monitoring. Three such possibilities are listed below.
Maintenance. It may be possible to establish hardware maintenance and configuration
management standards for each platform and place these into the CAESARS Repository
of System Configuration Baselines. Platform sensors may be able to record maintenance
actions and determine whether hardware maintenance and configuration are up-to-date.
Discrepancies exceeding pre-defined limits (e.g., preventive maintenance actions that are
more than 15 days late) would be reported to CAESARS as a finding, as would
maintenance events not corresponding to authorized actions.
Analysis of Audit Logs. The accepted best engineering practice for analyzing audit logs
calls for logs to be examined on a regular and timely basis and for recorded events to be
assigned a severity level. High-severity audit events are reported to appropriate
administrators immediately and are identified as trackable action items, requiring analysis
and closure within a reasonable time period. It may be possible to automate the audit log
analysis process, including the reporting, analysis, and resolution of high-severity audit
events, and to establish criteria such as those above as standards in the CAESARS
Repository of System Configuration Baselines. Platform sensors could determine whether
the logs are being examined regularly and whether high-severity audit events are being
closed within the required periods. Discrepancies would be reported to CAESARS as findings (a minimal sketch of such a check follows this list).
Identity and Access Management. There is little currently available in the way of
automated continuous monitoring of identity and access management. These are very
serious problem areas and research into them is extensive and continuing, especially into
ways of verifying their correctness and reducing their complexity. Future product
offerings may include automated identification of erroneous access control assignments
or of weak or compromised identity management, especially if "standards" for such things could be developed as Configuration Baselines. Should that happen, these tools
might easily direct such findings to CAESARS for reporting and remediation.
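As an illustration of the audit-log possibility above, the sketch below checks two hypothetical criteria (a daily review interval and a three-day closure deadline for high-severity events) of the kind that could be stored as standards in the Repository of System Configuration Baselines.

```python
# Illustrative only: the review-interval and closure-deadline criteria are hypothetical
# examples of standards that could be placed in the Repository of System Configuration Baselines.
from datetime import datetime, timedelta

REVIEW_INTERVAL = timedelta(days=1)          # logs must be examined at least daily
HIGH_SEVERITY_CLOSURE = timedelta(days=3)    # high-severity events must be closed within 3 days

def audit_log_findings(last_review, open_high_severity_events, now=None):
    """Return findings if log review is overdue or high-severity events remain open too long."""
    now = now or datetime.now()
    findings = []
    if now - last_review > REVIEW_INTERVAL:
        findings.append(("audit-log-review-overdue", now - last_review))
    for event_id, opened_at in open_high_severity_events:
        if now - opened_at > HIGH_SEVERITY_CLOSURE:
            findings.append(("high-severity-event-open-too-long", event_id))
    return findings
```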
Without attempting to predict specifically which other potential sensor types may be required or
available in the future, the CAESARS reference architecture is designed to include them.
access to the security kernel; thus, it has greater visibility to privileged system
configuration files.
Complex management tasking – The agent is a mission-specific software application; it may be programmed to perform complex management tasks such as checking the host platform's security configuration settings, modifying security configuration parameters, and enforcing the installation of security patches.
Low network utilization – Agents can perform a programmed assessment process
autonomously, without direct connection or instruction from the CAESARS
Database/Repository Subsystem. Hence, the security compliance data can be collected
and stored within the target computing asset until the CAESARS Database/Repository
Subsystem requests them.
Security – Endpoint agents can establish their own trusted communications channel (via Secure Shell [SSH]) to the CAESARS Database/Repository Subsystem (a minimal sketch follows below).
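Purely as an illustration of such an agent-initiated channel, the sketch below pushes a results file over SSH/SFTP using the third-party paramiko library; the host, path, and file-naming conventions are hypothetical and are not part of any CAESARS protocol.

```python
# Illustrative sketch only: host names, paths, and key locations are hypothetical,
# and this is not part of any defined CAESARS protocol. Requires the third-party
# paramiko library (pip install paramiko).
import paramiko

def push_results_over_ssh(results_file, repository_host, username, key_file,
                          remote_dir="/incoming/sensor-results"):
    """An endpoint agent pushes its collected findings over an SSH/SFTP channel."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()    # trust is established via known host keys
    client.connect(repository_host, username=username, key_filename=key_file)
    try:
        sftp = client.open_sftp()
        remote_name = results_file.rsplit("/", 1)[-1]
        sftp.put(results_file, f"{remote_dir}/{username}-{remote_name}")
        sftp.close()
    finally:
        client.close()
```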
The disadvantages of agent-based deployment configuration are:
Higher operating costs – Endpoint agents are required to be installed on target computing assets. For a large enterprise, this entails greater operating costs for installation and maintenance (i.e., patch and upgrade). This is one of the key drivers to replace the existing IRS legacy solution, Policy Checkers.
Incompatibilities between agents – An endpoint agent is a mission-specific application
that runs on top of an operating system (OS). A host may run multiple agents, such as one
for anti-virus (e.g., Symantec Anti-Virus System), one for intrusion detection (e.g., ISS
RealSecure or Check Point ZoneAlarm), and another one for configuration management
(e.g., Tivoli Configuration Manager). Introduction of an endpoint agent may disrupt or
interfere with existing agents.
Endpoint agents require configuration management – The agent is software, and so it may
require software distribution services to manage configurations. Therefore, the
CAESARS Database/Repository Subsystem requires an agent distribution program to
track and maintain agent configurations throughout an enterprise.
Installation of an agent requires permission from system owners – The system owner may
not wish to add complexity and dependency to his/her information system because
information security is not his/her primary mission.
Use Case for Agent-Based Configuration
Diverse networked domains, in which an enterprise is composed of multiple networked
domains that may or may not have trusted relationships
Geographically diverse networks, in which a geographically diverse enterprise
interconnected through networks may or may not have sufficient bandwidth to support
continuous monitoring
Disconnected computing assets, which are disconnected from an agency's enterprise even
though the agency has to account for them
3. CAESARS Database
3.1 Goals
The goals of the CAESARS Database are as follows:
connection of a platform for which there is no Asset Inventory Baseline, that platform may be
deemed unauthorized and as posing a risk.)
The initial resolution of the hardware inventory will be limited to individual platforms, since that
is the level at which CAESARS is expected to report other findings. In follow-on
implementations, the hardware inventory may contain information on required hardware versions
and updates. Thus, CAESARS could identify hardware updates and versions and identify
deviations between actual and expected configurations. This could be accomplished by
comparing the actual configuration in the Findings Database with the expected configuration in
the hardware inventory.
The software inventory contains an Asset Inventory Baseline of all authorized software items on
which CAESARS is expected to report. For any specific software configuration item (CI), the
software inventory contains, at a minimum, the full identity of the CI, including vendor name
and product identity, and the release/version of the product, to whatever resolution CAESARS is
expected to differentiate within the product.
The software inventory is related, via a unique software CI ID, to a patch/update database that
contains information on all patches and updates to all authorized software CIs. The
patches/updates database also contains information as to which CIs require periodic updates
(e.g., the date of the most recent update to the signature file of an anti-virus CI). Thus,
CAESARS can identify software patches and updates and identify deviations between actual and
expected configurations by comparing the actual software configurations in the Findings
Database with the expected configuration in the software inventory and patch/update database.
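To illustrate the comparison described above, the following sketch checks actual software configurations against an expected inventory and patch/update list; the dictionaries, CI identifiers, and field names are hypothetical stand-ins for the Findings Database, the software inventory, and the patch/update database.

```python
# Hypothetical stand-ins for the software inventory, the patch/update
# database, and the actual configurations held in the Findings Database.
expected = {
    "ci-001": {"vendor": "ExampleSoft", "product": "Widget", "version": "2.4"},
}
required_patches = {
    "ci-001": {"KB-100", "KB-101"},
}
actual = {
    "ci-001": {"version": "2.3", "patches": {"KB-100"}},
}

def deviations(ci_id):
    """Report version and patch deviations for one software CI."""
    out = []
    exp, act = expected[ci_id], actual[ci_id]
    if act["version"] != exp["version"]:
        out.append(f"{ci_id}: version {act['version']} differs from expected {exp['version']}")
    for patch in sorted(required_patches[ci_id] - act["patches"]):
        out.append(f"{ci_id}: missing required patch {patch}")
    return out

print(deviations("ci-001"))
```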
Figure 18. Contextual Description of Interfaces Between the Database Subsystem and an FDCC Scanner Tool
Figure 19. Contextual Description of Interfaces Between the Database Subsystem and an Authenticated Security
Configuration Scanner Tool
Figure 20. Contextual Description of Interfaces Between the Database Subsystem and an Authenticated Vulnerability
and Patch Scanner Tool
Figure 21. Contextual Description of Interfaces Between the Database Subsystem and an Unauthenticated Vulnerability
Scanner Tool
Figure 22. Contextual Description of Interfaces Between the Database Subsystem and a Web Vulnerability Scanner Tool
(The figure shows the Database Subsystem, comprising the Repository of System Configuration Baselines, the Database of Findings, and the latest CVE dictionary from the NVD, connected through WS Adapters and an SOA ESB/Web Services to a Web Vulnerability Scanner; assessment results are returned as vulnerability findings described using CVE IDs and an agency-provided XML schema.)
Figure 23. Contextual Description of Interfaces Between the Database Subsystem and a Database Vulnerability Scanner
Tool
complete independence of the other CAESARS components, provided only that the CAESARS
Database protocols are followed.
4.1.4 Develop Analysis Results that Are Transparent, Defensible, and Comparable
The ultimate goal of Continuous Monitoring is to evaluate individual organizations, both in
relation to each other and in compliance with an established higher-level standard. Any attempt to
synthesize and reduce raw data into composite scores must be accepted by all parties as
transparent, defensible, and comparable. Analyses are transparent when the algorithms and
methodologies are widely known and agreed to. They are defensible when they can be directly
correlated to accepted risk management goals and principles. Finally, analyses are comparable
when all systems are evaluated consistently, and variations in the scoring methodologies are
allowed when, and only when, they are justified by genuine differences in the local situation.
5.1 Goals
The goals of the CAESARS Risk Scoring Engine are as follows:
5.1.2 Common Base of Raw Data Can Be Accessed by Multiple Analytic Tools
The CAESARS Risk Scoring Engine may actually be composed of multiple, independent
analytic scoring tools, each operating independently and side-by-side, provided only that each
tool follow the inter-layer protocols. Tools may range from small, internally-developed ad hoc
components to complete COTS systems. The ability to use multiple tools independently means
that many different types of analyses can be done using the same CAESARS Database, without
having to collect new data from individual platforms for each analysis.
5.1.3 Multiple Scoring Tools Reflect Both Centralized and Decentralized Analyses
The scope of Risk Scoring may range from detailed analyses of local platform enclaves, or even
of individual platforms, up to comparative summary analyses of the entire Enterprise IT facility.
The fact that these multiple analytic tools all depend consistently upon a single CAESARS
Database ensures that, as the purview of the analysis rises to include larger views, the results of
these different analyses present a consistent and coherent picture; conclusions drawn from the
analyses can be aggregated up or individually separated out as needed.
5.2.1 Allow Common Scoring for Consistent Comparative Results Across the
Enterprise
An enterprise IT facility may contain sites with widely different locations, missions, and
configurations. For any summary comparison across sites to be meaningful, the data that
contribute to it must be carefully selected and vetted for consistent collection and interpretation.
CAESARS allows an enterprise to select summary information that can be consistently collected
at different sites and interpreted in a way that will yield the most meaningful overall
comparisons.
5.2.2 Show Which Components Are Compliant (or Not) with High-Level Policies
The single most important function of any Continuous Monitoring effort is to ensure that
Enterprise-wide (and similar high-level) policies are being consistently and uniformly followed
by all Enterprise components. Indeed, many important aspects of information assurance (e.g.,
virus control and software patch management) are quantifiable only in the simple yes or no of
compliance: the policies are either being followed or they are not. CAESARS allows enterprise-level analyses to show, for many types of compliance-based monitoring, whether these policies are being consistently followed and, if they are not, which components of the enterprise are non-compliant in which areas.
Much of the data for compliance monitoring is binary or ordinal data. This data can be
aggregated in the CAESARS Database (either by the Analysis Engine or the Scoring Engine or
even by the individual sensors themselves) and be valid for higher-level comparisons. But the
original, unaggregated data is still available in the individual sensor stores, in order to permit
detailed diagnostics and correction on individual platforms.
24
In a business application, a shadow price is the maximum price that management is willing to pay for an extra unit
of a given limited resource. For example, if a production line is already operating at its maximum 40-hour limit, the
shadow price would be the maximum price the manager would be willing to pay for operating it for an additional hour,
based on the benefits from this change. Source: http://en.wikipedia.org/wiki/Shadow_price
in their platforms' original sensor data stores. Thus, any errors in the data or misconfigurations
of the platform tools that originally collected the data can readily be identified and corrected.
5.3.4 Users Can Create and Store Their Own Analysis Tools for Local Use
The CAESARS Database/Repository Subsystem and the CAESARS Analysis/Risk Scoring
Subsystem allow all users to prepare analyses that are directly relevant to their own view.
Enterprise-level users can create summaries of the overall IT enterprise, while local users can
obtain detailed analyses of their respective portions. Those users whose needs are more
specialized (e.g., configuring firewalls or monitoring intrusion detection/response platforms) can
formulate more specialized analyses to answer these questions. Since the Analysis/Risk Scoring
Subsystem accesses the Database through standardized protocols, any analysis can be stored and
executed repeatedly over time to obtain trend information and to see the effects of changes to the
systems being monitored.
this section is intended as an illustration of risk scoring, not a comprehensive treatment of all
cases where risk scoring may be effective.
What Is Scored                   Abbreviation    Source
Vulnerability                    VUL             Tenable
Patch                            PAT             SMS
Security Compliance              SCM             Tenable
Anti-Virus                       AVR             SMS
SOE Compliance                   SOE             SMS
AD Users                         ADU             AD
AD Computers                     ADC             AD
SMS Reporting                    SMS             SMS
Vulnerability Reporting          VUR             Tenable
Security Compliance Reporting    SCR             Tenable
Table 3 presents still other prospective risk scoring components that may be implemented in the
future.
Table 3. Components Under Consideration for iPost Scoring
Prospective Component
Unapproved Software
AD Participation
Every user who has not passed the mandatory awareness training within the last 365 days
On each platform, a raw score is calculated for each component (see subsequent sections for a
discussion of each component). The raw risk score for a platform is then simply the sum of the
ten individual raw risk component scores on that platform. (For certain of the security tests, iPost
recognizes the difference between failing a security test and being unable to conduct the test
[perhaps for scheduling reasons], and in cases where the truth cannot be definitely resolved, iPost
has the option to score the test as having been passed.)
The risk score for an aggregate is obtained simply by adding the raw risk scores of all platforms
in the aggregate, then dividing this total by the number of platforms in the aggregate, and is
expressed as an average risk score per platform. A letter grade (A, B, C, D, F) is also assigned
based on the average risk per platform.
In all cases, the raw risk score is an open-ended inverse metric: zero represents a perfect score
and higher scores represent higher risk (undesirable). Thus, if new risk scoring components (such
as the three listed above) were to be added, both raw scores and average scores would rise, but
there would be no need to adjust existing risk scoring to accommodate the new components.
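The aggregation just described can be sketched briefly. The ten component abbreviations below come from the scoring table earlier in this section; the letter-grade thresholds are illustrative assumptions, since the document does not specify them.

```python
# The ten raw risk components from the table above.
COMPONENTS = ["VUL", "PAT", "SCM", "AVR", "SOE", "ADU", "ADC", "SMS", "VUR", "SCR"]

def platform_raw_score(component_scores):
    # Raw platform risk = sum of the individual raw component scores.
    return sum(component_scores.get(c, 0.0) for c in COMPONENTS)

def aggregate_average_score(platform_scores):
    # Aggregate risk = average raw score per platform in the aggregate.
    return sum(platform_scores) / len(platform_scores)

def letter_grade(average):
    # Zero is a perfect score; higher is worse.  These thresholds are
    # purely illustrative assumptions; the document does not state them.
    for grade, upper in (("A", 100.0), ("B", 200.0), ("C", 300.0), ("D", 400.0)):
        if average < upper:
            return grade
    return "F"

scores = [platform_raw_score({"VUL": 12.5, "PAT": 30.0}),
          platform_raw_score({"SCM": 4.3, "AVR": 42.0})]
avg = aggregate_average_score(scores)
print(avg, letter_grade(avg))
```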
The iPost architecture has within it separate Cleansing and Staging databases. These databases allow data to be pre-processed before scoring is done. In particular, they allow data from multiple reporting components to be harmonized to prevent duplication. In the case of the double
reporting of system vulnerabilities and software patches, the databases search for cases in which
a patch already in the DOS Patch Management program is associated with a reported
vulnerability; the vulnerability report is discarded because the situation has already been scored
as a missing patch.
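A minimal sketch of this harmonization step, with assumed field names, might look like the following.

```python
def harmonize(vuln_reports, tracked_patches):
    """Drop vulnerability reports whose associated patch is already being
    tracked (and therefore already scored) as a missing patch."""
    return [v for v in vuln_reports if v.get("associated_patch") not in tracked_patches]

reports = [
    {"cve": "CVE-2010-0001", "associated_patch": "MS10-001"},   # already scored as a missing patch
    {"cve": "CVE-2010-0002", "associated_patch": None},         # kept
]
print(harmonize(reports, tracked_patches={"MS10-001"}))
```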
iPost is expressly designed to recognize that specific platform risks will be assigned to specific
administrators as part of their responsibilities, and that in certain cases these administrators will
not have the capability to correct certain risks that would normally be under their purview.
Therefore, iPost has a detailed program of exceptions that allow specific security tests to be
removed from the scoring of specific platforms or groups of platforms.
detailed level, or to investigate region- and site-specific factors that might require modified
scoring treatment.
The current iPost documentation does not describe differing analysis methodologies for different
processor types, nor does it address specialized processors (e.g., firewalls, switches) that might
exist in different connectivity zones.
score of five points for each product missing or of the wrong version. There is no differentiation
among the 19 products.
5.4.4.6 Active Directory Users (ADU)
All user passwords are required to be changed at least every 60 days. AD checks the age of all
user passwords and, for any password that hasn't been changed in 60 days or more, assesses the
ADU score one point for each such password for every day over 60. (Inactive accounts and those
requiring two-factor authentication are not scored.) There is no differentiation of the accounts of
highly privileged users or of security administrators.
5.4.4.7 Active Directory Computers (ADC)
AD Computer account passwords are treated identically to user passwords, except that the
maximum change window is 30 days rather than 60.
5.4.4.8 SMS Reporting (SMS)
The Microsoft SMS monitors the health of the client agent installed on each Windows host. A
variety of factors are monitored, and an SMS error message is generated for each factor that fails.
Only certain messages (form 1xx or 2xx) are scored, and the number of such messages is
irrelevant. The only two relevant considerations are (i) whether one or more 1xx/2xx error
messages has been generated and (ii) the number of consecutive days that 1xx/2xx error
messages have appeared. If any 1xx/2xx error messages appear, the SMS score is assessed 100
points plus 10 points for each consecutive day that such messages have appeared. There is no
differentiation among the 1xx/2xx error messages.
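The SMS Reporting score described above reduces to a simple formula, sketched here for illustration only.

```python
def sms_score(any_1xx_or_2xx_errors, consecutive_days):
    """100 points if any 1xx/2xx error message is present, plus 10 points
    for each consecutive day such messages have appeared."""
    if not any_1xx_or_2xx_errors:
        return 0.0
    return 100.0 + 10.0 * consecutive_days

print(sms_score(True, 3))   # 130.0
print(sms_score(False, 0))  # 0.0
```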
5.4.4.9 Vulnerability Reporting (VUR)
DOS IT systems are required to have externally directed vulnerability scans every seven days. A
grace period of two weeks (i.e., two missed scans) is allowed; after that the VUR score is
assessed five points for each missed scan. If the scan was attempted but could not be completed,
no risk score is assessed.
5.4.4.10 Security Compliance Reporting (SCR)
DOS IT systems are required to have externally directed security compliance scans every 15
days. A grace period of one month (i.e., two missed scans) is allowed; after that the SCR score is
assessed five points for each missed scan. If the scan was attempted but could not be completed,
no risk score is assessed.
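Sections 5.4.4.9 and 5.4.4.10 follow the same pattern, sketched below; how partial periods are rounded into missed scans is an assumption made for the example.

```python
import math

def missed_scan_score(days_since_last_scan, scan_interval_days, grace_days,
                      scan_attempted_but_incomplete=False):
    """Five points per missed scan once the grace period has elapsed; no
    score if the scan was attempted but could not be completed.  Counting
    each full interval past the grace period as one missed scan is an
    assumption about how missed scans are tallied."""
    if scan_attempted_but_incomplete:
        return 0.0
    overdue_days = days_since_last_scan - grace_days
    if overdue_days <= 0:
        return 0.0
    missed = math.ceil(overdue_days / scan_interval_days)
    return 5.0 * missed

vur = missed_scan_score(29, scan_interval_days=7, grace_days=15)    # vulnerability scans
scr = missed_scan_score(45, scan_interval_days=15, grace_days=30)   # compliance scans
print(vur, scr)
```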
6.1 Goals
The goals for the Presentation and Reporting Subsystem of the architecture are aligned to the
design goals for the system as a whole, as listed in Section 1.5, and specifically to provide and
display risk score data to:
Motivate administrators to reduce risk
Motivate management to support risk reduction
Inspire competition
Measure and recognize improvement
In many organizations, the challenges of reducing security risk fall into three categories, which
better reporting and scoring information can inform:
What vulnerabilities represent higher risks than others?
What remediation actions will result in the most effective reduction of risk?
What action is actually required to reduce or eliminate each such vulnerability?
The focus of the Analysis/Risk Scoring Subsystem and the risk scoring algorithm is answering
the first question. The focus of the Presentation and Reporting Subsystem is answering the next
two.
6.1.2 Allow Either Convenient Dashboard Displays or Direct, Detailed View of Data
The Presentation and Reporting Subsystem must be flexible enough to allow viewing of data at
any level of granularity at which decisions and comparisons may be made. The Presentation and
Reporting Subsystem can include multiple presentation tools, with either local or enterprise-wide
perspectives, yet still provide confidence that no one type of presentation will influence or skew
any other. The Analysis/Risk Scoring Subsystem allows for presentation tools that are directly
interfaced to an existing analysis tool or tools that combine ad hoc analysis and presentation in a
single component. Such a combined analysis-plus-presentation tool must access the CAESARS
Database, and thus must follow the Database's Master Schema protocol. Likewise, the
connectivity and security comments made above for the CAESARS Analysis and Risk Scoring
Subsystem also apply to the CAESARS Presentation and Reporting Subsystem.
25
The term organization in this context represents any distinct subset of the agency or enterprise that is responsible
for operating IT systems according to agency-defined security policies. The subset may be based on geographic
location, business lines, functional or service lines, or however the agency IT enterprise is organized and operated.
CAESARS is intended to apply to any federal organization with its existing organizational structure, roles, and
responsibilities.
The minimum set of reports and displays should reflect current and historical information for
consistent comparison of devices, sites, and organizations on the basis of the agency's accepted
scoring algorithm. Based on the DOS's current and planned implementation, these would display, at
different levels of granularity, the source of the risk score and the information needed to take
action to eliminate that source.
What Is Scored
Vulnerability
Patch
Security Compliance
Anti-Virus
SOE Compliance
AD Users
User account password ages exceeding threshold (scores each user account, not each host)
AD Computers
SMS Reporting
Vulnerability Reporting
Unapproved Software
Every Add/Remove Programs string that is not on the official approved list
AD Participation
Every computer discovered on the network that is not a member of Active Directory
Every user who has not passed the mandatory awareness training within the last 365 days
The Presentation and Reporting Subsystem needs to have the ability to map every finding to a
specific device at a specific point in time, so that time-based risk scores and changes in risk over
time can be tracked. In addition, each device must be associated with a specific organization and
site, so that the responsibility for the risk score for each device can be assigned to the appropriate
organization. A full description of the data model for the risk scoring architecture is beyond the
scope of this document, but the sections that follow provide an overview of the functions that the
data model must support.
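A minimal sketch of the mappings the data model must support might look like the following; all class and field names are illustrative assumptions rather than the actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Device:
    device_id: str
    site: str
    organization: str      # the element responsible for this device's risk score

@dataclass
class Finding:
    device_id: str         # ties the finding to a specific device
    component: str         # e.g., "VUL", "PAT", "SCM"
    score: float
    observed_at: datetime  # the point in time the finding applies to

def organization_scores(findings, devices):
    """Roll device-level findings up to the responsible organizations."""
    device_org = {d.device_id: d.organization for d in devices}
    totals = {}
    for f in findings:
        org = device_org[f.device_id]
        totals[org] = totals.get(org, 0.0) + f.score
    return totals
```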
risk grade for that organizational element. It should also provide organizational grade ranks and
include the scores of that element on each risk component. For the DOS, the Risk Score Advisor
summarizes all the scoring issues for a site and provides advice on how to improve the score.
Subsequent pages of the Risk Score Advisor provide details on each risk component. In addition
to security information, the site dashboard also provides network information to support other
administrative functions of the users, and includes a site-level summary of scores, open trouble
tickets, and active alerts, as well as summary statistics of the hosts and component risk scores.
The ability to sort a large number of hosts according to their salient characteristics is key to
enabling site managers to identify where to target their efforts most effectively and to see
patterns that contribute to higher risk scores. The Presentation and Reporting Subsystem must
support multiple tailored views of the risk scoring data on demand.
26
See DHS Acquisition Directive 102-01 and supporting guidebooks for a complete taxonomy of requirements.
Appendix C shows the mapping of tools needed to conduct continuous risk analysis and scoring
to the Consensus Audit Guidelines (CAG), commonly called the 20 Critical Security Controls.27
Although the CAG Top 20 have no official status as federal policy, they represent a sound,
defensible approach to prioritizing the controls most critical to defending against known attack
methods. As the SANS description says,
Because federal agencies do not have unlimited money, current and past federal CIOs
and CISOs have agreed that the only rational way they can hope to meet these
requirements is to jointly establish a prioritized baseline of information security measures
and controls that can be continuously monitored through automated mechanisms.
The number and types of controls that are subject to continuous monitoring through automated
mechanisms are not limited to those described in this reference architecture. Additional research
is needed to integrate sensors that can automatically assess the effectiveness of a full range of
controls, such as Data Loss Prevention (SANS Control #15), and assign a risk score that reflects
the potential harm to the IT systems and the agency mission.
Research is already being conducted, and needs to continue, on how to refine the risk scoring
algorithm, to generalize it for widespread use, and to allow tailoring of the scoring to achieve
specific aims for every agency. One of the criticisms of the iPost scoring is that it does not
account in a consistent way for variances in threat28 (which includes the capability and intent to
cause harm) and impact (the adverse effect of a breach of security, through a loss of
confidentiality, integrity, or availability). For example, not all devices are equally important to
the functions of the organization; systems are rated as Low, Moderate, or High impact, but the
current risk scoring algorithm treats a specific weakness as equally risky in every environment.
This clearly does not reflect the intent of impact categorization. Impact is a function of mission
value of a particular asset as well as the capability to mitigate the effect of a security weakness
through other means. For example, a vulnerability on a device that is not reachable through the
Internet would be of less concern (other things being equal) than the same vulnerability on a device
that is directly accessible from the Internet. The NSA is working on a process to include these
elements in a dynamic assessment of the most critical weaknesses and actions to address them.
This will lead to a dynamic risk model that conforms more closely to OMB M-10-15. A full
treatment of these issues will lead to improvements in risk scoring, but the lack of such
investigation should not impede progress toward a partial solution.
29
NIST SP 800-126, The Technical Specification for the Security Content Automation Protocol (SCAP): SCAP
Version 1.0, November 2009.
30
OVAL (http://oval.mitre.org/)
31
CPE (http://cpe.mitre.org/)
32
CCE (http://cce.mitre.org/)
33
CVE (http://cve.mitre.org/)
34
CVSS (http://www.first.org/cvss)
FDCC Scanners
Authenticated
Configuration
Scanners
Authenticated
Vulnerability and
Patch Scanners
Network
Configuration
Management Tools
Unauthenticated
Vulnerability
Scanners
Web Vulnerability
Scanners
Database
Vulnerability
Scanners
System
Management
Tools
Anti-virus Tools
This appendix provides a template for mapping tools needed to conduct continuous risk analysis and scoring as described in this reference architecture. Populating the table will
enable system owners and agencies to demonstrate how the capabilities used for risk scoring also help meet the objectives of security controls from the catalogue of controls
described in NIST SP 800-53.
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
Planning (PL)
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
Operational Controls
Management Controls
Partial/ Assess
security
configuration of
laptops and
workstations
N/A
Partial/ Assess
security
configuration of
laptops and
Partial/ Assess
security
configuration of
computing assets
N/A
Partial/ Assess
security
configuration of
computing assets
Partial/ Assess
patch
compliance of
computing
assets
N/A
Yes/ Manage
configuration of
network
equipment
N/A
Partial/ Assess
patch
compliance of
computing
Yes/ Manage
configuration of
network
equipment
Yes/ Platform-specific
N/A
Yes/ Platform-specific
Yes/ Anti-virus
system
N/A
Yes/ Anti-virus
system
Technical Controls
workstations
assets
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
Partial/
Patches only
Yes/
Management
of authorized
network
devices
Yes/
Workstations
and laptops
Partial/
Microsoft
Personal
Firewall
Yes/ FDCC +
Servers
N/A
Partial/ Patch
only
Yes/ Platform-specific
N/A
Partial/
Software
vulnerabilities
Partial/
Identification
of web
services only
Partial/
Identification
of database
only
Partial/
Management
of security
configuration
policy
N/A
Partial/
Assessment
of web
services only
Partial/
Assessment
of database
only
Yes/
Management
of security
configuration
policy
N/A
N/A
N/A
N/A
Partial/
Authorized
devices
N/A
NIST SP 800-53
Security
Controls
N/A
Partial/
Identification
of both
authorized &
unauthorized
devices
Anti-virus Tools
Partial/
Identification
of software on
authorized
devices
System
Management
Tools
N/A
Partial/
Identification
of software on
authorized
devices
Partial/
Management
of authorized
network
devices
Database
Vulnerability
Scanners
Partial/
Verification of
authorized
devices
Web
Vulnerability
Scanners
Partial/
Verification of
authorized
devices
Unauthenticated
Vulnerability
Scanners
Partial/
Verification of
authorized
devices
Network
Management
Tools
Authenticated
Vulnerability &
Patch Scanners
Authenticated
Configuration
Scanners
15 automatable of
20 Critical Security Controls
FDCC Scanners
This appendix maps the 15 automatable Consensus Audit Guideline security controls to the appropriate tools referenced in Appendix B, and to the corresponding NIST SP 800-53
Security Controls.
CM-8 (a, c, d, 2, 3,
4), PM-5, PM-6
N/A
Partial/
Malware
N/A
N/A
NIST SP 800-53
Security
Controls
Anti-virus Tools
System
Management
Tools
Database
Vulnerability
Scanners
Web
Vulnerability
Scanners
Unauthenticated
Vulnerability
Scanners
Network
Management
Tools
Authenticated
Vulnerability &
Patch Scanners
Authenticated
Configuration
Scanners
FDCC Scanners
15 automatable of
20 Critical Security Controls
RA-5, SC-7(2, 4, 5,
6, 8, 11, 13, 14,
18), SC-9
5: Boundary Defense
N/A
N/A
N/A
N/A
N/A
Partial/
Security
configuration
settings only
8: Controlled Use of
Administrative Privileges
Partial/
Assessment
only
Partial/
Assessment
only
Partial/
Assessment
only
Partial/
Assessment
only
N/A
N/A
Partial/ Patch
only
N/A
Partial/
Assessment
only
Partial/ Need
NAC
N/A
N/A
Yes
Partial/ NAC
only
Partial/
Assessment
of perimeter
security only
N/A
Partial/
Assessment
only
N/A
N/A
N/A
N/A
N/A
N/A
Partial/
Assessment
only
Partial/
Assessment
only
N/A
N/A
Partial/
Assessment
of web
services only
Partial/
Assessment
of database
only
Partial/ User
accounts
management
N/A
Partial/
Assessment
of web
services only
Partial/
Assessment
of database
only
Partial/ User
accounts
management
N/A
N/A
N/A
N/A
N/A
NIST SP 800-53
Security
Controls
Anti-virus Tools
System
Management
Tools
Database
Vulnerability
Scanners
Web
Vulnerability
Scanners
Unauthenticated
Vulnerability
Scanners
Network
Management
Tools
Authenticated
Vulnerability &
Patch Scanners
Authenticated
Configuration
Scanners
FDCC Scanners
15 automatable of
20 Critical Security Controls
N/A
N/A
Yes
N/A
Yes
Yes
Yes
N/A
N/A
N/A
N/A
N/A
Yes
N/A
N/A
N/A
Yes
N/A
AC-2 (e, f, g, h, j, 2,
3, 4, 5), AC-3
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
Yes
N/A
N/A
N/A
Yes/ Platform-specific
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
Partial/ NAC
only
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
N/A
Microsoft SMS
SW
YES
DOS
Microsoft AD
SW
YES
DOS, IRS
Directory service
SW
YES
DOS, IRS
SW
YES
IRS
SW
NO
IRS
FDCC Scanner
Tripwire Enterprise
SW
YES
IRS
Host-based IDS
SW
YES
Authenticated Configuration
Scanner/FDCC Scanner
CA IT Client Manager
SW
YES
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES /
SAIR Tier 1
DOS
Presentation and
Reporting
Subsystem
Reporting
Database/
Repository
Subsystem
Analysis/Risk
Scoring
Subsystem
Relational DBMS
Network
Configuration
Management Tool
System Configuration
Management Tool
Anti-Virus Tool
Web Vulnerability
Scanner
Database
Vulnerability Scanner
Unauthenticated
Vulnerability Scanner
Function
Authenticated
Vulnerability and
Patch Scanner
Agency Basis of
Information
Authenticated
Configuration
Scanner
GSA
Schedule?
Sensor Subsystem
FDCC Scanner
Tool
Form
(HW, SW,
Network
Appliance,
etc.)
CAESARS Component
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES
SW
YES /
SAIR Tier 1
eEye Retina
SW
YES
Unauthenticated Vulnerability
Scanner
Fortinet FortiScan
SW
YES
Authenticated Configuration
Scanner/FDCC Scanner
Gideon SecureFusion
SW
YES /
SAIR Tier 1
Authenticated Configuration
Scanner/FDCC Scanner
HP SCAP Scanner
SW
NO
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES
Authenticated Configuration
Scanner/FDCC Scanner
IRS
Presentation and
Reporting
Subsystem
Reporting
Database/
Repository
Subsystem
Analysis/Risk
Scoring
Subsystem
Relational DBMS
Network
Configuration
Management Tool
System Configuration
Management Tool
Anti-Virus Tool
Web Vulnerability
Scanner
Database
Vulnerability Scanner
Unauthenticated
Vulnerability Scanner
Function
Authenticated
Vulnerability and
Patch Scanner
Agency Basis of
Information
Authenticated
Configuration
Scanner
GSA
Schedule?
Sensor Subsystem
FDCC Scanner
Tool
Form
(HW, SW,
Network
Appliance,
etc.)
CAESARS Component
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES
FDCC Scanner
SW
YES
SW
NO
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES
Authenticated Configuration
Scanner/FDCC Scanner
ThreatGuard Secutor
Magnus/Secutor Prime/S-CAT
SW
YES
nCircle Configuration
Compliance Manager
SW / Network
Appliance
YES
IRS
Presentation and
Reporting
Subsystem
Reporting
Database/
Repository
Subsystem
Analysis/Risk
Scoring
Subsystem
Relational DBMS
Network
Configuration
Management Tool
System Configuration
Management Tool
Anti-Virus Tool
Web Vulnerability
Scanner
Database
Vulnerability Scanner
Unauthenticated
Vulnerability Scanner
Function
Authenticated
Vulnerability and
Patch Scanner
Agency Basis of
Information
Authenticated
Configuration
Scanner
GSA
Schedule?
Sensor Subsystem
FDCC Scanner
Tool
Form
(HW, SW,
Network
Appliance,
etc.)
CAESARS Component
Authenticated Configuration
Scanner/FDCC Scanner
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES
SW
YES
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES
Authenticated Configuration
Scanner/FDCC Scanner
SW
YES
Unauthenticated Vulnerability
Scanner
nCircle IP360
Network
Appliance
YES
Rapid7 NeXpose
SW
YES
DbProtect
SW
YES
DOS, IRS
Presentation and
Reporting
Subsystem
Reporting
Database/
Repository
Subsystem
Analysis/Risk
Scoring
Subsystem
Relational DBMS
Network
Configuration
Management Tool
System Configuration
Management Tool
Anti-Virus Tool
Web Vulnerability
Scanner
Database
Vulnerability Scanner
Unauthenticated
Vulnerability Scanner
Authenticated Configuration
Scanner/FDCC Scanner
DOS
Function
Authenticated
Vulnerability and
Patch Scanner
Agency Basis of
Information
Authenticated
Configuration
Scanner
GSA
Schedule?
Sensor Subsystem
FDCC Scanner
Tool
Form
(HW, SW,
Network
Appliance,
etc.)
CAESARS Component
Unauthenticated Vulnerability
Scanner
Unauthenticated Vulnerability
Scanner
IRS
Unauthenticated Vulnerability
Scanner
AppDetective
SW
YES
IRS
Unauthenticated Vulnerability
Scanner
SW
YES
IRS
Tavve PReView
NO
Niksun NetOmni
NO
SW
YES
DOS, IRS
Relational Database
Management System
(RDBMS)
SW
YES
IRS
Relational Database
Management System
(RDBMS)
SW
YES
IRS
SW
YES
DOS, IRS
Directory service
SW
DOS, IRS
Operating System
Presentation and
Reporting
Subsystem
Reporting
Database/
Repository
Subsystem
Analysis/Risk
Scoring
Subsystem
Relational DBMS
Network
Configuration
Management Tool
System Configuration
Management Tool
Anti-Virus Tool
Web Vulnerability
Scanner
Database
Vulnerability Scanner
Unauthenticated
Vulnerability Scanner
Function
Authenticated
Vulnerability and
Patch Scanner
Agency Basis of
Information
Authenticated
Configuration
Scanner
GSA
Schedule?
Sensor Subsystem
FDCC Scanner
Tool
Form
(HW, SW,
Network
Appliance,
etc.)
CAESARS Component
SW
YES
SW
YES
SW
YES
SW
YES
ADO.net
SW
YES
SW
YES
DOS, IRS
Middleware
SW
YES
DOS, IRS
Middleware
iPost (PRSM)
SW
NO
DOS
GOTS
Agiliance RiskVision
SW
NO
IRS
Presentation Engine
SW
YES
IRS
Presentation/
Risk Scoring Engine
DOS, IRS
Presentation and
Reporting
Subsystem
Reporting
Database/
Repository
Subsystem
Analysis/Risk
Scoring
Subsystem
Relational DBMS
Network
Configuration
Management Tool
System Configuration
Management Tool
Anti-Virus Tool
Web Vulnerability
Scanner
Database
Vulnerability Scanner
Unauthenticated
Vulnerability Scanner
Relational Database
Management System
(RDBMS)
DOS, IRS
Function
Authenticated
Vulnerability and
Patch Scanner
Agency Basis of
Information
Authenticated
Configuration
Scanner
GSA
Schedule?
Sensor Subsystem
FDCC Scanner
Tool
Form
(HW, SW,
Network
Appliance,
etc.)
CAESARS Component
Web server
Middleware
Tool
Form
(HW, SW,
Network
Appliance,
etc.)
GSA
Schedule?
Dundas Chart
SW
YES
Agency Basis of
Information
IRS
Function
Reporting
Presentation and
Reporting
Subsystem
Database/
Repository
Subsystem
Analysis/Risk
Scoring
Subsystem
Sensor Subsystem
Relational DBMS
Network
Configuration
Management Tool
System Configuration
Management Tool
Anti-Virus Tool
Web Vulnerability
Scanner
Database
Vulnerability Scanner
Unauthenticated
Vulnerability Scanner
Authenticated
Vulnerability and
Patch Scanner
Authenticated
Configuration
Scanner
FDCC Scanner
Presentation/ Security
Information & Event Manager
(SIEM)
Source
Microsoft Windows XP
Windows Vista
Windows 7
CIS*
CIS*
Source
IRS**, CIS*
IRS**, CIS*
Sun Solaris 10
IRS**, CIS*
CIS*
HP-UX 11
IRS**, CIS*
IBM AIX 6
IRS**, CIS*
RedHat EL 5
IRS**, CIS*
Debian Linux 5
IRS**, CIS*
FreeBSD
CIS
CIS*
Source
Cisco IOS 12
CIS*
Workstation Applications
Platform
Source
Microsoft IE 7
Microsoft IE 8
CIS*
Mozilla Firefox
CIS*
CIS*
CIS*
CIS*
CIS*
CIS*
Server Applications
Platform
Source
IRS**, CIS*
CIS*
CIS*
CIS*
IRS**, CIS*
IRS**, CIS*
IRS**, CIS*
CIS*
* SCAP benchmarks by the Center for Internet Security (CIS) are proprietary and cannot be used
by other SCAP-certified Authenticated Configuration Scanners, because the CIS Benchmark
Audit Tools have OVAL components embedded into the tools. SCAP benchmarks by CIS are
currently available as part of the CIS-CAT Benchmark Audit Tool.
** SCAP benchmarks by the IRS are designed to meet the IRS's security configuration policies.
However, SCAP content such as OVAL, CCE, and CPE can be reused by other agencies with
express agreement from the IRS. SCAP content is currently being developed by the IRS; the
first formal release is scheduled for October 2010.
Vulnerability Score
Abbreviation
VUL
Description
Each vulnerability detected by the sensor is assigned a score from 1.0 to 10.0
according to the Common Vulnerability Scoring System (CVSS) and stored in
the National Vulnerability Database (NVD). To provide greater separation
between HIGH and LOW vulnerabilities (so that it takes many LOWs to equal
one HIGH vulnerability), the raw CVSS score is transformed by raising to the
power of 3 and dividing by 100.
Individual Scores
Host Score
The VUL score for each host is the sum of all VUL scores for that host.
Notes
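For illustration only, the transformation and per-host summation described above can be expressed as follows.

```python
def vul_score(cvss):
    """Cube the raw CVSS score and divide by 100."""
    return cvss ** 3 / 100.0

def host_vul_score(cvss_scores):
    """The host VUL score is the sum of the transformed scores."""
    return sum(vul_score(c) for c in cvss_scores)

print(vul_score(10.0))               # 10.0 (one HIGH vulnerability)
print(vul_score(4.0))                # 0.64 (many LOWs are needed to equal one HIGH)
print(host_vul_score([10.0, 4.0]))   # 10.64
```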
Name
Patch Score
Abbreviation
PAT
Description
Each patch that the sensor detects is not fully installed on a host is assigned
a score corresponding directly to its risk level.
Individual Scores
Risk Level    Score
Critical      10.0
High          9.0
Medium        6.0
Host Score
Notes
None
Name
Abbreviation
SCM
Description
Individual
Scores
SCM Score for a failed check = Score of the check's Security Setting Category
Host Score
Notes
Security Setting Category    Initial CVSS-Based Score    Adjusted CVSS-Based Score    Final Agency Score
File Security                10.0                        4.310                        0.8006
Group Membership             10.0                        4.310                        0.8006
System Access                10.0                        4.310                        0.8006
Registry Keys                9.0                         3.879                        0.5837
Registry Values              9.0                         3.879                        0.5837
Privilege Rights             8.0                         3.448                        0.4099
-                            7.0                         3.017                        0.2746
Event Audit                  6.0                         2.586                        0.1729
Security Log                 5.0                         2.155                        0.1001
System Log                   3.0                         1.293                        0.0216
Application Log              2.0                         0.862                        0.0064
NOTE: There is no SCM score for a check that cannot be completed. Only a FAIL is scored.
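For reference, the Final Agency Score column is numerically consistent with applying the same cube-and-divide transformation used for VUL to the Adjusted CVSS-Based Score. The sketch below reproduces the table values; this relationship is an observation about the table, not a rule stated in the text.

```python
def final_agency_score(adjusted_cvss_based_score):
    """Cube-and-divide transformation, as used for VUL."""
    return round(adjusted_cvss_based_score ** 3 / 100.0, 4)

for adjusted in (4.310, 3.879, 3.448, 3.017, 2.586, 2.155, 1.293, 0.862):
    print(adjusted, final_agency_score(adjusted))
# Reproduces the Final Agency Score column: 0.8006, 0.5837, 0.4099, 0.2746,
# 0.1729, 0.1001, 0.0216, 0.0064
```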
Name
Anti-Virus Score
Abbreviation
AVR
Description
The date on the anti-virus signature file is compared to the current date. There
is no score until a grace period of 6 days has elapsed. After six days, a score of
6.0 is assigned for each day since the last update of the signature file, starting
with a score of 42.0 on day 7.
Individual
Scores
Not applicable
Host Score
Host AVR Score = (IF Signature File Age > 6 THEN 1 ELSE 0) * 6.0 *
Signature File Age
Notes
None
Name
Abbreviation
SOE
Description
Individual
Scores
Product SOE Score = 5.0 (for each product not in approved version)
Host Score
Notes
Name
Abbreviation
UPA
Description
By comparing the date each user password was changed to the current date,
user account passwords not changed in more than 60 days are scored one point
for every day over 60, unless:
The user account is disabled, or
The user account requires two-factor authentication for login
Individual
Scores
UPA Score = (IF PW Age > 60 THEN 1 ELSE 0) * 1.0 * (PW Age - 60)
Host Score
Same
Notes
If the date of the last password reset cannot be determined, e.g., if the user
account has restrictive permissions, then a flat score of 200 is assigned.
Name
Abbreviation
CPA
Description
Individual
Scores
CPA Score = (IF PW Age > 30 THEN 1 ELSE 0) * 1.0 * (PW Age - 30)
Host Score
Same
Notes
None
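Both password-age scores follow the same pattern, sketched below for illustration; the exemption conditions are simplified to a single flag for the example.

```python
def password_age_score(pw_age_days, threshold_days, exempt=False):
    """One point per day the password age exceeds the threshold.
    'exempt' covers accounts that are not scored (e.g., disabled accounts
    or accounts requiring two-factor authentication)."""
    if exempt:
        return 0.0
    return max(0, pw_age_days - threshold_days) * 1.0

upa = password_age_score(75, threshold_days=60)   # user account: 15.0
cpa = password_age_score(40, threshold_days=30)   # computer account: 10.0
print(upa, cpa)
```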
Incomplete Reporting Scores
Name
SMS Reporting
Abbreviation
SMS
Description
SMS Reporting monitors the health of the SMS client agent that is installed on
every Windows host. This agent independently reports the following types of
information:
Hardware inventory data, e.g., installed memory and serial number
Host Score
Notes
Name
Vulnerability Reporting
Abbreviation
VUR
Description
Vulnerability Reporting measures the age of the most recent vulnerability scan
of a host. This scan is conducted from outside the host rather than by an agent.
It is therefore possible that a host may not have recent scan information for
one of the following reasons:
The host was powered off when the scan was attempted.
The host's IP address was not included in the range of the scan.
The scanner does not have sufficient permissions to conduct the scan
on that host.
The date of the most recent scan is used as a base date and compared to the
current date. There is a conceptual grace period of 2 consecutive scans.
Operationally, each host is scanned for vulnerabilities every 7 days. Therefore,
a grace period of 15 days is allowed for VUR. After this period, a score of 5.0
is assigned for each subsequent missed scan.
Individual
Scores
VUR Age
Host Score
Notes
If a host has never been scanned, e.g., the host is new on the network, the
current date is used as the base date.
Name
Abbreviation
SCR
Description
Security Compliance Reporting measures the age of the most recent security
compliance scan of a host. This scan is conducted from outside the host rather
than by an agent. It is therefore possible a host may not have recent scan
information for one of the following reasons:
The host was powered off when the scan was attempted.
The host's IP address was not included in the range of the scan.
The scanner does not have sufficient permissions to conduct the scan
on that host.
The date of the most recent scan is used as a base date and compared to the
current date. There is a conceptual grace period of 2 consecutive scans.
Operationally, each host is scanned for security compliance every 15 days.
Therefore, a grace period of 30 days is allowed for SCR. After this period, a
score of 5.0 is assigned for each subsequent missed scan.
Individual
Scores
SCR Age
Host Score
Notes
If a host has never been scanned, e.g., the host is new on the network, the
current date is used as the base date.
Acronyms
AD
Active Directory
ADC
ADU
AVR
Anti-Virus
CAG
CAESARS
CCE
CERT
CI
Configuration Item
CIO
CIS
CISO
COTS
Commercial Off-The-Shelf
CPE
CSAM
CSC
CVE
CWE
DHS
DBMS
DISA
DOJ
Department of Justice
DOS
Department of State
ESB
FDCC
FIPS
FISMA
FNS
HTML
IA
Information Assurance
ID
Identifier
ID/R
IDS
IRS
ISSO
IT
Information Technology
NIST
NAC
NSA
NVD
OMB
OS
Operating System
OVAL
PAT
Patch
P.L.
Public Law
POA&M
PWS
RAT
RBD
Risk-Based Decision
RFI
RFP
RMF
SANS
SCAP
SCM
Security Compliance
SCPMaR
SCR
SMS
SOA
SOE
SOW
Statement of Work
SP
Special Publication
SQL
SRR
SSH
Secure Shell
STIG
SwA
Software Assurance
VPN
VUL
Vulnerability
VUR
Vulnerability Reporting
WAN
WS
Web Service
XCCDF
XML