
Department of Homeland Security

Federal Network Security Branch

Continuous Asset Evaluation, Situational Awareness, and Risk Scoring Reference Architecture Report
(CAESARS)
September 2010

Version 1.8

Document No. MP100146

Table of Contents
1. Introduction ..............................................................................................................................1
1.1 Objective ..............................................................................................................................1
1.2 Intended Audience ...............................................................................................................1
1.3 References ............................................................................................................................2
1.4 Review of FISMA Controls and Continuous Monitoring....................................................2
1.5 CAESARS Reference Architecture Concept of Operations ................................................4
1.5.1 Definition .......................................................................................................................4
1.5.2 Operating Principles.......................................................................................................4
1.5.3 Relationship of CAESARS to CyberScope ...................................................................5
1.5.4 Cautionary Note: What Risk Scoring Can and Cannot Do ..............................................6
1.5.5 CAESARS and Risk Management.................................................................................7
1.5.6 Risk Management Process .............................................................................................8
1.6 The CAESARS Subsystems ................................................................................................9
1.7 Document Structure: The Architecture of CAESARS.......................................................10
1.7.1 CAESARS Sensor Subsystem .....................................................................................11
1.7.2 CAESARS Database/Repository Subsystem ...............................................................12
1.7.3 CAESARS Analysis/Risk Scoring Subsystem ............................................................13
1.7.4 CAESARS Presentation and Reporting Subsystem .....................................................13
2. The Sensor Subsystem ...........................................................................................................14
2.1 Goals ..................................................................................................................................14
2.1.1 Definitions....................................................................................................................14
2.1.2 Operating Environment Assumptions for the Sensor Subsystem ................................15
2.2 Solution Concept for the Sensor Subsystem ......................................................................16
2.2.1 Tools for Assessing Security Configuration Compliance ............................................19
2.2.2 Security Assessment Tools for Assessing Patch-Level Compliance ...........................23
2.2.3 Tools for Discovering and Identifying Security Vulnerabilities..................................25
2.2.4 Tools for Providing Virus Definition Identification ....................................................29
2.2.5 Other Sensors ...............................................................................................................30
2.2.6 Sensor Controller .........................................................................................................32
2.3 Recommended Technology in the Sensor Subsystem .......................................................33

2.3.1 Agent-Based Configuration .........................................................................................33


2.3.2 Agentless Configuration ..............................................................................................35
2.3.3 Proxy-Hybrid Configuration ........................................................................................36
2.3.4 NAC-Remote Configuration ........................................................................................37
3. CAESARS Database ..............................................................................................................39
3.1 Goals ..................................................................................................................................39
3.1.1 Raw Data Collected and Stored Completely, Accurately, Automatically, Securely,
and in a Timely Manner .........................................................................................................39
3.1.2 Modular Architecture ...................................................................................................39
3.2 Objects and Relations ........................................................................................................39
3.2.1 Repository of Asset Inventory Baselines .....................................................................40
3.2.2 Repository of System Configuration Baselines ...........................................................41
3.2.3 National Vulnerability Database ..................................................................................42
3.2.4 Database of Findings....................................................................................................42
4. CAESARS Analysis Engine ..................................................................................................49
4.1 Goals ..................................................................................................................................49
4.1.1 Modular Analysis, Independent of Scoring and Presentation Technologies ...............49
4.1.2 Make Maximal Use of Existing, In-Place, or Readily Available Sensors ...................50
4.1.3 Harmonize Data from Different Sensors .....................................................................50
4.1.4 Develop Analysis Results that Are Transparent, Defensible, and Comparable ..........50
4.2 Types of Raw Data and Data Consolidation and Reduction..............................................51
5. CAESARS Risk Scoring Engine ...........................................................................................53
5.1 Goals ..................................................................................................................................53
5.1.1 Modular Analysis, Independent of Scoring and Presentation Technologies ...............53
5.1.2 Common Base of Raw Data Can Be Accessed by Multiple Analytic Tools ...............54
5.1.3 Multiple Scoring Tools Reflect Both Centralized and Decentralized Analyses ..........54
5.2 Centralized Scoring that Is Performed Enterprise-Wide ...................................................54
5.2.1 Allow Common Scoring for Consistent Comparative Results Across the Enterprise .54
5.2.2 Show Which Components Are Compliant (or Not) with High-Level Policies ...........54
5.2.3 Permit Decomposition of Scores into Shadow Prices..................................................55
5.2.4 Show Which Components Are Subject to Specific Threats/Attacks ...........................55
5.2.5 Allow Controlled Enterprise-Wide Change to Reflect Evolving Strategies ................55
5.3 Decentralized Analyses that Are Unique to Specific Enterprise Subsets ..........................55

5.3.1 Raw Local Data Is Always Directly Checkable by Local Administrators ..................55
5.3.2 Different Processor Types or Zones Can Be Analyzed Separately .............................56
5.3.3 Region- and Site-Specific Factors Can Be Analyzed by Local Administrators ..........56
5.3.4 Users Can Create and Store Their Own Analysis Tools for Local Use .......................56
5.4 Description of the iPost Implementation of the Analysis and Scoring Engines ................56
5.4.1 Synopsis of the iPost Scoring Methodology ................................................................57
5.4.2 Using iPost for Centralized Analyses ..........................................................................58
5.4.3 Using iPost for Decentralized Analyses ......................................................................58
5.4.4 The Scoring Methodologies for iPost Risk Components .............................................59
6. CAESARS Presentation and Reporting Subsystem ...........................................................61
6.1 Goals ..................................................................................................................................61
6.1.1 Modular Presentation, Independent of Presentation and Reporting Subsystem
Technologies ..........................................................................................................................61
6.1.2 Allow Either Convenient Dashboard Displays or Direct, Detailed View of Data ...61
6.2 Consistent Display of Enterprise-Wide Scores ..................................................................62
6.2.1 Device-Level Reporting ...............................................................................................63
6.2.2 Site-, Subsystem-, or Organizational-Level Reporting ................................................63
6.2.3 Enterprise-Level Reporting ..........................................................................................64
6.2.4 Risk Exception Reporting ............................................................................................64
6.2.5 Time-Based Reporting .................................................................................................64
7. Areas for Further Study ........................................................................................................65
8. Conclusions and Recommendations .....................................................................................67
Appendix A. NIST-Specified Security Content Automation Protocol (SCAP) ......................68
Appendix B. Addressing NIST SP 800-53 Security Control Families ....................................69
Appendix C. Addressing the Automatable Controls in the Consensus Audit Guidelines ....71
Appendix D. List of Applicable Tools ........................................................................................74
Appendix E. Sample List of SCAP Security Configuration Checklists ..................................82
Appendix F. Sample Risk Scoring Formulas ............................................................................84
Acronyms .....................................................................................................................................90

List of Figures
Figure ES-1. Contextual Description of the CAESARS System .................................................... x
Figure 1. Continuous Monitoring of a System's Security Posture in the NIST-Defined System Life Cycle and Risk Management Framework ............................................................................... 4
Figure 2. Contextual Description of the CAESARS System ........................................................ 11
Figure 3. Relationships Between Security Configuration Benchmarks, Baseline, and RBDs ..... 15
Figure 4. Contextual Description of the Sensor Subsystem ......................................................... 17
Figure 5. Contextual Description of Interfaces Between an FDCC Scanner Tool and the
Database/Repository Subsystem ................................................................................... 21
Figure 6. Contextual Description of Interfaces Between an Authenticated Security Configuration
Scanner Tool and the Database/Repository Subsystem ................................................................ 23
Figure 7. Contextual Description of Interfaces Between an Authenticated Vulnerability and Patch
Scanner Tool and the Database/Repository Subsystem ................................................................ 24
Figure 8. Contextual Description of Interfaces Between an Unauthenticated Vulnerability
Scanner Tool and the Database/Repository Subsystem ................................................................ 25
Figure 9. Contextual Description of Interfaces Between a Web Vulnerability Scanner Tool and
the Database/Repository Subsystem ............................................................................................. 28
Figure 10. Contextual Description of Interfaces Between a Database Vulnerability Scanner Tool
and the Database/Repository Subsystem ...................................................................................... 29
Figure 11. Contextual Description of Interfaces Between an Authenticated Security
Configuration Scanner Tool and the Database Subsystem ........................................................... 30
Figure 12. Contextual Description of Sensor Controller to Control Security Assessment Tools. 32
Figure 13. Agent-Based Deployment Configuration .................................................................... 33
Figure 14. Agentless Deployment Configuration ......................................................................... 35
Figure 15. Proxy-Hybrid Deployment Configuration (Agentless) ............................................... 37
Figure 16. NAC-Remote Deployment Configuration (Agent-Based) .......................................... 37
Figure 17. Contextual Description of Database/Repository Subsystem ....................................... 40
Figure 18. Contextual Description of Interfaces Between the Database Subsystem and an FDCC
Scanner Tool ................................................................................................................................. 43
Figure 19. Contextual Description of Interfaces Between the Database Subsystem and an
Authenticated Security Configuration Scanner Tool .................................................................... 44
Figure 20. Contextual Description of Interfaces Between the Database Subsystem and an
Authenticated Vulnerability and Patch Scanner Tool ................................................................... 45

Figure 21. Contextual Description of Interfaces Between the Database Subsystem and an
Unauthenticated Vulnerability Scanner Tool................................................................................ 46
Figure 22. Contextual Description of Interfaces Between the Database Subsystem and a Web
Vulnerability Scanner Tool ........................................................................................................... 47
Figure 23. Contextual Description of Interfaces Between the Database Subsystem and a Database
Vulnerability Scanner Tool ........................................................................................................... 48

List of Tables
Table 1. Recommended Security Tools for Providing Data to Support Risk Scoring ................. 18
Table 2. Currently Scored iPost Components ............................................................................... 57
Table 3. Components Under Consideration for iPost Scoring ...................................................... 57
Table 4. Reportable Scoring Elements (Sample) .......................................................................... 63


Executive Summary
"Continuous monitoring is the backbone of true security."
- Vivek Kundra, Federal Chief Information Officer, Office of Management and Budget
A target-state reference architecture is proposed for security posture monitoring and risk scoring, based on the work of three leading federal agencies: the Department of State (DOS) Security Risk Scoring System; the Department of the Treasury, Internal Revenue Service (IRS) Security Compliance Posture Monitoring and Reporting (SCPMaR) System; and the Department of Justice (DOJ) use of BigFix and the Cyber Security Assessment and Management (CSAM) tool, along with related security posture monitoring tools for asset discovery and management of configuration, vulnerabilities, and patches. The target reference architecture presented in this document, the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring (CAESARS) reference architecture, represents the essential functional components of a security risk scoring system, independent of specific technologies, products, or vendors, and using the combined elements of the DOS, IRS, and DOJ approaches. The objective of the CAESARS reference architecture is to provide an abstraction of the various posture monitoring and risk scoring systems that can be applied by other agencies seeking to use risk scoring principles in their information security programs. The reference architecture is intended to support managers and security administrators of federal information technology (IT) systems. It may be used to develop detailed technical and functional requirements and to build a detailed design for tools that perform similar functions of automated asset monitoring and situational awareness.
The CAESARS reference architecture and the information security governance processes that it
supports differ from those in most federal agencies in key respects. Many agencies have
automated tools to monitor and assess information security risk from factors like missing
patches, vulnerabilities, variance from approved configurations, or violations of security control
policies. Some have automated tools for remediating vulnerabilities, either automatically or
through some user action. These tools can provide current security status to network operations
centers and security operations centers, but they typically do not support prioritized remediation
actions and do not provide direct incentive for improvements in risk posture. Remedial actions
can be captured in Plans of Action and Milestones, but plans are not based on quantitative and
objective assessment of the benefits of measurably reducing risk, because the potential risk
reduction is not measured in a consistent way.
What makes CAESARS different is its integrated approach and end-to-end processes for:
- Assessing the actual state of each IT asset under management
- Determining the gaps between the current state and accepted security baselines
- Expressing in clear, quantitative measures the relative risk of each gap or deviation
- Providing simple letter grades that reflect the aggregate risk of every site and system
- Ensuring that the responsibility for every system and site is correctly assigned
- Providing targeted information for security and system managers to use in taking the actions to make the most critical changes needed to reduce risk and improve their grades

Making these assessments on a continuous or nearly continuous basis is a prerequisite for moving IT security management from isolated assessments, supporting infrequent authorization decisions, to continuous risk management as described in current guidance from the National Institute of Standards and Technology (NIST) and mandates from the Office of Management and Budget (OMB).
The risk scoring and continuous monitoring capabilities that were studied for this document
represent operational examples of a more generalized capability that could provide significant
value to most or all federal agencies, or to any IT enterprise. What makes risk scoring different
from compliance posture reporting is providing information at the right level of detail so that
managers and system administrators can understand the state of the IT systems for which they
are responsible, the specific gaps between actual and desired states of security protections, and
the numerical value of every remediation action that can be taken to close the gaps. This enables
responsible managers to identify the actions that will result in the highest added value in bringing
their systems into compliance with standards, mandates, and security policies.
The reference architecture consists of four interconnected architectural subsystems, the functions
and services within those subsystems, and expected interactions between subsystems.
The four subsystems are:
- Sensor Subsystem
- Database/Repository Subsystem
- Analysis/Risk Scoring Subsystem
- Presentation and Reporting Subsystem
The fundamental building blocks of all analysis and reporting are the individual devices that
constitute the assets of the information system enterprise. An underlying assumption of risk
scoring is that the total risk to an organization is an aggregate of the risks associated with every
device in the system. The risk scoring system answers the following questions:
- What are the devices that constitute the organization's IT assets?
- What is the current state of the security controls (a subset of the technical controls) associated with those assets?
- How does their state deviate from the accepted baseline of security controls and configurations?
- What is the relative severity of the deviations, expressed as a numerical value?
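To make these questions concrete, the following minimal sketch (in Python, with invented field names; CAESARS does not prescribe any particular data model) shows a per-device record of scored deviations and the enterprise total computed as their aggregate:

    from dataclasses import dataclass, field

    @dataclass
    class Deviation:
        check_id: str     # e.g., a configuration check or vulnerability identifier reported by a sensor
        severity: float   # numerical risk value assigned to this gap

    @dataclass
    class Device:
        asset_id: str
        owner_site: str   # organization/site responsible for remediation
        deviations: list[Deviation] = field(default_factory=list)

        def risk_score(self) -> float:
            # A device's risk is the sum of the severities of its open deviations.
            return sum(d.severity for d in self.deviations)

    def enterprise_risk(devices: list[Device]) -> float:
        # Total organizational risk as the aggregate of per-device risk.
        return sum(d.risk_score() for d in devices)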
At its core, CAESARS is a decision support system. The existing implementations on which it is
based have proven to be effective means of visualizing technical details about the security
posture of networked devices in order to derive actionable information that is prioritized, timely,
and tailored.
The core of the reference architecture is the database, which contains the asset status as reported
by the sensors as well as the baseline configurations against which the asset status is compared,
the rule sets for scrubbing data for consistency, the algorithms for computing the risk score of
each asset, and the data that identifies, for each asset, the responsible organization, site, and/or
individual who will initiate the organization's remediation procedure and monitor its completion. This assignment of responsibility is key to initiating and motivating actions that measurably improve the security posture of the enterprise.
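As a hedged illustration of this comparison function (not a prescribed schema; the setting names below are invented for the example), the repository's central operation can be pictured as a diff of sensor-reported status against the approved baseline:

    def find_deviations(reported: dict[str, str], baseline: dict[str, str]) -> dict:
        # Return each baseline setting whose reported value differs from the approved value.
        return {
            setting: {"expected": expected, "actual": reported.get(setting)}
            for setting, expected in baseline.items()
            if reported.get(setting) != expected
        }

    baseline = {"password_min_length": "12", "host_firewall": "enabled"}
    reported = {"password_min_length": "8", "host_firewall": "enabled"}
    print(find_deviations(reported, baseline))
    # {'password_min_length': {'expected': '12', 'actual': '8'}}

The resulting deviations are what the scoring algorithms operate on, and the responsibility data determines who is notified to remediate them.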
The subsystems interact with the database and through it with each other, as depicted in Figure
ES-1.
Figure ES-1. Contextual Description of the CAESARS System

Using an implementation based on this reference architecture, risk scoring can complement and
enhance the effectiveness of security controls that are susceptible to automated monitoring and
reporting, comparing asset configurations with expected results from an approved security
baseline. It can provide direct visualization of the effect of various scored risk elements on the
overall posture of a site, system, or organization.
Risk scoring is not a substitute for other essential operational and management controls, such as
incident response, contingency planning, and personnel security. It cannot determine which IT
systems have the most impact on agency operations, nor can it determine how various kinds of
security failures (loss of confidentiality, integrity, and availability) will affect the functions and
mission of the organization. In other words, risk scoring cannot score risks about which it has no
information. However, when used in conjunction with other sources of information, such as the
FIPS-199 security categorization and automated asset data repository and configuration
management tools, risk scoring can be an important contributor to an overall risk management
strategy. Such strategies will be considered in future versions of CAESARS.
Neither is it a substitute for the underlying governance and management processes that assign
responsibility and accountability for agency processes and results; but it does help make explicit
what those responsibilities are for the security of IT systems, and by extension it helps identify
when there are overlaps or gaps in responsibility so that they can be addressed.
The reference architecture presented here abstracts the design and implementation of various
configuration benchmarking and reporting tools to a model that can be implemented with a
variety of tools and products, depending on the existing infrastructure and system management
technologies available to federal IT managers. The reference architecture also enables IT
managers to add capabilities beyond those implemented in existing implementations, to extend
risk scoring and continuous monitoring to all IT components (network, database, server,
workstations, and other elements) in a modular, interoperable, standards-based implementation
that is scalable, flexible, and tailorable to each agency's organizational and technical environment.
In a memorandum dated April 5, 2010, the OMB Chief Information Officer restated the
commitment to continuous monitoring:
Continuous monitoring is the backbone of true security: security that moves beyond compliance and paperwork. The threats to our nation's information security continue to evolve and therefore our approach to cybersecurity must confront these new realities on a real-time basis. The Department of State (DOS), the Department of Justice (DOJ), and the Department of the Treasury (Treasury) have each developed systems that allow monitoring in real time of certain aspects of their security enterprises. To evaluate best practices and scale them across the government, the Office of Management and Budget (OMB) is requesting that DOS, DOJ, and Treasury coordinate with the Department of Homeland Security (DHS) on a comprehensive assessment of their monitoring systems.
This reference architecture summarizes the conclusions of that assessment. DHS will engage
with federal stakeholders to refine this reference architecture based on the experience of other
federal agencies with similar capabilities and to establish a federal forum for sharing capabilities
and knowledge that support the goals of risk scoring and continuous monitoring.


Acknowledgements
The information published in the reference architecture is based on the compilation of work in
support of the continuous monitoring of computing and network assets at the Department of
State, the Department of Justice, and the Department of the Treasury. The Federal Network
Security Branch of the National Cyber Security Division of the Department of Homeland
Security especially acknowledges the contributions, dedication, and assistance of the following
individuals in making this reference architecture a reality:
John Streufert, Department of State
George Moore, Department of State
Ed Roback, Department of Treasury
Duncan Hays, Internal Revenue Service
Gregg Bryant, Internal Revenue Service
LaTonya Gutrick, Internal Revenue Service
Andrew Hartridge, Internal Revenue Service
Kevin Deeley, Department of Justice
Holly Ridgeway, Department of Justice
Marty Burkhouse, Department of Justice


1. Introduction
The Federal Network Security (FNS) Branch of the National Cyber Security Division (NCSD) of
the Department of Homeland Security (DHS) is chartered with leading, directing, and supporting
the day-to-day operations for improving the effectiveness and consistency of information
systems security across government networks. FNS is also the Program Management Office for
the Information Systems Security Line of Business (ISSLOB). On April 5, 2010, the Office of
Management and Budget (OMB) tasked the DHS with assessing solutions for the continuous
monitoring of computing and network assets of the Department of State (DOS), the Department
of Justice (DOJ), and the Department of the Treasury. The results of the assessment gave rise to
a reference architecture that represents the essential architectural components of a risk scoring
system. The reference architecture is independent of specific technologies, products, or vendors.
On April 21, 2010, the Office of Management and Budget (OMB) released Memorandum M-10-15, providing guidelines to the federal departments and agencies (D/A) for FY2010 Federal Information Security Management Act (FISMA) reporting. The OMB memorandum urges D/As to continuously monitor security-related information from across the enterprise in a manageable and actionable way. The reference architecture defined in this document, the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring (CAESARS) reference architecture, is
computing and network assets requires up-to-date knowledge of the security posture of every
workstation, server, and network device, including operating system, software, patches,
vulnerabilities, and antivirus signatures. Information Security managers will use the summary
and detailed information to manage and report the security posture of their respective D/A.

1.1 Objective
The objective of this document is to describe a reference architecture that is an abstraction of a
security posture monitoring and risk scoring system, informed by the sources noted above, and
that can be applied by other agencies seeking to apply risk scoring principles to their information
security program. This reference architecture is to be vendor-neutral and product-neutral and will
incorporate the key elements of the DOS, Internal Revenue Service (IRS), and DOJ
implementations: targeted, timely, prioritized risk scoring based on frequent monitoring of
objective measures of IT system risk.

1.2 Intended Audience


This reference architecture is intended for use by managers and security administrators of federal
IT systems. It is intended as a guide to developing individual agency programs and processes that
support continuous monitoring of IT assets for compliance with baseline configuration standards
for patches, versions, configuration settings, and other conditions that affect the security risk
posture of a device, system, or enterprise.


1.3 References
- National Institute of Standards and Technology (NIST) Special Publication (SP) 800-37, Revision 1, Guide for Applying the Risk Management Framework to Federal Information Systems, February 2010.
- NIST SP 800-39, DRAFT Managing Risk from Information Systems: An Organizational Perspective, April 2008.
- NIST SP 800-53, Revision 3, Recommended Security Controls for Federal Information Systems and Organizations, August 2009.
- NIST SP 800-64, Revision 2, Security Considerations in the System Development Life Cycle, October 2008.
- NIST SP 800-126, Revision 1 (Second Public Draft), The Technical Specification for the Security Content Automation Protocol (SCAP): SCAP Version 1.1, May 2010.
- NIST, Frequently Asked Questions, Continuous Monitoring, June 1, 2010 (http://csrc.nist.gov/groups/SMA/fisma/documents/faq-continuous-monitoring.pdf).
- OMB Memorandum M-07-18, Ensuring New Acquisitions Include Common Security Configuration, June 1, 2007.
- OMB Memorandum M-08-22, Guidance on the Federal Desktop Core Configuration (FDCC), August 11, 2008.
- OMB Chief Information Officer Memorandum, Subject: Security Testing for Agency Systems, April 5, 2010.
- OMB Memorandum M-10-15, FY2010 Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management, April 21, 2010.
- Department of State, Enterprise Network Management, iPost: Implementing Continuous Risk Monitoring at the Department of State, Version 1.4, November 2009.
- The MITRE Corporation, Security Risk Scoring System (iPost) Architecture Study, Version 1.1, February 2009.
- The MITRE Corporation, Security Compliance Posture Monitoring and Reporting (SCPMaR) System: The Internal Revenue Service Solution Concept and Architecture for Continuous Risk Monitoring, Version 1.0, February 1, 2009.

1.4 Review of FISMA Controls and Continuous Monitoring


Under the Federal Information Security Management Act of 2002 (FISMA), NIST is responsible for developing information security standards and guidelines, including minimum requirements for federal information systems. NIST SP 800-37, Revision 1, Guide for Applying the Risk Management Framework to Federal Information Systems, establishes a Risk Management Framework (RMF) that promotes the concept of near-real-time risk management through the implementation of robust continuous monitoring processes. The RMF encourages the use of automation and automated support tools to provide senior leaders with the information necessary to make credible, risk-based decisions with regard to the organizational information systems supporting their core missions and business functions.


Commercially available automated tools, such as those described in the NIST RMF, support situational awareness (what NIST refers to as "maintaining awareness of the security state of information systems on an ongoing basis through enhanced monitoring processes") of the state of the security of IT networks and systems. Tools are available to monitor and assess the
information security risk from numerous factors such as missing patches, known vulnerabilities,
lack of compliance with approved configurations, or violations of security control policies. Many
if not all of these tools can provide current security status to network operations centers and
security operations centers.
What is generally lacking are tools and processes that provide information in a form and at a
level of detail that support prioritized remediation actions and that recognize improvements
commensurate with the timely, targeted reduction in risk. As a result, system security assessment
and authorization is usually based on infrequently conducted system vulnerability scans that test
security controls at the time of initial assessment but do not reflect the real state of system risk
between security control test cycles.
Faced with a broad range of residual risks, security managers and system administrators have no
reliable, objective way to prioritize actions to address these risks. Remedial actions are often
embodied in Plans of Action and Milestones (POA&M), but assigning resources to take action
is not based on rational assessment of the benefits of actually reducing risk, because the potential
risk reduction is not measurable in a consistent way.
The CAESARS reference architecture and the information security governance processes that it
supports, combined, provide support different from that available to most federal agencies in key
respects. Many agencies have automated tools to monitor and assess information security risk
from factors like missing patches, vulnerabilities, variance from approved configurations, or
violations of security control policies. Some have automated tools for remediating
vulnerabilities, either automatically or through some user action. These tools can provide current
security status to network operations centers and security operations centers, but they typically
do not support prioritized remediation actions and do not provide direct incentive for
improvements in risk posture.
What makes CAESARS different is its integrated approach and end-to-end process for:
- Assessing the actual state of each information technology (IT) asset under management
- Determining the gaps between the current state and accepted security baselines
- Expressing in clear, quantitative measures the relative risk of each gap or deviation
- Providing simple letter grades that reflect the aggregate risk of every site and system
- Ensuring that the responsibility for every system and site is correctly assigned
- Providing targeted information for security and system managers to take the actions to make the most critical changes needed to reduce risk and improve their grades
Making these assessments on a continuous or nearly continuous basis is a prerequisite for moving IT security management from isolated assessments, supporting infrequent authorization decisions, to continuous risk management as described in current NIST guidance and OMB mandates.
The CAESARS approach provides a means of monitoring the security controls in place and
focusing staff efforts on those most likely to enhance the agency's information security posture.

The system consolidates and scores data from multiple network and computer security
monitoring applications into a single point and presents the data in an easy-to-comprehend
manner. The system allows a highly distributed workforce with varying levels of skill and
authority to recognize security issues within their scope of control. Once recognized, the staff
can then focus their efforts on the actions that remediate the highest-risk vulnerabilities.
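A minimal sketch of that easy-to-comprehend rollup follows; the letter-grade thresholds are an agency policy choice, and the numbers and finding names below are invented for illustration (Appendix F contains sample risk scoring formulas):

    def letter_grade(average_risk_per_host: float) -> str:
        # Map an aggregate score to a simple grade; thresholds are illustrative only.
        for ceiling, grade in [(10.0, "A"), (20.0, "B"), (40.0, "C"), (80.0, "D")]:
            if average_risk_per_host < ceiling:
                return grade
        return "F"

    # Rank open findings so staff see the highest-value fixes first.
    findings = [("missing_security_patch", 9.0), ("weak_password_policy", 4.0)]
    for name, score in sorted(findings, key=lambda f: f[1], reverse=True):
        print(f"{score:5.1f}  {name}")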

1.5 CAESARS Reference Architecture Concept of Operations


1.5.1 Definition
System life cycle is defined by NIST SP 800-64, Rev. 2 (Security Considerations in the System Development Life Cycle, October 2008) as Initiation, Development/Acquisition, Implementation/Assessment, and Operations & Maintenance. (Note: Activities in the Disposal Phase are not a part of continuous monitoring activities.)
Risk management framework (RMF) is defined by NIST SP 800-37, Rev. 1 (Guide for Applying the Risk Management Framework to Federal Information Systems, February 2010) as Categorize Information System, Select Security Controls, Implement Security Controls, Assess Security Controls, Authorize Information System, and Monitor Security Controls.
Figure 1 illustrates the security activities in a system life cycle using the NIST-defined RMF.
Figure 1. Continuous Monitoring of a System's Security Posture in the NIST-Defined System Life Cycle and Risk Management Framework
[Figure 1 shows the SDLC phases defined in NIST SP 800-64, Rev. 2 (Initiation, Development/Acquisition, Implementation/Assessment, and Operations & Maintenance) with the security activities performed in each phase: preliminary risk assessment, definition of information protection needs, and FIPS 199 security categorization; selection and verification of security configuration benchmarks; implementation of the benchmarks and security assessment to validate implemented controls and record residual risks; review, negotiation, and approval of deviations by Approving Authorities; and ongoing tracking of approved deviations and monitoring, reporting, and management of implemented controls by ISSOs and the Security PMO. These activities are aligned with the NIST SP 800-37, Rev. 1 RMF steps: Categorize, Select, Implement, Assess, Authorize, and Monitor.]

1.5.2 Operating Principles


The Risk Scoring program is intended as an agency-wide program, directed by the Senior
Agency Information Security Officer in support of its organizational mission. It is applicable to
agencies and IT infrastructures that are both centralized and decentralized, and its application is
especially valuable in an organization where the control of IT assets is widely distributed and not
under the direct control of any one organization. The program is intended to meet the following
objectives:
- Measure risk in multiple areas
- Motivate administrators to reduce risk
- Motivate management to support risk reduction
- Measure improvement
- Inspire competition
- Provide a single score for each host
- Provide a single score for each site
- Provide a single score for the enterprise
- Be scalable to additional risk areas that can permit automated continuous monitoring
- Score a given risk only once
- Be demonstrably fair
The Risk Scoring program at DOS evolved in three separate stages:
- Deployment of enterprise management tools to monitor weaknesses
- Delivery of operational monitoring data to the field in an integrated application
- Establishment of a risk scoring program that fairly measures and assigns risk
The reference architecture was developed to fit the multiple organizational structures, network
infrastructures, geographic distribution, and existing tools available to support these goals. It will
provide local administrators with a single interface for direct access to the monitoring data for
objects within their scope of responsibility; it will also provide a single source for enterprise-level management reporting across a variety of monitoring data.
A generic CAESARS reference architecture for other federal agencies would support
development and operation of similar systems for multiple agencies that fit their enterprise
architectures and do not constrain their choice of tools. CAESARS is intended for use by
multiple agencies to support their individual adoption of comparable processes and the tools to
support those processes. A modular architecture also supports the sharing of subsets of the
solution (such as a government-wide risk scoring algorithm) while allowing agencies to use the
algorithm in the manner that best fits their specific size, structure, and complexity.
Depending on the agency needs, resources, and governance models, CAESARS could:
- Enable agencies to see how their existing tools and processes could be adapted to such a framework
- Enable agencies to identify gaps in their own set of solution tools, based on the architectural constructs developed
- Support a build-or-buy decision for acquiring continuous risk scoring tools and services
- Provide standardized approaches to statements of work (SOW) or performance work statements (PWS) to acquire the needed services and products
- Establish an accepted framework in which more detailed technical and functional requirements can be developed

1.5.3 Relationship of CAESARS to CyberScope


Current plans for summarizing federal agency progress and programs for continuous monitoring
of IT security are detailed in OMB M-10-15, FY2010 Reporting Instructions for the Federal Information Security Management Act and Agency Privacy Management, dated April 21, 2010.
The OMB memo includes the following:
For FY 2010, FISMA reporting for agencies through CyberScope, due November 15,
2010, will follow a three-tiered approach:
1. Data feeds directly from security management tools
2. Government-wide benchmarking on security posture
3. Agency-specific interviews
Further guidance on item 1, direct data feeds, says:
Agencies should not build separate systems for reporting. Any reporting should be a by-product of agencies' continuous monitoring programs and security management tools. Beginning January 1, 2011, agencies will be required to report on this new information monthly.
And it provides additional details:
The new data feeds will include summary information, not detailed information, in the following areas for CIOs:
- Inventory
- Systems and Services
- Hardware
- Software
- External Connections
- Security Training
- Identity Management and Access
The types of information that OMB requires to be reported through CyberScope are broader in
scope than the status of individual assets, which are the focus of the CAESARS reference
architecture. Nevertheless, the CAESARS reference architecture can directly support the
achievement of some of the OMB objectives by ensuring that the inventory, configuration, and
vulnerabilities of systems, services, hardware, and software are consistent, accurate, and
complete. A fundamental underpinning of both the CAESARS reference architecture and the
OMB reporting objectives is full situational awareness of all agency IT assets.

1.5.4 Cautionary Note: What Risk Scoring Can and Cannot Do


Risk scoring can complement and enhance the effectiveness of security controls that are
susceptible to automated monitoring and reporting, comparing asset configurations with
expected results from an approved security baseline. It can provide direct visualization of the
effect of various scored risk elements on the overall posture of a site, system, or organization.
Risk scoring is not a substitute for other essential operational and management controls, such as
incident response, contingency planning, and personnel security. It cannot determine which IT
systems have the most impact on agency operations, nor can it determine how various kinds of
security failures (loss of confidentiality, integrity, and availability) will affect the functions and mission of the organization. In other words, risk scoring cannot score risks about which it has no information. However, when used in conjunction with other sources of information, such as the FIPS-199 security categorization and automated asset data repository and configuration management tools, risk scoring can be an important contributor to an overall risk management strategy. Such strategies will be considered in future versions of CAESARS.

1.5.5 CAESARS and Risk Management


It cannot be overemphasized that, while CAESARS can serve a necessary and valuable function
within an organization's risk management function, the Reference Architecture is not, nor ever
can be, a full replacement for that function. CAESARS does directly address many of the major
aspects of risk management, but there are other aspects that CAESARS does not, nor ever will,
consider. Three such areas merit particular mention.
First, in modern terms, risk management is an interplay between threat and criticality. Both
threat and criticality can often be qualitatively assessed as low, moderate, or high. Criticality is
normally addressed in the system's FIPS-199 Security Categorization. Threats can be similarly
assessed as to their severity, and the two must be combined to determine overall risk. A high-severity threat on a low-criticality system and a low-severity threat on a high-criticality system may both pose the same overall risk, which itself may then range from low to high.
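For illustration only (CAESARS itself sees only the threat side of this interplay, and the level arithmetic below is an assumption for the example, not a defined method), combining the two qualitative ratings might look like:

    LEVELS = {"low": 0, "moderate": 1, "high": 2}

    def overall_risk(threat_severity: str, criticality: str) -> str:
        # Combine the two qualitative ratings into an overall qualitative risk.
        combined = LEVELS[threat_severity] + LEVELS[criticality]
        return {0: "low", 1: "moderate", 2: "moderate", 3: "high", 4: "high"}[combined]

    print(overall_risk("high", "low"))   # -> moderate
    print(overall_risk("low", "high"))   # -> moderate (the same overall risk)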
CAESARS operates primarily in two parallel processes: (i) assessing system configurations
(hardware, software, network, etc.) against pre-defined standards, and (ii) detecting the presence
of known vulnerabilities. The CAESARS architecture is limited to considerations of the threat
that is posed by deviations from standards or by the presence of vulnerabilities. CAESARS does
not have access to any information concerning the criticality of a system or its assets. Therefore,
all discussion of risk in CAESARS must be interpreted in these terms: threats detected and
reported by CAESARS can be assessed for their severity, and may pose overall risk ranging from
low up to the highest that the subject system could ever face: the failure of its mission and/or the
compromise/destruction of its most critical assets. But, lacking knowledge of specific system
criticalities, CAESARS itself can go no further.
Second, CAESARS is not, in itself, a full risk management system. CAESARS cannot, for
instance, evaluate security controls in the NIST SP 800-53 Planning or Management families,
such as those for capital planning or risk assessment. It cannot create, evaluate, or manage
security policies. CAESARS may have interaction with other automated security tools designed
to address Operational or Technical controls, such as those for configuration management,
auditing, or intrusion detection, but it does not itself replace those tools.
Finally, CAESARS cannot itself replace a risk management organization. CAESARS can report its findings to the system's owners and security administrators and to identified management officials. But CAESARS cannot replace the organizational functions of prioritization or tasking that are needed for remediation. Moreover, if an organization is federated (i.e., if it consists of
enclaves that function with relatively independent asset sensitivity, ownership and security
policies), then each enclave might have its own individual version of CAESARS. Integrating the
CAESARS results from the various enclaves into a single picture that is useful to the full
enterprise would require policy decisions and procedures that are beyond CAESARS' intended scope.


1.5.6 Risk Management Process


The operating concept for CAESARS is based on the NIST-defined RMF (NIST SP 800-37, Rev. 1, Guide for Applying the Risk Management Framework to Federal Information Systems: A System Life Cycle Approach, February 2010) and US-CERT Software Assurance (SwA) guidelines (DHS/National Cyber Security Division (NCSD)/US-CERT SwA Processes and Practices Working Group, https://buildsecurityin.us-cert.gov/swa/procwg.html). As illustrated in Figure 1, the RMF is composed of a six-step process conducted throughout a system life cycle:
Step 1: Categorize Information System
To determine the information protection needs, a project security engineer working as part of a system engineering team shall work with the information system security officer (ISSO) to:
- Perform the Federal Information Processing Standard (FIPS) 199 security categorization
- Identify potential information security flaws or weaknesses in an information system (i.e., potential vulnerabilities)
- Identify the potential threat and likelihood of a threat source so that the vulnerability can be addressed
Step 2: Select Security Controls
Based on the configurations of a designed information system, the security engineer shall work with the system owner and the ISSO to:
- Select the security configuration benchmarks based on the security category and computing platforms in order to formulate the initial system security configuration baseline
Step 3: Implement Security Controls
Once the security configuration baseline has been established:
- The system administrator shall implement the security configuration settings according to the established security configuration baseline
Step 4: Assess Security Controls
After the security configuration baseline has been implemented in a system-under-development (i.e., a system that has not yet been formally authorized by an Authorizing Official (AO)):
- The CAESARS system shall perform assessments based on the approved baseline
- A security control assessor or ISSO shall report the level of compliance and identify deviations from the approved security configuration baseline
- The ISSO shall review the security configuration assessment reports and determine corrective action or recommendations for residual risks
Step 5: Authorize Information System


In this step, an agency Authorizing Official (AO) reviews the security configuration assessment report (prepared by the security analyst or ISSO) and:
- Formally approves the new security configuration baseline with a risk-based decision (RBD)
Step 6: Monitor Security Controls
Once the system-under-development has formally transitioned into a system-in-operation,
the CAESARS system shall perform automated assessments periodically to maintain the baseline
security posture. However, existing processes for doing this must still be followed:
- If a software patch is required, the formally approved security configuration baseline must be updated through a change control process.
- If a software upgrade or configuration change is significant, then the ISSO must re-baseline the new system configuration by initiating Step 2 of the RMF process.
The CAESARS reference architecture is intended to be tailored to fit within this six-step framework, the requirements of NIST SP 800-53, the agency's information security program plan, and the agency's enterprise security architecture. It can help implement the use of common
controls, as it functions as a common control for risk assessment and configuration management
across the scope of the enterprise that it covers. Future versions of CAESARS will address even
more comprehensive integration of the RMF and SwA with CAESARS operations.

1.6 The CAESARS Subsystems


The remainder of this document describes a modularized architecture that contains the required
components of an enterprise security risk scoring system modeled on current agency
implementations. The recommended architectural approach consists of four architectural
subsystems, functions and services within those subsystems, and expected interactions between
subsystems.
The four subsystems of CAESARS, as depicted in Figure 2 below, are:
- Sensor Subsystem
- Database/Repository Subsystem
- Analysis/Risk Scoring Subsystem
- Presentation and Reporting Subsystem
This modular division of the subsystems allows a number of advantages. Chief among these is
that the internal implementation of one subsystem is functionally independent of that of any
other subsystem. The technology of any one subsystem can be procured or developed, altered or
even replaced independently of another subsystem. The technology within a subsystem could be
replicated to provide failure backup, with the two implementations using differing technologies
to provide technical diversity. Software maintenance efforts are also independent across
subsystems.
The CAESARS Database/Repository Subsystem, for example, could include a commercial off-the-shelf (COTS) database based on MS SQL or Oracle (or both, side by side), or it could include an internally developed product. This independence also applies to the various steps within the procurement process, such as requirements development (for COTS products) or creation and execution of SOWs and PWSs (for system development).
The modular architecture allows for multiple adjacent subsystems. A single CAESARS
Database/Repository Subsystem could interface with multiple CAESARS Analysis/Risk Scoring
Subsystems (e.g., one a COTS analysis product and another internally developed) and even offer
differing services to each. This feature would allow a single CAESARS Database/Repository
Subsystem to interface with multiple CAESARS Analysis/Risk Scoring Subsystems, for
example, at both the local (site or region) level and the enterprise-wide level.
Similarly, a single CAESARS Analysis/Risk Scoring Subsystem could provide data to multiple
CAESARS Presentation and Reporting Subsystem components.

1.7 Document Structure: The Architecture of CAESARS


In the remaining sections of this paper, the four subsystems of CAESARS are described in detail.
In those cases where well-defined products, particularly COTS products, have been used, these
products are identified and placed into their appropriate subsystem and context.7 Their interfaces
are described as fully as is practical for this paper. Future work may include identifying
alternative products that might also be used within each subsystem and context.

7 The use of trade names and references to specific products in this document does not constitute an endorsement of those products. Omission of other products does not imply that they are either inferior or superior to products mentioned herein for any particular purpose.


Figure 2. Contextual Description of the CAESARS System

1.7.1 CAESARS Sensor Subsystem


Section 2 of this document describes the Sensor Subsystem. The Sensor Subsystem includes the
totality of the IT assets that are the object of CAESARS monitoring activities. It includes all
platforms upon which CAESARS is expected to report, including end-user devices, database
servers, network servers, and security appliances. The Sensor Subsystem does not include
platforms for which federal agencies have no administrative responsibility or control and which
CAESARS is not expected to monitor and report. For example, it could include federal
contractor IT systems but it would not include the public Internet.
CAESARS may also play a role in cases where federal agencies contract with third-party
providers for data processing and/or storage services (e.g., cloud computing services). It may
be possible to require, in the contracts Service Level Agreement (SLA), that the providers
compliance data be made available to real-time federal monitoring, analysis, and reporting (at
least at a summary level, if not down to the sensors themselves). This possibility will be
examined in future versions of CAESARS.

A primary design goal of CAESARS is to minimize the need for client platforms themselves to
contain or require any specific executable components of CAESARS. The data to be gathered
and reported to CAESARS is collected by systems that are already in place on the client
platforms or that will be provided by the enterprise. The platforms of the Sensor Subsystem are
assumed to have already installed the tools that will gather the configuration and vulnerability
data that will be reported to CAESARS. For example, those platforms that run the Microsoft
Windows operating system are assumed to have already in place the native Windows security
auditing system, and the server platforms are assumed to have their own similar tools already in
place. Enterprise tools such as Active Directory likewise have their own native auditing
mechanisms. Similarly, client platforms may already have installed such tools as anti-virus, anti-spam, and anti-malware controls, either as enterprise-wide policy or through local (region- or
site-specific) selection.
CAESARS, per se, does not supply these data collection tools, nor does it require or prohibit any
specific tools. The data that they collect, however, must be transferred from the client platforms
to the CAESARS Database/Repository Subsystem on an ongoing, periodic basis. The tools for
transferring this data are specified in the CAESARS Sensor-to-Database protocol. The transfer
can follow either a push or pull process, or both, depending upon enterprise policy and local
considerations.
In the data push process, the scheduling and execution of the transfer is controlled by the local
organization, possibly by the platform itself. This allows maximum flexibility at the local level
and minimizes the possibility of interference with ongoing operations. But the push process is
also more likely to require that specific CAESARS functionalities be present on client platforms.
CAESARS components required for data push operations are part of the CAESARS Sensor-to-Database protocol, and are described in Section 2 as part of the Sensor Subsystem.
In the data pull process, the scheduling and execution of the data transfer is controlled by the
CAESARS Database/Repository Subsystem. CAESARS interrogates the platforms on an
ongoing, periodic basis, and stores the resulting data in the CAESARS Database/Repository. The
pull paradigm minimizes the need for any CAESARS components on individual client platforms,
but also provides less scheduling flexibility at the local level and may also interfere with existing
operations. The pull paradigm may also involve directly integrating the CAESARS
Database/Repository Subsystem with numerous and disparate individual platform sensors,
negating the benefits of subsystem modularity and independence. CAESARS components
required for data pull operations are part of the CAESARS Sensor-to-Database protocol, and are
described in Section 3 as part of the CAESARS Database/Repository Subsystem.
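As an illustration of the two transfer modes, the following minimal sketch (in Python) shows a push submission and a pull interrogation. The repository URL, the platform addresses, and the collect_local_findings() helper are hypothetical names introduced only for illustration; they are not defined by this reference architecture.

    # Minimal, hypothetical sketch of the push and pull transfer modes.
    # The endpoint URL, platform addresses, and helper names are assumptions.
    import json
    import urllib.request

    REPOSITORY_URL = "https://caesars-repo.example.gov/findings"   # assumed endpoint

    def collect_local_findings():
        """Placeholder for data gathered by tools already present on the platform."""
        return {"platform_id": "WS-0001", "findings": []}

    def push_findings():
        """Push mode: the platform or local organization schedules and initiates the transfer."""
        payload = json.dumps(collect_local_findings()).encode("utf-8")
        request = urllib.request.Request(
            REPOSITORY_URL, data=payload,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:   # HTTPS assumed for confidentiality
            return response.status

    def pull_findings(platform_urls):
        """Pull mode: the Database/Repository Subsystem interrogates each platform in turn."""
        results = []
        for url in platform_urls:
            with urllib.request.urlopen(url) as response:
                results.append(json.loads(response.read().decode("utf-8")))
        return results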

1.7.2 CAESARS Database/Repository Subsystem


Section 3 of this reference architecture describes the CAESARS Database/Repository
Subsystem. The CAESARS Database/Repository Subsystem includes the totality of the data
collected by the Sensor Subsystem and transferred up to CAESARS. It also includes any tools
that are required by the CAESARS Database/Repository to perform data pull operations from the
Sensor Subsystem platforms.
CAESARS does not impose any specific database design or query requirements at the
CAESARS Database/Repository Subsystem; these are the purview of the CAESARS
Analysis/Risk Scoring Subsystem. However, the CAESARS Database/Repository Subsystem
does include tools for accessing the data in order to allow routine maintenance and manual error
correction.
CAESARS also does not impose any specific requirements on the connectivity from the Sensor
Subsystem platforms to the CAESARS Database/Repository Subsystem. Such connectivity is
assumed to exist already, with sufficient capacity to handle the anticipated CAESARS data
traffic. However, CAESARS data is presumed to be sensitive, and the security of the monitoring
process is discussed in Section 3. The connectivity must allow for the confidentiality, integrity,
and availability of the data transfer.

1.7.3 CAESARS Analysis/Risk Scoring Subsystem


Sections 4 and 5 of this reference architecture describe the CAESARS Analysis/Risk Scoring
Subsystem. One crucial design goal of CAESARS is that CAESARS data be transferred to, and
secured within, the CAESARS Database/Repository before most analysis is performed. This
allows the CAESARS Analysis/Risk Scoring Subsystem to consist of multiple analytic tools,
possibly querying the same or different portions of the database, with either local (region- or site-specific) or enterprise-wide perspectives, yet still provide confidence that no single type of
analysis will influence or skew any other. Thus, the primary component of the CAESARS
Database/Repository Subsystem will be the schema of the database itself, which any analytic tool
must follow to access the database.
Sections 4 and 5 also discuss examples of existing analytic tools as well as some available COTS
tools. The purpose of this discussion is not to require or recommend any specific tools or modes
of analysis, nor to recommend any specific COTS products, but merely to underscore the
advantage that a wide variety of tools and modes can bring to the analysis process.
CAESARS does not impose any specific requirements on the connectivity from the CAESARS
Database/Repository Subsystem to the CAESARS Analysis/Risk Scoring Subsystem. Such
connectivity is assumed to exist already, with sufficient capacity to handle the anticipated
CAESARS analysis traffic. However, CAESARS data is presumed to be sensitive, and the
security of the analysis process is discussed in Section 4. The connectivity must allow for the
confidentiality, integrity, and availability of the data, and the CAESARS Database/Repository
Subsystem limits region and site officials to accessing only the data from their own client set.
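As a simple illustration of this scoping, the sketch below assumes a relational findings store reachable through the master schema; the table and column names are invented for the example and are not part of any CAESARS-defined schema.

    # Hypothetical sketch: an analysis tool reads findings through a shared schema,
    # scoped to its own site or region. Table and column names are assumptions.
    import sqlite3

    def findings_for_site(db_path, site):
        """Return findings visible to one site; the WHERE clause enforces the scope."""
        connection = sqlite3.connect(db_path)
        try:
            cursor = connection.execute(
                "SELECT asset_id, finding_id, severity "
                "FROM findings WHERE site = ?", (site,))
            return cursor.fetchall()
        finally:
            connection.close()

    # A regional analysis tool would call, for example,
    # findings_for_site("caesars.db", "Region-3") and see only Region-3 rows.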

1.7.4 CAESARS Presentation and Reporting Subsystem


Section 6 of this reference architecture describes the CAESARS Presentation and Reporting
Subsystem. The CAESARS Presentation and Reporting Subsystem can include multiple
presentation tools, with either local or enterprise-wide perspectives, yet still provide confidence
that no one type of presentation will influence or skew any other. The CAESARS Analysis/Risk
Scoring Subsystem allows for presentation tools that are directly interfaced to an existing
analysis tool or tool set that combines ad hoc analysis and presentation in a single component.
This flexibility means that the CAESARS Analysis-to-Presentation-Subsystem protocol cannot
be specified globally, but rather is tool-specific. A combined analysis/presentation tool that must
access the CAESARS Database/Repository thus must also follow the Master Schema protocol.
Likewise, the connectivity and security comments made above for the CAESARS Analysis/Risk
Scoring Subsystem also apply to the CAESARS Presentation and Reporting Subsystem.
2. The Sensor Subsystem


2.1 Goals
The six primary goals identified in the Sensor Subsystem are:
The ability to monitor and identify weaknesses in security configuration settings in
computing and networking assets throughout their system life cycles, enterprise-wide, in
a timely manner
The ability to leverage existing security management tools as much as possible
The ability to assess and report the security configuration settings of computing assets,
enterprise-wide, across multiple enclaves, domains, and geographical regions
The ability to assess and report the security configuration settings of heterogeneous
computing and networking platforms, such as:
Operating Systems: Windows-based, UNIX-based computing platforms, and IOS-based networking platforms
Applications: Office productivity suite, anti-virus system, web servers, web
application servers, and database servers
Virtualized Servers: Windows- and UNIX-based servers in a computing grid
The ability to assess and report the patch-level of heterogeneous computing and
networking platforms, such as:
Operating Systems: Windows-based, UNIX-based computing platforms, and IOS-based network platforms
Applications: Office productivity suite, anti-virus system, web servers, web
application servers, and database servers
Virtualized Servers: Windows- and UNIX-based servers in a computing grid
The ability to assess and report vulnerabilities of heterogeneous networking and
application platforms, such as:
Computing assets: Windows-based, UNIX-based computing platforms and IOS-based
network platforms
Software applications: Office productivity suite, anti-virus system, web servers, web
application servers, and database servers

2.1.1 Definitions
A security configuration benchmark is a set of recommended security configuration settings
specified by the Federal CIO Council in the United States Government Configuration Baseline
(USGCB)8 or by the NIST National Checklist Program, which originates from the National
Vulnerability Database (NVD)9 and is sponsored by the DHS National Cyber Security Division/US-CERT.

8
United States Government Configuration Baseline (USGCB) website: http://usgcb.nist.gov/
A risk-based decision (RBD) is an explicit policy decision by an organization to accept the risk
posed by a specific vulnerability and to forego implementing any further countermeasures
against it. In essence, an RBD is a decision to accept a specific vulnerability as part of a
benchmark. Most commonly, an organization adopts an RBD in cases where the countermeasure
either is not available at all, is available but ineffective, or is available only at a cost (in money,
time, or operational impact) that exceeds the expected damage that the vulnerability itself would
cause. Clearly, such a crucial policy decision should be made only at the Enterprise or
Mission/Business level.
A security configuration baseline is a set of security configuration benchmarks with agency-specified software patches and RBDs.
Figure 3 illustrates the relationships between security configuration benchmarks, baseline, and
RBDs. A baseline is composed of multiple security configuration benchmarks and agency-specified software patches. Non-conformance is a deviation from the agency-specified baseline.
An RBD is a formal acceptance of deviation from the approved benchmark. It should be noted
that an RBD is made within the Analysis/Risk Scoring Subsystem. Sensors only assess the state
of security configuration settings in a computing/networking asset against a prescribed security
configuration baseline.
Figure 3. Relationships Between Security Configuration Benchmarks, Baseline, and RBDs
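The relationships shown in Figure 3 can also be summarized in a short sketch. The identifiers below are invented for illustration only; the set arithmetic simply restates that a baseline combines benchmark settings and agency-specified patches, that sensors report deviations from the baseline, and that RBDs are applied afterward in the Analysis/Risk Scoring Subsystem.

    # Illustrative sketch of the Figure 3 relationships; identifiers are invented.
    benchmark_settings = {"CCE-0001", "CCE-0002", "CCE-0003"}
    agency_patches = {"PATCH-KB001", "PATCH-KB002"}
    baseline = benchmark_settings | agency_patches        # the agency baseline

    # Sensors report only the asset's actual state against the baseline.
    compliant_items = {"CCE-0001", "PATCH-KB001"}
    non_conformances = baseline - compliant_items          # deviations found by sensors

    # RBDs are applied later, in the Analysis/Risk Scoring Subsystem.
    rbds = {"CCE-0003"}                                    # formally accepted deviations
    unaccepted_deviations = non_conformances - rbds
    print(sorted(unaccepted_deviations))                   # ['CCE-0002', 'PATCH-KB002']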

2.1.2 Operating Environment Assumptions for the Sensor Subsystem


CAESARS must be capable of operating in a variety of enterprise IT environments, including
those with the following features:

9
The National Checklist Program is available at the National Vulnerability Database website: http://nvd.nist.gov/

Diverse network domains. The enterprise may be composed of multiple network domains
that may or may not have trusted relationships.
Geographically diverse networks. A geographically diverse enterprise may be
interconnected through networks that may or may not have sufficient bandwidth to
support monitoring in continuous real time. That is, it may be necessary for some
portions of the enterprise to collect and store sensor data, then forward that data at a later
time as transmission bandwidth becomes available. Clearly, this will necessitate a time
lag between data collection and reporting.
Disconnected computing assets. An enterprise may have computing assets that are not
continuously connected to an agency's enterprise but for which the agency must account
nonetheless. Again, periodic connections must be used to transmit the data, necessitating
a time lag between collection and reporting.
Remote office. Some computing assets may be connected to the agency's network only
remotely (e.g., over public channels such as the Internet). The data being reported is
sufficiently sensitive that its confidentiality, integrity and availability must be ensured
using explicit security measures, such as a virtual private network (VPN).
The CAESARS Sensor Subsystem must be capable of assessing the security configuration
settings specified by the USGCB. The USGCB is a federal government-wide initiative that
provides guidance to agencies on what should be done to improve and maintain effective
configuration settings focusing primarily on security. The USGCB security configuration
baseline is evaluated by the Federal CIO Council's Architecture and Infrastructure Committee
(AIC) Technology Infrastructure Subcommittee (TIS) for federal civilian agency use.10 The
USGCB security configuration settings originate from the NIST National Checklist Program (NCP),
to which government and industry contribute their best practices in the form of guidelines (such
as Defense Information Systems Agency [DISA] Security Technical Implementation Guides
[STIG] and Center for Internet Security [CIS] Benchmarks) and the Federal Desktop Core
Configuration (FDCC).
The CAESARS Sensor Subsystem shall be capable of assessing the security configuration
baseline in one or more of the aforementioned operating environments: a single unified
network domain; diverse network domains; geographically diverse networks; disconnected
computing assets; and remote offices.
The CAESARS Sensor Subsystem shall provide the ability to perform periodic, pre-scheduled
automated security assessments.
The CAESARS Sensor Subsystem shall also provide the ability to perform on-demand security
assessments of a specified target asset.

2.2 Solution Concept for the Sensor Subsystem


As described in Section 2.1.2, Operating Environment Assumptions for the Sensor Subsystem,
the CAESARS architecture provides the ability to assess the security configuration settings on a
set of heterogeneous computing and networking platforms. Principally, the Sensor Subsystem

10

United States Government Configuration Baseline (USGCB) is located at: http://usgcb.nist.gov/.

components assess and collect the security configuration settings of target computing and
networking assets and then compare them to a set of configuration baselines.11 The findings are
aggregated at the CAESARS Database/Repository Subsystem to support the risk assessment and
scoring process.
Figure 4 is a contextual description of the Sensor Subsystem. In general, an enterprise has many
types of point solutions to assist in monitoring, measuring, and managing the security posture of
computing and networking assets. In Figure 4, the Targets region refers to the totality of the
organization's IT assets upon which CAESARS is expected to monitor and report. The target
set can include platforms (desktops, laptops, palm and mobile devices, etc.), servers (web and
communications, file, database, security appliances, etc.), and communications links as well as
their associated hardware, software, and configuration components. Platforms and
communications links can be physical or virtual. In fact, the CAESARS architecture is expressly
intended to support, as much as possible, the entire range of IT assets in use throughout the
federal government.
Figure 4. Contextual Description of the Sensor Subsystem

11

In this case, target means target-of-evaluation (TOE). It refers to the agency's computing and networking assets.

Referencing the DOS's iPost risk scoring elements and the IRS SCPMaR system, the
recommended security point solutions are:
For Security Configuration Compliance
These sensors are designed to verify and validate the implemented security
configuration settings in a computing asset or in networking equipment.
FDCC Scanner to measure level of compliance to the FDCC security configuration
baseline on desktop workstations and laptops
Authenticated Security Configuration Scanner to measure level of compliance to
the agency-defined set of security configuration baselines on computing and network
assets
For Patch-level Compliance
Authenticated Vulnerability and Patch Scanner to measure level of compliance to
NIST-defined or agency-defined set of patches and to identify and enumerate
vulnerabilities associated with computing and network assets
For Vulnerability Assessment
Unauthenticated Vulnerability Scanner to identify and enumerate vulnerabilities
associated with computing and network assets
Network Vulnerability Scanner to identify and enumerate vulnerabilities associated
with computing and network assets
Database Vulnerability Scanner to identify and enumerate vulnerabilities associated
with database systems
Web Vulnerability Scanner to identify and enumerate vulnerabilities associated
with web applications
Network Configuration Management Tool to provide configuration settings on network
equipment for security configuration scanner
Anti-virus Tool to provide identification data on the latest virus definitions for security
configuration scanner
Table 1 is a summary of the aforementioned tools for providing security measurement data to
the CAESARS Database/Repository Subsystem.
Table 1. Recommended Security Tools for Providing Data to Support Risk Scoring

Reference Risk Scoring Element: Security Configuration Compliance (to measure the degree of
compliance to agency-specified security configuration baselines)
Recommended Security Assessment Tools: Federal Desktop Core Configuration (FDCC) Scanner
(SCAP-validated); Authenticated Security Configuration Scanner (SCAP-validated)

Reference Risk Scoring Element: Patch-level Compliance (to measure the degree of compliance to
agency-specified patch-level baselines)
Recommended Security Assessment Tools: Authenticated Vulnerability and Patch Scanner
(SCAP-validated)

Reference Risk Scoring Element: Vulnerability (to discover and identify potential vulnerabilities
that may affect the agency's security posture)
Recommended Security Assessment Tools: Unauthenticated Vulnerability Scanner (SCAP-validated);
Network Vulnerability Scanner (CVE-compatible)12; Web Vulnerability Scanner (CVE-compatible);
Database Vulnerability Scanner (CVE-compatible)

Reference Risk Scoring Element: Virus Definition Status (to ensure that the latest virus
definition has been implemented in all computing assets)
Recommended Security Assessment Tools: Anti-Virus System logs; Authenticated Security
Configuration Scanner (SCAP-validated)

2.2.1 Tools for Assessing Security Configuration Compliance


In 2002, a joint study conducted by the National Security Agency (NSA), CIS, and the MITRE
Corporation found that, on average, about 80 percent of known computer vulnerabilities were
attributed to misconfiguration or missing patches.13 Since then, many tools have been developed
to assist in the assessment of security configuration settings in computing platforms and network
equipment. The well-known tools in the federal (.gov) domain are DISA Security Readiness
Review (SRR) Scripts for Windows- and UNIX-based computing platforms, CIS Benchmark Audit
Tools, and the Router Audit Tool (RAT) for IOS-based network equipment. However, most of
these tools are designed for security auditors and system administrators, not for continuous
monitoring. Recently, NIST, NSA, and DISA, working jointly with industry, have defined a set of
new automated security configuration assessment tools based on the NIST-specified Security
Content Automation Protocol (SCAP).14 Currently, there are two types of tools designed for
assessing the degree of compliance to established security configuration baselines: the FDCC
Scanner and the Authenticated Security Configuration Scanner.

What is SCAP?
SCAP is a suite of specifications that standardize the format and nomenclature by which security
software products communicate software flaw and security configuration information. SCAP is a
multi-purpose protocol that supports automated vulnerability checking, technical control
compliance activities, and security measurement. The technical composition of SCAP Version 1.0
comprises six specifications and their interrelationships: eXtensible Configuration Checklist
Description Format (XCCDF), Open Vulnerability and Assessment Language (OVAL), Common Platform
Enumeration (CPE), Common Configuration Enumeration (CCE), Common Vulnerabilities and Exposures
(CVE), and Common Vulnerability Scoring System (CVSS). These specifications are grouped into the
following three categories: languages; enumerations; and vulnerability measurement and scoring
systems. Source: NIST SP 800-126. See Appendix A for more details.

12
CVE: Common Vulnerabilities and Exposures. CVE is a dictionary of publicly known information security
vulnerabilities and exposures. CVE's common identifiers enable data exchange between security products and
provide a baseline index point for evaluating coverage of tools and services.
13
Miuccio, B., CIS Research Report Summary: CIS Benchmark Security Configuration Eliminate 80-90% of Known
Operating System Vulnerabilities, Center for Internet Security, 2002.
14
NIST SP 800-126, The Technical Specification for the Security Content Automation Protocol (SCAP): SCAP
Version 1.0, November 2009.

2.2.1.1 Assessing Compliance to the FDCC Baseline
OMB M-07-18, issued in 2007, requires all federal agencies to implement FDCC for all federal
desktop and laptop computing assets. The
FDCC standard is composed of multiple security configuration settings:


Microsoft Windows XP Professional Edition: Operating System, Personal Firewall, and
IE7
Microsoft Windows Vista: Operating System, Personal Firewall, and IE7
To verify an agency's implementation of FDCC, OMB requires that both industry and
government information technology providers must use SCAP validated tools and that
agencies must also use these [SCAP-validated] tools when monitoring use of these
configurations as part of FISMA continuous monitoring.15
Using a NIST SCAP-validated FDCC Scanner, the assessment results are required to be
produced in XCCDF Results Format. However, XCCDF Results Format can be bulky; therefore,
under the XCCDF specification, an XML schema can be created to abridge the full assessment
results to just the deviations from the FDCC baseline (for example, NSA Assessment Results Format
[ARF] or Assessment Summary Results [ASR]).16
Target of Evaluation:
Workstation and Laptop Operating Systems: Microsoft Windows-based computing
assets:
Windows XP
Windows Vista
Windows 7
Required Input:
Asset inventory baseline
Agency-defined security configuration baselines (described using NIST-specified
SCAP)17
Required Output:
OMB/NIST-specified XCCDF Results format and reporting spreadsheet18
XCCDF Results format using NIST-specified or agency-specified XML schema
definition (The purpose of the schema is to enable FDCC Scanner product vendors to
export the validated deviations into the CAESARS Database/Repository Subsystem.)
Required Interface:
Database Subsystem Enterprise Service Bus (ESB)-provided secured web service (WS)

15
OMB M-08-22, Guidance on the Federal Desktop Core Configuration (FDCC), August 11, 2008.
16
Assessment Results Format (ARF) and Assessment Summary Results (ASR) are emerging standards, not formally
approved by NIST.
17
As of April 23, 2010. The United States Government Configuration Baseline can be downloaded from
http://usgcb.nist.gov/.
18
Per OMB M-09-29, agencies are to report compliance status of FDCC using the XCCDF Results and reporting
spreadsheet as defined by NIST in FDCC Compliance Reporting FAQs 2008.03.04
(http://nvd.nist.gov/fdcc/fdcc_reporting_faq_20080328.cfm).

Figure 5. Contextual Description of Interfaces Between an FDCC Scanner Tool and the Database/Repository Subsystem
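To make the FDCC Scanner interface just described more concrete, the following hypothetical sketch abridges an XCCDF result file to its failed rules and submits them to the Database/Repository web service. The file name, the endpoint URL, and the JSON envelope are assumptions; a production tool would follow the NIST-specified or agency-specified schema rather than this ad hoc format.

    # Hypothetical sketch: extract XCCDF deviations and post them to the
    # Database/Repository web service. Endpoint and payload format are assumptions.
    import json
    import urllib.request
    import xml.etree.ElementTree as ET

    XCCDF_NS = "{http://checklists.nist.gov/xccdf/1.1}"
    REPOSITORY_WS = "https://caesars-repo.example.gov/ws/fdcc-results"   # assumed endpoint

    def extract_deviations(xccdf_results_path):
        """Return the IDs of rules whose result element is 'fail'."""
        tree = ET.parse(xccdf_results_path)
        deviations = []
        for rule_result in tree.iter(XCCDF_NS + "rule-result"):
            result = rule_result.find(XCCDF_NS + "result")
            if result is not None and result.text == "fail":
                deviations.append(rule_result.get("idref"))
        return deviations

    def submit(asset_id, deviations):
        """Send the abridged deviations over the secured (HTTPS) web service."""
        payload = json.dumps({"asset": asset_id, "deviations": deviations}).encode("utf-8")
        request = urllib.request.Request(REPOSITORY_WS, data=payload,
                                         headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            return response.status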

2.2.1.2 Assessing Compliance to Agency-Defined Security Configuration Baseline


Prior to SCAP, agencies documented their security configuration baseline by using CIS
Benchmarks and DISA STIGs. These security configuration checklists can now be defined using
SCAP. Currently, there are 18 security configuration checklists in the NVD repository for
SCAP-validated security configuration assessment tools. Many vendors that offer Authenticated
Security Configuration Scanners such as CIS and McAfee also provide security configuration
benchmarks in SCAP.19
Most, if not all, Authenticated Security Configuration Scanner tools are also FDCC Scanners; this
is because FDCC is just a subset of the security configuration baseline. Under the NIST
Validation Program test requirements, these tools must also produce results in the Open
Vulnerability and Assessment Language (OVAL) Results format.
The assessment results from the use of a NIST SCAP-validated Authenticated Security
Configuration Scanner are required to be produced in XCCDF results format; however,
depending upon the agencys reporting needs, these tools can also produce finding results in
OVAL Results format and OVAL System Characteristics format.
Target of Evaluation:
Server Operating Systems:
Windows-based: Microsoft Windows Server 2003 (32- and 64-bit platforms) and
Windows Server 2008 (32- and 64-bit platforms)
19

Security configuration benchmarks currently available in SCAP are enumerated in Appendix D.

UNIX-based: Sun Solaris 10, HP-UX 11, IBM AIX 6, RedHat EL 5, Debian Linux 5,
and Apple Mac OS X Snow Leopard Server Edition
Network Operating System: Cisco IOS 12
Workstation Applications:
Web Browsers: Microsoft IE 7 and 8 and Mozilla Firefox
Office Productivity Applications: Microsoft Office 2003 and 2007
Anti-Virus Systems: Symantec Norton AntiVirus 10.0 and 9.0 and McAfee
VirusScan 7.0
Virtualized Servers:
Windows-based: Microsoft Windows Server 2003 (32- and 64-bit platforms) and
Windows Server 2008 (32- and 64-bit platforms)
UNIX-based: Sun Solaris 10, HP-UX 11, IBM AIX 6, RedHat EL 5, and Debian 5
Required Inputs:
Asset inventory baseline
Agency-defined security configuration baselines (described using NIST-specified
SCAP)20
Required Outputs:
XCCDF Results format using NIST-specified or agency-specified XML schema
definition
Required Interface:
Database/Repository Subsystem ESB-provided secured WS

20

Currently, there are over 130 security configuration benchmarks available in NVD; 18 security configuration
benchmarks are written in SCAP. Agencies such as NSA and IRS are in the process of generating additional SCAP
benchmarks. In addition, SCAP-validated product vendors such as CIS, McAfee, and NetIQ are providing their own
SCAP benchmarks for agencies to tailor.

Figure 6. Contextual Description of Interfaces Between an Authenticated Security Configuration Scanner Tool and the
Database/Repository Subsystem

2.2.2 Security Assessment Tools for Assessing Patch-Level Compliance


Timely implementation of software patches is one of the most critical IT operational tasks for
maintaining the security posture of an enterprise, yet it may be one of the most complex. Because
not all software is constructed the same, not all patches can be implemented in the same way.
However, with the advent of OVAL, agencies can now verify and validate their implementation
of patches. Major COTS vendors and IT service providers are now providing OVAL content
that describes how and where their software patches are installed.
The Authenticated Vulnerability and Patch Scanner is designed to use the patch-level baseline
described in OVAL to verify and validate that the software patches have been implemented in an
agency's computing assets.
Target of Evaluation:
Workstation and Laptop Operating Systems: Microsoft Windows-based computing
assets:
Windows XP
Windows Vista
Windows 7
Server Operating Systems:
Windows-based: Microsoft Windows Server 2003 (32- and 64-bit platforms) and
Windows Server 2008 (32- and 64-bit platforms)
UNIX-based: Sun Solaris 10, HP-UX 11, IBM AIX 6, RedHat EL 5, Debian Linux 5,
and Apple Mac OS X Snow Leopard Server Edition
Network Operating System: Cisco IOS 12
Workstation Applications:
Web Browsers: Microsoft IE 7 and 8 and Mozilla Firefox
Office Productivity Applications: Microsoft Office 2003 and 2007
Anti-Virus Systems: Symantec Norton AntiVirus 10.0 and 9.0 and McAfee
VirusScan 7.0
Virtualized Servers:
Windows-based: Microsoft Windows Server 2003 (32- and 64-bit platforms) and
Windows Server 2008 (32- and 64-bit platforms)
UNIX-based: Sun Solaris 10, HP-UX 11, IBM AIX 6, RedHat EL 5, and Debian 5
Required Inputs:
Asset inventory baseline.
Agency-defined software patch baselines (described using OVAL).21
Required Outputs:
OVAL Results format using MITRE or agency-specified XML schema definition
Required Interface:
Database/Repository Subsystem ESB-provided secured WS
Figure 7. Contextual Description of Interfaces Between an Authenticated Vulnerability and Patch Scanner Tool and the
Database/Repository Subsystem

21

Currently, vendors are expected to describe software patches in OVAL. NIST NVD maintains a repository of software
patches in OVAL, so agencies and tool vendors can download the latest patch list.
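As a hypothetical illustration of how a tool might consume OVAL Results for patch verification, the sketch below collects each definition's evaluated result. Whether a given result value indicates a missing patch depends on how the definition is authored, so the sketch only gathers results for comparison against the agency-defined patch baseline.

    # Hypothetical sketch: collect evaluated results from an OVAL Results document.
    # The interpretation of "true"/"false" depends on each definition's authoring.
    import xml.etree.ElementTree as ET

    RESULTS_NS = "{http://oval.mitre.org/XMLSchema/oval-results-5}"

    def patch_results(oval_results_path):
        """Map each OVAL definition ID to its evaluated result string."""
        tree = ET.parse(oval_results_path)
        outcome = {}
        for definition in tree.iter(RESULTS_NS + "definition"):
            outcome[definition.get("definition_id")] = definition.get("result")
        return outcome

    # A patch-compliance report could then compare this mapping against the
    # agency-defined baseline list of OVAL definition IDs.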

2.2.3 Tools for Discovering and Identifying Security Vulnerabilities


2.2.3.1 Unauthenticated Vulnerability Scanners
The Unauthenticated Vulnerability Scanners are the typical network-based vulnerability scanners
that leverage the Nmap-like port scanning mechanism to develop a profile of networked
computing assets in an enterprise. After a profile has been developed, these vulnerability
scanners begin the process of analyzing and probing for potential vulnerabilities.
Target of Evaluation:
Networked computing assets in an enterprise22
Required and Optional Inputs:
The latest CVE dictionary; this may be optional in some unauthenticated vulnerability
scanners where the tool has its own proprietary form of CVE dictionary
Required Outputs:
Discovered vulnerabilities identified using CVE ID and description
Required Interface:
CAESARS Database/Repository Subsystem ESB-provided secured WS or WS Adapter
Figure 8. Contextual Description of Interfaces Between an Unauthenticated Vulnerability Scanner Tool and the
Database/Repository Subsystem

22

Unlike the Authenticated Security Configuration Scanners and Authenticated Vulnerability and Patch Scanner,
network-based vulnerability scanners do not need defined security configuration and patch-level baselines.
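The profiling step that these scanners begin with can be sketched very simply; the following hypothetical example performs a basic TCP connect scan over a few well-known ports. A real unauthenticated scanner layers service fingerprinting and vulnerability probing on top of such a profile, and any scan should, of course, be run only against assets the agency is authorized to assess.

    # Hypothetical sketch of the asset-profiling step only: a TCP connect scan
    # over a handful of well-known ports. Host and port values are examples.
    import socket

    def profile_host(host, ports=(22, 80, 443, 445, 3389), timeout=1.0):
        """Return the host and the subset of probed ports that accepted a connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    open_ports.append(port)
        return {"host": host, "open_ports": open_ports}

    # Example: profile_host("192.0.2.10") against an authorized target.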

2.2.3.2 Web Vulnerability Scanners


Today, most service-oriented architecture (SOA) IT infrastructures are implemented using web
services technology. The Web Vulnerability Scanner is a new generation of vulnerability scanner
designed to discover and identify vulnerabilities in web services. Examples of this type of
technology are Nikto, Paros Proxy, WebScarab, WebInspect, Acunetix WVS, and AppScan.
Authentication is required for in-depth exploitation of web application vulnerabilities.
Target of Evaluation:
Web Servers: Microsoft IIS 6.0, 7.0, and 7.5 and Apache 2.0 and 1.3
Web Application Servers: Apache Tomcat 5.5 and 6.0 and BEA WebLogic Server 8.1
and 7.0
Required and Optional Inputs:
The latest CVE dictionary; this may be optional in some unauthenticated vulnerability
scanners where the tool has its own proprietary form of CVE dictionary
The latest CWE dictionary23 is optional; not all web vulnerability scanners are designed
to exploit program code in web applications
Required and Optional Outputs:
Discovered vulnerabilities identified using CVE ID and description.
Web application vulnerabilities described using CWE are optional; not all web
vulnerability scanners are designed to exploit program code in web applications

23

Common Weakness Enumeration (CWE) provides a common nomenclature and identification for software security
control implementation weaknesses.

Required Interface:
Database Subsystem ESB-provided secured WS or WS Adapter

Figure 9. Contextual Description of Interfaces Between a Web Vulnerability Scanner Tool and the Database/Repository
Subsystem

2.2.3.3 Database Vulnerability Scanners


Similar to web vulnerability scanners, Database Vulnerability Scanners, such as DbProtect and
AppDetective, are a type of software application vulnerability scanner designed for database
management systems (DBMS). Authentication is usually required for in-depth exploitation of
DBMS application vulnerabilities. In addition, some SCAP-validated Authenticated Security
Configuration Scanners can assess the degree of compliance to an agency-specified security
configuration baseline.
Target of Evaluation:
Database Servers: Microsoft SQL Server 2005 and 2008; Oracle Database Server 9i, 10g,
and 11g; and IBM DB2 Version 8.0 and 9.5
Required and Optional Inputs:
The latest CVE dictionary
The latest CWE dictionary is optional; not all database vulnerability
scanners are designed to exploit program code in database presentation and business
logic layers
Required and Optional Outputs:
Discovered vulnerabilities identified using CVE ID and description
Database application vulnerabilities described using CWE are optional; not all database
vulnerability scanners are designed to exploit program code in database presentation and
business logic layers
Required Interface:
Database Subsystem ESB-provided secured WS or WS Adapter
Figure 10. Contextual Description of Interfaces Between a Database Vulnerability Scanner Tool and the
Database/Repository Subsystem

2.2.4 Tools for Providing Virus Definition Identification


Currently, management of virus definition updates is performed using an anti-virus system. To
verify whether a computing asset has the latest virus definition update requires an agent. Some
Authenticated Security Configuration Scanners can assess whether a computing asset has the
latest virus definition by examining the system log. This assessment information is then sent
back as a part of XCCDF Results.
Required Inputs:
Asset inventory baseline
Agency-defined security configuration baselines (described using NIST-specified SCAP)
Required Outputs:
XCCDF Results format using a NIST-specified or agency-specified XML schema
definition
Required Interface:
Database Subsystem ESB-provided secured WS

Figure 11. Contextual Description of Interfaces Between an Authenticated Security Configuration Scanner Tool and the
Database Subsystem
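A log-based virus-definition check of the kind described above could look roughly like the following sketch. The log path, log line format, and staleness threshold are all hypothetical; a real check would target the specific anti-virus product's log files or registry entries and report the result within the XCCDF Results.

    # Hypothetical sketch: derive virus-definition currency from a log file.
    # The log format, path, and threshold are assumptions, not product behavior.
    import datetime
    import re

    UPDATE_LINE = re.compile(r"definitions updated:\s*(\d{4}-\d{2}-\d{2})")

    def definition_age_days(log_path, today=None):
        """Return the age in days of the newest definition update found, or None."""
        today = today or datetime.date.today()
        latest = None
        with open(log_path, encoding="utf-8", errors="ignore") as log_file:
            for line in log_file:
                match = UPDATE_LINE.search(line)
                if match:
                    latest = datetime.date.fromisoformat(match.group(1))
        if latest is None:
            return None              # no update record found; itself a reportable finding
        return (today - latest).days

    # An age above an agency-chosen threshold (say, 7 days) would be reported
    # as part of the XCCDF Results sent to the Database/Repository Subsystem.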

2.2.5 Other Sensors


The following is found in NIST Frequently Asked Questions, Continuous Monitoring, June 1,
2010, http://csrc.nist.gov/groups/SMA/fisma/documents/faq-continuous-monitoring.pdf:
Organizations develop security plans containing the required security controls for their
information systems and environments of operation based on mission and operational
requirements. All security controls deployed within or inherited by organizational
information systems are subject to continuous monitoring.
Many of the security controls defined in NIST Special Publication 800-53, especially in
the technical families of Access Control, Identification and Authentication, Auditing and
Accountability, and Systems and Communications Protection, are good candidates for
monitoring using automated tools and techniques (e.g., the Security Content Automation
Protocol).
Many authoritative sources, including federal guidance (e.g., from NIST and OMB), industry guidance
(e.g., Gartner, the SANS [SysAdmin, Audit, Network, Security] Institute), and the expanding COTS
marketplace, make clear that continuous monitoring is trending toward more
sophisticated, deeper-probing monitoring. The CAESARS reference architecture is flexible
enough to support types of sensors other than those listed here, and the list of sensor types
currently enumerated in this document should not be construed as being either definitive or
exhaustive. This reference architecture will not undertake to dictate, or even to predict, what
other types of sensors may be required and/or available in the future.

Nonetheless, the current set of sensor types highlights some general properties that continuous
monitoring components have in common, and which other component types will share,
regardless of their specific function or level of sophistication. The most obvious common
property is that all of the continuous monitoring components consist essentially of three elements
that can be represented in the CAESARS reference architecture:
1. A pre-established standard that is stored in the CAESARS Database/Repository
Subsystem in the Repository of System Configuration Baselines; the standard may be
established by the sensor itself (e.g., an anti-virus signature file and its associated update
schedule) or by an outside source (e.g., FDCC or NVD)
2. A means of interrogating each relevant platform (whether by agent, proxy, or hybrid) to
determine whether the standard is being correctly applied/maintained
3. A means of reporting discrepancies (in SCAP-compatible format) into the CAESARS
Database of Findings, plus the means to analyze, score, and report the findings to the
appropriate CAESARS users
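These three common elements can be expressed generically, as in the hypothetical sketch below; the function and parameter names are illustrative only and do not correspond to defined CAESARS interfaces.

    # Hypothetical sketch of the three common elements: (1) a stored standard,
    # (2) a means of interrogating the platform, (3) a means of reporting findings.
    def evaluate_sensor(standard, observe, report):
        """standard: expected values (from the Repository of System Configuration
        Baselines); observe: callable returning the platform's actual values;
        report: callable that records each discrepancy as a finding."""
        actual = observe()
        for item, expected in standard.items():
            if actual.get(item) != expected:
                report({"item": item, "expected": expected, "actual": actual.get(item)})

    # Example with an invented configuration standard:
    # evaluate_sensor({"password_min_length": 12},
    #                 observe=read_platform_settings,     # hypothetical interrogation
    #                 report=send_to_findings_database)   # hypothetical reporting path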
With this framework in mind, it is possible to gain some insight into which other controls (such
as those in NIST SP 800-53 or other sources) might, in the future, lend themselves to
implementations of continuous monitoring. Three such possibilities are listed below.
Maintenance. It may be possible to establish hardware maintenance and configuration
management standards for each platform and place these into the CAESARS Repository
of System Configuration Baselines. Platform sensors may be able to record maintenance
actions and determine whether hardware maintenance and configuration are up-to-date.
Discrepancies exceeding pre-defined limits (e.g., preventive maintenance actions that are
more than 15 days late) would be reported to CAESARS as a finding, as would
maintenance events not corresponding to authorized actions.
Analysis of Audit Logs. The accepted best engineering practice for analyzing audit logs
calls for logs to be examined on a regular and timely basis and for recorded events to be
assigned a severity level. High-severity audit events are reported to appropriate
administrators immediately and are identified as trackable action items, requiring analysis
and closure within a reasonable time period. It may be possible to automate the audit log
analysis process, including the reporting, analysis, and resolution of high-severity audit
events, and to establish criteria such as those above as standards in the CAESARS
Repository of System Configuration Baselines. Platform sensors could determine whether
the logs are being examined regularly and whether high-severity audit events are being
closed within the required periods. Discrepancies would be reported to CAESARS as
findings.
Identity and Access Management. There is little currently available in the way of
automated continuous monitoring of identity and access management. These are very
serious problem areas and research into them is extensive and continuing, especially into
ways of verifying their correctness and reducing their complexity. Future product
offerings may include automated identification of erroneous access control assignments
or of weak or compromised identity management, especially if standards for such
things could be developed as Configuration Baselines. Should that happen, these tools
might easily direct such findings to CAESARS for reporting and remediation.
Without attempting to predict specifically which other potential sensor types may be required or
available in the future, the CAESARS reference architecture is designed to include them.

2.2.6 Sensor Controller


The Sensor Controller component is designed to control the automated security assessment
processes.
Figure 12 illustrates how a Sensor Controller such as NetIQ Aegis or Opsware Process
Automation System is used:
Step 1: Scheduling Automated Assessments. The ISSO works with the IT operations
manager to schedule the automated security assessments based on the organization's
security awareness needs. The Sensor Subsystem performs periodic security
configuration compliance assessments in accordance with this pre-determined schedule.
Step 2: Automated Assessments. The Sensor Subsystem performs automated security
assessments of all the computing assets according to the asset portfolio information
stored in the System Configuration Baseline Repository. Each security configuration
compliance assessment is based on the assigned security configuration baseline.
Step 3: Collection and Storage of Assessment Results. The security assessment results are
stored in the Database of Findings in the Database/Repository Subsystem to support the
risk analysis, scoring, and reporting processes.
Figure 12. Contextual Description of Sensor Controller to Control Security Assessment Tools
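The three steps above can be summarized in a small scheduling sketch. The interval, the assess() call, and the storage function are placeholders introduced for illustration; in practice, products such as those named above supply these capabilities.

    # Hypothetical sketch of the Sensor Controller flow: schedule, assess, store.
    import time

    def run_controller(assets, assess, store_findings, interval_seconds=86400):
        """Step 1: run on a pre-determined schedule (a fixed interval here);
        Step 2: assess each asset against its assigned baseline;
        Step 3: store the results for risk analysis, scoring, and reporting."""
        while True:
            for asset in assets:                        # each asset carries its baseline ID
                findings = assess(asset)                # placeholder for the sensor invocation
                store_findings(asset["id"], findings)   # into the Database of Findings
            time.sleep(interval_seconds)                # wait for the next scheduled run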

2.3 Recommended Technology in the Sensor Subsystem


Based on observed instances of enterprise deployment of continuous monitoring systems, the
CAESARS Sensor Subsystem includes four recommended technology implementation
configurations: Agent-Based, Agentless, Proxy-Hybrid, and NAC-Remote. These recommended
technology implementation configurations are designed to support the following operating
environment:
Diverse networked domains, in which an enterprise is composed of multiple networked
domains that may or may not have trusted relationships
Geographically diverse networks, in which a geographically diverse enterprise is
interconnected through networks that may or may not have sufficient bandwidth to support
continuous monitoring.
Disconnected computing assets, which are not continuously connected to an agency's enterprise even
though the agency has to account for them

2.3.1 Agent-Based Configuration


This deployment configuration is designed so that an endpoint agent resides in the target
computing asset. The endpoint agent periodically, or on demand, assesses the security
configuration compliance of the computing asset. Then, the assessment data are sent via a trusted
communication channel, using a NIST FIPS 140-2-certified cryptographic module, to the
CAESARS Database/Repository Subsystem for aggregation and analysis, as shown in Figure 13.
Figure 13. Agent-Based Deployment Configuration

The advantages of this agent-based configuration are:


Near-real-time continuous monitoring. The endpoint agent resident in a computing asset
can monitor security configuration parameters continuously or periodically and report
compliance to the CAESARS Database/Repository Subsystem.
Deep visibility to security configuration compliance data. Because the agent resides
locally on a computing asset and is executed as a system process, it usually has direct
access to the security kernel; thus, it has greater visibility to privileged system
configuration files.
Complex management tasking. The agent is a mission-specific software application; it
may be programmed to perform complex management tasks such as checking the host
platform's security configuration settings, modifying security configuration
parameters, and enforcing the installation of security patches.
Low network utilization. Agents can perform a programmed assessment process
autonomously, without direct connection or instruction from the CAESARS
Database/Repository Subsystem. Hence, the security compliance data can be collected
and stored within the target computing asset until the CAESARS Database/Repository
Subsystem requests them.
Security. Endpoint agents can establish their own trusted communications channel (via
Secure Shell [SSH]) to the CAESARS Database/Repository Subsystem.
The disadvantages of the agent-based deployment configuration are:
Higher operating costs. Endpoint agents are required to be installed on target computing
assets. For a large enterprise, this entails greater operating costs for installation and
maintenance (i.e., patching and upgrades). This is one of the key drivers to replace the existing
IRS legacy solution, Policy Checkers.
Incompatibilities between agents. An endpoint agent is a mission-specific application
that runs on top of an operating system (OS). A host may run multiple agents, such as one
for anti-virus (e.g., Symantec Anti-Virus System), one for intrusion detection (e.g., ISS
RealSecure or Check Point ZoneAlarm), and another one for configuration management
(e.g., Tivoli Configuration Manager). Introduction of an endpoint agent may disrupt or
interfere with existing agents.
Endpoint agents require configuration management. The agent is software, and so it may
require software distribution services to manage configurations. Therefore, the
CAESARS Database/Repository Subsystem requires an agent distribution program to
track and maintain agent configurations throughout an enterprise.
Installation of an agent requires permission from system owners. The system owner may
not wish to add complexity and dependency to his/her information system because
information security is not his/her primary mission.
Use Case for Agent-Based Configuration
Diverse networked domains, in which an enterprise is composed of multiple networked
domains that may or may not have trusted relationships
Geographically diverse networks, in which a geographically diverse enterprise is
interconnected through networks that may or may not have sufficient bandwidth to support
continuous monitoring
Disconnected computing assets, which are not continuously connected to an agency's enterprise even
though the agency has to account for them

2.3.2 Agentless Configuration


In an agentless deployment configuration, a Sensor Subsystem component is co-located with the
target computing assets in a trusted network domain such as the data center, local campus, or an
office building (see Figure 14).
Figure 14. Agentless Deployment Configuration

The advantages of an agentless deployment configuration are:


Ease of deployment. An agentless solution requires no software installation.
Non-intrusive. An agentless solution has no agent, and so it requires no CPU utilization
from targeted computing assets.
Low operating costs. With no local agent, there is no maintenance or software
distribution issue.
The disadvantages of an agentless deployment configuration are:
Shallow/limited visibility to security configuration compliance data. Agentless
assessments can only be performed through interrogation of the target computing asset's
OS. The CAESARS Sensor Subsystem cannot validate privileged system files, such as:
Windows actions and reports
List Instant Messenger Applications report
Users with Weak Passwords report
Users with Password = User Name report
Users without a Password report
Users with Password Too Short report
Set Disk Quota for User action
Show User Quota for a Specified Volume report
Windows security checks
Accounts with Password Equal to Any User Name
Accounts with Password Equal to User Name
Accounts with Password Equal to Reverse User Name
Accounts with Short Passwords


Accounts with Blank Passwords
Instant Messenger Setting
Queries of the Port object
Any default port scan reports, such as the Port Scan (TCP/UDP Endpoints) report
Queries of the HKLM/Current User registry hive or any reports that rely on that hive
Encryption of data. Encryption of the transmitted data, which is potentially sensitive in
nature, is not guaranteed by the application running in agentless mode; it depends on the
OS's settings, for example, the Windows Group Policy Object setting "System cryptography:
Use FIPS-compliant algorithms for encryption, hashing, and signing."
Network reliance. Agentless security configuration compliance assessments require the
network. Compared to an agent-based solution, an agentless solution requires significantly
more network usage.
Impact on other security mechanisms. Agentless security configuration compliance
assessments may cause false alarms to an intrusion detection system (IDS), and, in some
cases, compromise perimeter security if the target computing assets are external to the
trusted network domain.
Use Case for Agentless Configuration
Diverse networked domains, in which an enterprise is composed of multiple networked
domains that may or may not have trusted relationships.
Geographically diverse networks, in which a geographically diverse enterprise is
interconnected through networks that may or may not have sufficient bandwidth to support
continuous monitoring
Disconnected computing assets, which are not continuously connected to an agency's enterprise even
though the agency has to account for them.

2.3.3 Proxy-Hybrid Configuration


In a proxy-hybrid deployment configuration (of the CAESARS Database/Repository
Subsystem), a proxy appliance is co-located in the same trusted network domain as the target
computing assets. Therefore, a proxy-hybrid configuration may perform agentless interrogation
or command the endpoint agents in the target computing assets for the security configuration
compliance data. After the assessment data have been collected, the proxy appliance will
establish a trusted communications channel, via a NIST FIPS 140-2-validated cryptographic
module, to transfer the collected assessment data to the primary CAESARS Database/Repository
Subsystem, as shown in Figure 15.

Figure 15. Proxy-Hybrid Deployment Configuration Agentless

Use Case for Proxy-Hybrid Configuration


Diverse networked domains, in which an enterprise is composed of multiple networked
domains that may or may not have trusted relationships

2.3.4 NAC-Remote Configuration


In a NAC-remote deployment configuration, the target computing assets, such as laptops in a
remote office, have no direct access to a trusted
network domain. In this case, the CAESARS endpoint agent provides both security for
transporting the security configuration compliance data and Network Admission Control (NAC)
for controlled admission to the trusted network domain (e.g., an agency's Wide Area Network),
as shown in Figure 16.
Figure 16. NAC-Remote Deployment Configuration Agent-Based

Use Case for NAC-Remote Configuration


Geographically diverse networks, in which a geographically diverse enterprise may be
interconnected through networks that may or may not have sufficient bandwidth to
support continuous monitoring
Remote office. Some computing assets may be connected to the agency's network only
remotely (e.g., over public channels such as the Internet). The data being reported is
sufficiently sensitive that its confidentiality, integrity, and availability must be ensured
using explicit security measures, such as a VPN.

3. CAESARS Database
3.1 Goals
The goals of the CAESARS Database are as follows:

3.1.1 Raw Data Collected and Stored Completely, Accurately, Automatically, Securely, and in a Timely Manner
The purpose of the CAESARS Database is to serve as the central repository of all CAESARS
data, including both raw data that is reported from the sensors on the individual platforms of the
Sensor Subsystem and data that is developed within the CAESARS Analysis/Risk Scoring
Subsystem as a result of cleansing, pre-processing, analysis, and scoring processes. The
CAESARS Database also contains active executable elements, including components to interface
with individual platform sensors and stores and to transmit the raw data from the individual
platform to the CAESARS Database. This transmission function is designed to fulfill several
criteria, including automated collection functions to ensure that the data collected is timely and
complete and also to ensure that the transmission schedules do not interfere with the platforms'
normal operations. It also includes security functions to ensure the confidentiality and integrity
of the data (which may be sensitive).

3.1.2 Modular Architecture


The CAESARS Database, as the central repository of all CAESARS data, is the only component
of CAESARS that interfaces directly with the other CAESARS components: the Sensor
Subsystem, the Analysis/Risk Scoring Subsystem, and the Presentation and Reporting
Subsystem. Each of these other CAESARS components must interface with the Database using a
strictly enforced protocol. In particular, the platform sensors and their associated native data
stores transmit data to the CAESARS Database only via SOA ESB Web Services and
specifically designed Adapters. On the other hand, the other CAESARS components are not
permitted any direct communication or design dependence on each other. This design allows
nearly complete independence of the implementation of the other components: Multiple
instances of platform sensors and native stores can all feed data into the CAESARS Database,
and each of the CAESARS Analysis/Risk Scoring and Presentation and Reporting Subsystems
can consist of multiple instances of differing technologies, all interfacing with the central
CAESARS Database through its inherent protocol, yet with all instances being completely
independent of each other in design technology, execution, and maintenance.

3.2 Objects and Relations


The CAESARS Database contains, at a minimum, the objects and relations shown in Figure 17
below.

Figure 17. Contextual Description of Database/Repository Subsystem

3.2.1 Repository of Asset Inventory Baselines


The Repository of Asset Inventory Baselines contains, at a minimum, the full authorized
inventories of hardware and software upon which CAESARS is expected to report. The
Repository of Asset Inventory Baselines need not be a CAESARS-dedicated capability; it may
be part of another capability (such as one for enterprise-wide hardware and/or software
Configuration Management, and/or one for software patch/update management) as long as the
capabilities described below are provided.
The hardware inventory is detailed to the resolution of individual platforms, at a minimum
(since this is the level at which CAESARS is expected to report), and contains all of the platforms
on which CAESARS is to report. (Thus, for example, if the Findings Database reveals the
connection of a platform for which there is no Asset Inventory Baseline, that platform may be
deemed unauthorized and treated as posing a risk.)
The initial resolution of the hardware inventory will be limited to individual platforms, since that
is the level at which CAESARS is expected to report other findings. In follow-on
implementations, the hardware inventory may contain information on required hardware versions
and updates. Thus, CAESARS could identify hardware updates and versions and identify
deviations between actual and expected configurations. This could be accomplished by
comparing the actual configuration in the Findings Database with the expected configuration in
the hardware inventory.
The software inventory contains an Asset Inventory Baseline of all authorized software items on
which CAESARS is expected to report. For any specific software configuration item (CI), the
software inventory contains, at a minimum, the full identity of the CI, including vendor name
and product identity, and the release/version of the product, to whatever resolution CAESARS is
expected to differentiate within the product.
The software inventory is related, via a unique software CI ID, to a patch/update database that
contains information on all patches and updates to all authorized software CIs. The
patches/updates database also contains information as to which CIs require periodic updates
(e.g., the date of the most recent update to the signature file of an anti-virus CI). Thus,
CAESARS can identify software patches and updates and identify deviations between actual and
expected configurations by comparing the actual software configurations in the Findings
Database with the expected configuration in the software inventory and patch/update database.
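A minimal sketch of the relations just described follows; the field names are assumptions chosen for illustration and do not represent an actual CAESARS schema.

    # Hypothetical sketch of the software inventory, the related patch records,
    # and a deviation check against what the Findings Database reports as installed.
    from dataclasses import dataclass

    @dataclass
    class SoftwareCI:
        ci_id: str        # unique software CI ID (the relating key)
        vendor: str
        product: str
        version: str

    @dataclass
    class PatchRecord:
        ci_id: str        # refers back to SoftwareCI.ci_id
        patch_id: str

    def missing_patches(expected_patches, installed_patch_ids):
        """Return expected patch IDs that the platform does not report as installed."""
        return [p.patch_id for p in expected_patches if p.patch_id not in installed_patch_ids]

    # Example with invented identifiers:
    # av = SoftwareCI("CI-0042", "ExampleVendor", "ExampleAV", "10.0")
    # expected = [PatchRecord("CI-0042", "SIG-2010-09-01")]
    # missing_patches(expected, {"SIG-2010-08-15"})   ->   ["SIG-2010-09-01"]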

3.2.2 Repository of System Configuration Baselines


The Repository of System Configuration Baselines contains detailed information on the full
expected hardware and software configurations for each platform on which CAESARS is
expected to report. Security vulnerabilities in specific software CIs are obtained from the NVD
(see below) and other sources. The Repository of System Configuration Baselines is related to
the Asset Inventory Baseline Repository for the purpose of identifying platforms and software
CIs, but it also contains more detailed information on the expected security configurations,
particularly of user platform OSs.
The Repository of System Configuration Baselines also contains expected, platform- and
product-independent configuration data from sources such as the NVD and the Federal Desktop
Core Configuration standard. Thus, CAESARS can identify deviations between actual and
expected full configurations by comparing the actual configurations in the Findings Database
with the expected configuration in the Repository of System Configuration Baselines database.
In addition to individual platform configurations, the Repository of System Configuration
Baselines contains information on connectivity of platforms, including network connectivity and
configurations, the placement of platforms in specific security zones, and connectivity to other
platforms such as servers and firewalls. This interconnectivity information can be used to
determine, for example, how one platform can provide inherited security controls to another, or
how a vulnerability on one platform may pose a risk to other interconnected platforms. The
Repository of System Configuration Baselines can serve as a primary source of information in
creating the System Security Plan(s) for the various portions of the enterprise.
3.2.3 National Vulnerability Database


The NVD is obtained, and frequently updated, from the NIST website. It contains information on
known security vulnerabilities in specific software products.
In addition, some organizations may have available to them a similar capacity to deal with
proprietary vulnerability information relating to publicly available hardware and software, as
well as custom applications, that are used only within that organizational context. Integrating the
NVD with such proprietary/custom information to create a combined organization-specific
vulnerability database is not addressed in this document, but may be considered in future
enhancements to CAESARS.

3.2.4 Database of Findings


The CAESARS Findings Database consists of two parts. First, the raw findings from the Sensor
Subsystem are collected on a periodic, continuing basis from the computing assets. Second, the
Findings Database contains the results of the Analysis/Risk Scoring Subsystem described in the
next section of this document. The Findings Database is also charged with coordinating and
scheduling these processes in order to ensure that (i) the collection of raw data from the Sensor
Subsystem is sufficiently frequent but does not interfere with the normal operations of
computing assets and that (ii) the Risk Analysis and Risk Scoring processes (which may be quite
computation- and time-intensive) have sufficient lead-time to provide their results to the
Presentation Engine as quickly as possible.
In general, there are two types of assessment findings: deviations from security configuration
baselines (i.e., misconfigurations) and vulnerabilities. The security configuration assessment
tools report findings in XCCDF Results format using NIST-defined and agency-defined XML
schema via ESB-provided web services. The vulnerability findings are reported in a structured
format, such as the Assessment Results Format (ARF) or a proprietary format associated with a
specific product, using HTML with embedded hyperlinks to CVE via a web service adapter.
Figures 18 through 23 and the accompanying text describe the interfaces between the database
subsystem and the range of sensor components listed above.

Figure 18. Contextual Description of Interfaces Between the Database Subsystem and an FDCC Scanner Tool

Required Output to an FDCC Scanner Tool:


Asset inventory baseline
Agency-defined security configuration baselines (described using NIST-specified SCAP)
Required Input from an FDCC Scanner Tool:
OMB/NIST-specified XCCDF Results format and reporting spreadsheet
XCCDF Results format using NIST-specified or agency-specified XML schema
definition
Deviations in NIST/agency provided XML schema
Required Interface:
Secured WS from the FDCC Scanner Tool via TLS 1.0 or SSL 3.1
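
As a rough illustration of the reporting half of this interface, the sketch below assumes a hypothetical ESB endpoint URL, a locally produced XCCDF results file, and the availability of the Python requests library; it is not a definitive implementation of the CAESARS web services, and credential handling is omitted.

    # Hedged sketch: the endpoint URL and file name are hypothetical; the
    # architecture only requires a secured web service over TLS.
    import requests

    def submit_xccdf_results(results_path, endpoint="https://esb.example.gov/caesars/findings"):
        """POST an XCCDF Results document to the Database Subsystem web service."""
        with open(results_path, "rb") as f:
            response = requests.post(
                endpoint,
                data=f.read(),
                headers={"Content-Type": "application/xml"},
                verify=True,   # validate the server certificate over TLS
                timeout=30,
            )
        response.raise_for_status()
        return response.status_code

    # Example: submit_xccdf_results("fdcc-scan-results.xml")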

Figure 19. Contextual Description of Interfaces Between the Database Subsystem and an Authenticated Security
Configuration Scanner Tool

Required Outputs to an Authenticated Security Configuration Scanner Tool:


Asset inventory baseline
Agency-defined security configuration baselines (described using NIST-specified SCAP)
Required Inputs from an Authenticated Security Configuration Scanner Tool:
XCCDF Results format using a NIST-specified or agency-specified XML schema
definition
Deviations in NIST/agency provided XML schema
Required Interface:
Secured WS from an Authenticated Security Configuration Scanner tool via TLS 1.0 or
SSL 3.1

Figure 20. Contextual Description of Interfaces Between the Database Subsystem and an Authenticated Vulnerability
and Patch Scanner Tool

Required Outputs to an Authenticated Vulnerability and Patch Scanner Tool:


Asset inventory baseline
Agency-defined software patch baselines (described using OVAL)
Required Inputs from Authenticated Vulnerability and Patch Scanner Tool:
OVAL Results format using a MITRE- or agency-specified XML schema definition
Deviations in NIST/agency provided XML schema
Required Interface:
Secured WS from Authenticated Vulnerability and Patch Scanner tool via TLS 1.0 or
SSL 3.1

Figure 21. Contextual Description of Interfaces Between the Database Subsystem and an Unauthenticated Vulnerability
Scanner Tool

Required Outputs to an Unauthenticated Vulnerability Scanner Tool:


The latest CVE dictionary
Required Inputs from an Unauthenticated Vulnerability Scanner Tool:
Discovered vulnerabilities identified using CVE ID and description
Required Interface:
Secured WS from an Unauthenticated Vulnerability Scanner tool via TLS 1.0 or SSL 3.1

Figure 22. Contextual Description of Interfaces Between the Database Subsystem and a Web Vulnerability Scanner Tool

[Figure 22 depicts the Database Subsystem (Repository of System Configuration Baselines and Database of Findings) exchanging data with a Web Vulnerability Scanner through WS adapters on the SOA ESB over HTTP using TLS 1.0/SSL 3.1. The Database Subsystem contains the latest CVE dictionary from the NVD (although the vulnerability scanner may use its own) and collects the assessment results for situational awareness; the agency may choose to correlate CVE IDs to patches in an OVAL repository. Assessment results consist of vulnerability findings described using CVE IDs and an agency-provided XML schema.]

Required and Optional Outputs to a Web Vulnerability Scanner Tool:

The latest CVE dictionary
Optional: the latest CWE dictionary
Required and Optional Inputs from a Web Vulnerability Scanner Tool:
Discovered vulnerabilities identified using CVE ID and description
Optional: web application vulnerabilities described using CWE
Required Interface:
Secured WS from a Web Vulnerability Scanner tool via TLS 1.0 or SSL 3.1

Figure 23. Contextual Description of Interfaces Between the Database Subsystem and a Database Vulnerability Scanner
Tool

Required and Optional Outputs to a Database Vulnerability Scanner Tool:

The latest CVE dictionary
Optional: the latest CWE dictionary
Required and Optional Inputs from a Database Vulnerability Scanner Tool:
Discovered vulnerabilities identified using CVE ID and description
Optional: database vulnerabilities described using CWE
Required Interface:
Secured WS from a Database Vulnerability Scanner tool via TLS 1.0 or SSL 3.1

4. CAESARS Analysis Engine


4.1 Goals
The CAESARS Analysis Engine is an executable component of the CAESARS Analysis/Risk
Scoring Subsystem that is charged with identifying discrepancies between the CAESARS
System Configuration Baselines and the CAESARS Database of Findings. The purpose of the
CAESARS Analysis Engine is to perform any needed cleansing (i.e., correlating, harmonizing,
and de-conflicting) of the Database and to allow the Database to be queried for data that are
reported up to the Database from the IT assets in the Sensor Subsystem. The cleansing of the
CAESARS Database involves a number of analytic steps, including:
Correlating and de-conflicting raw data from multiple platform sensors that may be
providing duplicate or conflicting measurements of the same information
Correlating and de-conflicting duplications and discrepancies between the raw actual data
provided by the sensors and the expected standards contained in the Repository of
System Configuration Baselines, in order to avoid such errors as counting the same
vulnerability twice
Harmonizing discrepancies among items of raw data that may have been collected at
different times
Performing any other analyses and calculations that may be computation- and time-intensive
It is important to distinguish the CAESARS Analysis Engine from the CAESARS Risk Scoring
Engine. The CAESARS Analysis Engine functions only to cleanse and clarify the meaning of the
data and remove inherent duplications and potential conflicts. These Analysis functions are
interpretation-neutral in that no information is removed or discarded and the operations
performed are not dependent upon any particular interpretation of the data. All interpretation-dependent functions, such as data reduction, weighting, and the relative scoring of different
portions of data, are reserved for the CAESARS Risk Scoring Engine. The Analysis Engine does
perform some calculations that may be required by the Risk Scoring Engine and that are time- and computation-intensive, but the original data are always left intact for later drill-down
analyses. Other analytical tools may also be appropriately included in the Analysis Engine, so
that additional value may be derived from the database, but in general, the functions of the
Analysis Engine are not user-selectable, as are those of the Scoring Engine.
The design goals of the CAESARS Analysis Engine are detailed below.

4.1.1 Modular Analysis, Independent of Scoring and Presentation Technologies


The CAESARS Analysis Engine interfaces directly with, and is dependent upon, only the
CAESARS Database. Thus, the technologies that constitute the CAESARS Analysis Engine are
independent of those that are used by the Sensor Subsystem or the CAESARS Presentation and
Reporting Subsystem or any other portion of CAESARS. Ideally, the technologies of the
CAESARS Analysis Engine can be selected, installed, configured, and maintained in nearly
complete independence of the other CAESARS components, provided only that the CAESARS
Database protocols are followed.

4.1.2 Make Maximal Use of Existing, In-Place, or Readily Available Sensors


To the greatest extent feasible, the CAESARS analyses will use data that can be provided by
Sensor Subsystem sources that are already in place or that can be readily obtained and installed
with a minimum of central guidance. Many of these existing tools are already in use by local
administrators, who understand the tools and trust their conclusions. If an enterprise has a large
number of sites, regions, and processors, each with differing missions, configurations, etc.,
attempts to integrate highly specific data collection can involve difficulties and costs that exceed
the value of the integrated data.
On the other hand, it is clear that the goals of centralized analysis and limitation to existing
sensors can easily come into conflict, in both policy and technology. CAESARS is designed
expressly to allow an enterprise to strike a balance between the "top-down" mandates of
centralized analysis and the "bottom-up" reality of existing tools.

4.1.3 Harmonize Data from Different Sensors


CAESARS data may be collected from multiple sensors on the client platforms. These sensors
operate independently and their operations may overlap, so that the same weakness or
vulnerability may be reported and scored twice. One interpretation might see these data as
separately important, e.g., in ensuring that they can be independently corrected. An alternate
interpretation would be that this double reporting overstates the risk posed by the vulnerability
and creates an inaccurate picture of the platform. The CAESARS Analysis Engine provides the
capability to identify and harmonize such overlapping reporting without forcing a particular
interpretation of its importance. Until standards and interfaces for correlation of information are
available and widely adopted, some custom code will be required, including vendor proprietary
interfaces.
An example occurs in the analysis of security vulnerabilities and software patch management.
On the client platforms, security vulnerabilities are reported by one sensor, while patch
management is reported by another; in fact, the two sensors may be managed by different
organizations. Suppose that a particular patch was associated with correcting a specific
vulnerability, but that on a particular platform, this patch had not yet been installed. The resulting
situation would be reported twice: once for the existence of the vulnerability and once for the
failure to install the patch. On the one hand, it is highly desirable to record and report both facts,
so that the two organizations can take their respective corrective actions. On the other hand, this
double reporting, if scored as such, may overstate the risk posed to the platform. The CAESARS
Analysis Engine identifies and correlates these situations and ensures that they are recorded
consistently and independently, but the data from both sources is retained, so that the decision of
how to interpret and score them is left to the CAESARS Risk Scoring Engine.
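
A minimal sketch of this correlation is shown below, assuming illustrative finding records keyed by CVE ID; the real Analysis Engine would operate against the CAESARS Database rather than in-memory lists.

    # Hedged sketch: the record structure is illustrative. Both findings are
    # retained; the correlation only links them so that the Risk Scoring Engine
    # can decide how to score the situation.

    def correlate_vuln_and_patch(vuln_findings, patch_findings):
        """Cross-reference each reported vulnerability with the missing patch
        that remediates it, without discarding either record."""
        missing_by_cve = {p["remediates_cve"]: p["patch_id"]
                          for p in patch_findings if not p["installed"]}
        for v in vuln_findings:
            patch_id = missing_by_cve.get(v["cve_id"])
            if patch_id is not None:
                v["related_missing_patch"] = patch_id   # annotation, not removal
        return vuln_findings

    vulns = [{"cve_id": "CVE-2010-0001", "host": "ws-042"}]
    patches = [{"patch_id": "patch-nnn", "remediates_cve": "CVE-2010-0001", "installed": False}]
    print(correlate_vuln_and_patch(vulns, patches))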

4.1.4 Develop Analysis Results that Are Transparent, Defensible, and Comparable
The ultimate goal of Continuous Monitoring is to evaluate individual organizations, both in
relation to each other and in compliance to an established higher-level standard. Any attempt to
synthesize and reduce raw data into composite scores must be accepted by all parties as
transparent, defensible, and comparable. Analyses are transparent when the algorithms and
methodologies are widely known and agreed to. They are defensible when they can be directly
correlated to accepted risk management goals and principles. Finally, analyses are comparable
when all systems are evaluated consistently, and variations in the scoring methodologies are
allowed when, and only when, they are justified by genuine differences in the local situation.

4.2 Types of Raw Data and Data Consolidation and Reduction


Data that are relevant to continuous security monitoring come in a wide variety of formats, units
of measurement, and metric interpretations. The raw data may be far too voluminous to allow
detailed point-by-point comparisons. On the other hand, the differing types of data do not readily
combine. Broadly speaking, the raw data collected by these sensors fall into three types:
Binary data refers to data that is inherently of a yes/no or true/false observation. For
example, "On platform x, running Windows XP Service Pack 3, patch nnn has either
been applied or it has not," or "On platform x, Security Compliance Check nnn has been
passed or it has not." As the examples indicate, much compliance monitoring is binary
data, and the volume of such data may be very large. The key observation about binary
data is that each item is unique; in general, there is no inherently correct way to compare
one item of binary data to another, e.g., to determine which one poses higher risk.
Ordinal data refers to data that has been categorized qualitatively into groupings such as
"Low," "Moderate," or "High" (for example, the assessments of confidentiality, integrity, and
availability in the FIPS-199 security sensitivity categorization). As with binary data,
comparisons among ordinal data items are problematic. Within each category, the
components are deemed to be approximately equal, but no attempt is made to quantify
the differences among categories; "High" risk is worse than "Moderate" risk, but with no
claim as to how much worse it is. Another example of ordinal data is the Common
Vulnerability Scoring System (CVSS) within the NVD. Even though the CVSS uses
numbers 1 through 10 as category identifiers and a 6 is indeed worse than a 3, there is
no interpretation that a 6 is twice as bad as a 3. Also as with binary data, much
compliance monitoring is ordinal data, and the volume may be very large.
Numeric data refers to data that can be directly compared. Numeric data can be cardinal
(e.g., simple counts such as the number of platforms in site x) or it can be direct
measurement (e.g., the number of days that Platform x is overdue for an update to
its anti-virus signature file). The key observation about numeric data is that it represents
not just an absolute measure, but a comparable one: if site x has twice as many platforms
as site y, or if the signature file on platform x is twice as old as that for platform y, it is
reasonable to conclude that x poses twice the risk of y. Moreover, numeric data can
often be reduced (e.g., by averaging over a large number of sources) without affecting its
interpretation.
These differences in data types affect continuous monitoring in two areas: the volume of data
that must be collected and stored and the need for comparability of the results, e.g., comparing
one platform to another or one site to another. Continuous monitoring can produce large volumes
of binary and ordinal data, which must be consolidated and reduced in order to be transmitted to
the CAESARS Database. Moreover, in order to make the data comparable, it has to be made
numeric. This can be done by accumulating them to simple totals (the number of High-risk
vulnerabilities discovered on a scan of platform x) or ratios (e.g., the percentage of compliance
checks that were failed on platform x). It can also be done by the assignment of weights to
different binary items or different classes of ordinal data. Accumulation and weighting schemes
allow binary and ordinal data to be compared to each other and to true numeric data.
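
The sketch below illustrates, with hypothetical weights, how binary and ordinal findings can be reduced to numeric values by counting and weighting; the particular weights are policy choices, as the next paragraph emphasizes.

    # Hedged sketch: the weights below are arbitrary illustrations, not CAESARS policy.

    ORDINAL_WEIGHTS = {"Low": 1, "Moderate": 3, "High": 9}

    def numeric_from_binary(check_results):
        """Count failed yes/no compliance checks (True = failed)."""
        return sum(1 for failed in check_results if failed)

    def numeric_from_ordinal(categories):
        """Accumulate ordinal findings into a single weighted number."""
        return sum(ORDINAL_WEIGHTS[c] for c in categories)

    failed_checks = [True, False, True, True]        # binary data
    vuln_severities = ["Low", "High", "Moderate"]    # ordinal data
    print(numeric_from_binary(failed_checks))        # 3
    print(numeric_from_ordinal(vuln_severities))     # 13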
However, the central feature of all binary and ordinal reduction methodologies must always be
emphasized: Every accumulation or weighting system represents an enterprise policy
interpretation whose rationale may be subtle and whose implications may be widespread
and serious. In particular, every such reduction methodology represents choices by one
enterprise that may or may not be appropriate for another. In this sense, data
consolidation/reduction can run precisely counter to the goals of transparency and defensibility
of the analyses.
It is for this reason that such a sharp distinction is made among the functions of the Sensor
Subsystem and the CAESARS Analysis Engine and Scoring Engine. The CAESARS paradigm
of data consolidation and reduction has implications at all three points:
The Sensor Subsystem can easily produce such a large amount of binary and ordinal data
that it might not be feasible to transmit all of this highly detailed data from the individual
sensor stores to the CAESARS Database. Accumulation or consolidation schemes may be
necessary simply to reduce the data to feasible volumes. But these schemes inherently
reduce options on interpretation, and they should be used carefully and judiciously. In
any case, the individual sensor stores must always retain the original raw data, since these
will be the primary source for correcting the findings on their original platforms.
The CAESARS Analysis Engine is limited to data cleansing operations that are, as much
as possible, interpretation-neutral, i.e., those that clarify or de-conflict the data in the
CAESARS Database without imposing any particular interpretation, and without altering
or removing any data that might be needed for alternative interpretations. In particular, all
Analysis Engine operations retain all Database data; analysis results are added to the
Database in addition to the raw data from which they are derived.
The CAESARS Risk Scoring Engine is the vehicle for data consolidation, reduction, and
scoring. It allows multiple tools to access the Database data and reduce the data to simple
scores that allow easy comparison among platforms, sites, organizations, or any other
structure that the user requires. In particular, the Risk Scoring Engine is the repository of
all interpretation-dependent operations (except for the data consolidation/reduction that is
inherent in the Sensor Subsystem.) The Scoring Engine does not alter the Database data
in any way, so that multiple scoring vehicles can be implemented without interference.

5. CAESARS Risk Scoring Engine


The purpose of the CAESARS Risk Scoring Engine is to produce information about the
configuration, compliance, and risks to which the Client Layer platforms are exposed, and then
to make this information available to the CAESARS Presentation Engine for presentation to the
user. The Risk Scoring Engine is an executable component of the CAESARS Database that is
charged with reducing the various components of the Findings Database to a small set of scores
for presentation to requesters using the Presentation and Reporting Subsystem. Risk Scoring can
be done to reflect both enterprise-wide and local criteria. Enterprise-wide risk scoring is done in
(at least) two ways:
Scoring risk by platform (and groups of platforms) allows the requestor to assess overall
risk faced by each platform, irrespective of whether there are possible countermeasures
that may be taken to reduce the risk. This is of primary concern in identifying
vulnerabilities for which there is no current feasible countermeasure, and for which the
enterprise must decide whether to accept the risk nonetheless. Platform Risk Scoring also
allows the requestor to assess the risks that a vulnerability on one platform may pose to
another platform (for example, how a misconfigured firewall may pose increased risk to
all of the platforms that it is intended to protect).
Scoring risk by responsible organization allows the requestor to identify the extent to
which responsible organizations are effectively carrying out the security countermeasures
that have been assigned to them. For example, a single organization might be charged
with configuring a software CI on two platforms that are not co-located or directly
connected; a misconfiguration of the CI nonetheless affects both platforms.
The Risk Scoring Engine also allows local administrators to perform their own specific scoring
to reflect local conditions, and to use the Presentation Engine to present this information to their
own organizations in locally specific ways.

5.1 Goals
The goals of the CAESARS Risk Scoring Engine are as follows:

5.1.1 Modular Analysis, Independent of Scoring and Presentation Technologies


The CAESARS Risk Scoring Engine interfaces directly with, and is dependent upon, only
the CAESARS Database. Thus, the technologies that constitute the CAESARS Risk Scoring
Engine are independent of those that are used by the Sensor Subsystem or the CAESARS
Analysis Engine or any other portion of CAESARS. Ideally, the technologies of the CAESARS
Risk Scoring Engine can be selected, installed, configured, and maintained in nearly complete
independence of the other CAESARS components, provided only that the CAESARS Database
protocols are followed.

5.1.2 Common Base of Raw Data Can Be Accessed by Multiple Analytic Tools
The CAESARS Risk Scoring Engine may actually be composed of multiple, independent
analytic scoring tools, each operating independently and side-by-side, provided only that each
tool follow the inter-layer protocols. Tools may range from small, internally-developed ad hoc
components to complete COTS systems. The ability to use multiple tools independently means
that many different types of analyses can be done using the same CAESARS Database, without
having to collect new data from individual platforms for each analysis.

5.1.3 Multiple Scoring Tools Reflect Both Centralized and Decentralized Analyses
The scope of Risk Scoring may range from detailed analyses of local platform enclaves, or even
of individual platforms, up to comparative summary analyses of the entire Enterprise IT facility.
The fact that these multiple analytic tools all depend consistently upon a single CAESARS
Database ensures that, as the purview of the analysis rises to include larger views, the results of
these different analyses present a consistent and coherent picture; conclusions drawn from the
analyses can be aggregated up or individually separated out as needed.

5.2 Centralized Scoring that Is Performed Enterprise-Wide


The CAESARS Risk Scoring Engine allows summary analyses to be done across the entirety of
the Enterprise IT facility. These scorings provide the most uniform, high-level snapshot of the
facility, usually in the form of a single risk score being evaluated for each platform, then
summarized for each enclave, then each site, and finally for the Enterprise as a whole. These
scores are intended to provide a valid and meaningful comparison, platform versus platform, site
versus site, etc. The CAESARS Risk Scoring Engine takes a number of steps to ensure that these
comparisons are valid.

5.2.1 Allow Common Scoring for Consistent Comparative Results Across the
Enterprise
An enterprise IT facility may contain sites with widely different locations, missions, and
configurations. For any summary comparison across sites to be meaningful, the data that
contribute to it must be carefully selected and vetted for consistent collection and interpretation.
CAESARS allows an enterprise to select summary information that can be consistently collected
at different sites and interpreted in a way that will yield the most meaningful overall
comparisons.

5.2.2 Show Which Components Are Compliant (or Not) with High-Level Policies
The single most important function of any Continuous Monitoring effort is to ensure that
Enterprise-wide (and similar high-level) policies are being consistently and uniformly followed
by all Enterprise components. Indeed, many important aspects of information assurance (e.g.,
virus control and software patch management) are quantifiable only in the simple "yes" or "no" of
compliance: the policies are either being followed or they are not. CAESARS allows enterprise-level analyses to show, for many types of compliance-based monitoring, whether these policies
are being consistently followed, and if they are not, which components of the enterprise are non-compliant in which areas.
Much of the data for compliance monitoring is binary or ordinal data. This data can be
aggregated in the CAESARS Database (either by the Analysis Engine or the Scoring Engine or
even by the individual sensors themselves) and be valid for higher-level comparisons. But the
original, unaggregated data is still available in the individual sensor stores, in order to permit
detailed diagnostics and correction on individual platforms.

5.2.3 Permit Decomposition of Scores into Shadow Prices24


A single overall "risk score" for each site may be useful in comparing sites to each other, and can
truly show where large areas of risk are being confronted. But the cost of controlling a risk might
not necessarily correspond to its level of threat; some major risks can be inexpensively
controlled, while some relatively minor risks would be very expensive or difficult to eradicate.
The CAESARS Risk Scoring Engine encourages "additive risk" modes of analysis which can
then be used to derive "shadow prices" (roughly speaking, score components that identify the
greatest reduction in total risk per marginal dollar spent on countermeasures.) This ensures that
any additional resources (time or money) spent on security can be targeted to the areas where
they will be most effective in reducing security risks.
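
As a purely hypothetical illustration of this decomposition, the sketch below ranks remediation options by risk reduction per dollar under an additive risk model; all figures are invented for illustration.

    # Hedged sketch: invented numbers. Under an additive model, each
    # countermeasure's score reduction divided by its cost gives a rough
    # "shadow price" ranking of where the next dollar is best spent.

    options = [
        {"name": "patch legacy web server",   "risk_reduction": 120, "cost": 2000},
        {"name": "rotate stale AD passwords", "risk_reduction": 40,  "cost": 200},
        {"name": "replace unsupported OS",    "risk_reduction": 300, "cost": 15000},
    ]

    for opt in options:
        opt["reduction_per_dollar"] = opt["risk_reduction"] / opt["cost"]

    for opt in sorted(options, key=lambda o: o["reduction_per_dollar"], reverse=True):
        print(f'{opt["name"]}: {opt["reduction_per_dollar"]:.3f} points per dollar')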

5.2.4 Show Which Components Are Subject to Specific Threats/Attacks


The CAESARS Risk Scoring Engine allows data to be analyzed at whatever level of summary is
appropriate. Individual platforms can be viewed in detail, or platforms can be aggregated as
needed, e.g., by site or region, by platform function, or by network trust zone.

5.2.5 Allow Controlled Enterprise-Wide Change to Reflect Evolving Strategies


Information Assurance policies and strategies, at all levels from the Enterprise down to an
individual platform, can evolve and change, and the responsible administrators need to know
how proposed changes in these policies and strategies will affect the risk scores. The CAESARS
Risk Scoring Engine allows multiple, independent modes of scoring, so that administrators can
construct "what if" scenarios to predict the effects of policy/strategy changes that are being
considered before actually implementing them.

5.3 Decentralized Analyses that Are Unique to Specific Enterprise Subsets
5.3.1 Raw Local Data Is Always Directly Checkable by Local Administrators
All data collected in the CAESARS Database originates from sensors that are based on
individual platforms, and each platform is under the administrative control of a local
administrator. All local administrators always have the ability to interrogate the CAESARS
database to view directly the data from their respective platforms and compare it to the raw data
in their platforms' original sensor data stores. Thus, any errors in the data or misconfigurations
of the platform tools that originally collected the data can readily be identified and corrected.

24 In a business application, a shadow price is the maximum price that management is willing to pay for an extra unit
of a given limited resource. For example, if a production line is already operating at its maximum 40-hour limit, the
shadow price would be the maximum price the manager would be willing to pay for operating it for an additional hour,
based on the benefits from this change. Source: http://en.wikipedia.org/wiki/Shadow_price

5.3.2 Different Processor Types or Zones Can Be Analyzed Separately


While the majority of user-operational platforms can be treated with fairly generic and uniform
monitoring, some types of platforms present more specialized problems in monitoring their
effectiveness. For example, ensuring the consistency of complex firewall rules applies only to
firewall platforms and is meaningless on a user platform, while the analysis of intrusion
detection and response (ID/R) platforms may require a detailed differentiation of threats and
attacks that applies only to ID/R platforms. Platforms residing in different network trust zones
may have to be scored by different criteria. The CAESARS Risk Scoring Engine allows
individual platforms to be scored by criteria that are appropriate to their function.

5.3.3 Region- and Site-Specific Factors Can Be Analyzed by Local Administrators


Platform administrators in different regions and at different sites may have specific analytic
criteria that they feel represent their specific situation more accurately. They can use the
CAESARS Risk Scoring Engine to implement ad hoc analyses that test these special criteria. If
these ad hoc analyses prove to be realistic and defensible, then the local administrators can use
the tools to present a business case for incorporating these modified analyses into the higher-level enterprise-wide summary scores.

5.3.4 Users Can Create and Store Their Own Analysis Tools for Local Use
The CAESARS Database/Repository Subsystem and the CAESARS Analysis/Risk Scoring
Subsystem allow all users to prepare analyses that are directly relevant to their own view.
Enterprise-level users can create summaries of the overall IT enterprise, while local users can
obtain detailed analyses of their respective portions. Those users whose needs are more
specialized (e.g., configuring firewalls or monitoring intrusion detection/response platforms) can
formulate more specialized analyses to answer these questions. Since the Analysis/Risk Scoring
Subsystem accesses the Database through standardized protocols, any analysis can be stored and
executed repeatedly over time to obtain trend information and to see the effects of changes to the
systems being monitored.

5.4 Description of the iPost Implementation of the Analysis and Scoring Engines
This section considers the iPost system, the custom application that continuously monitors and
reports risks on the IT infrastructure at DOS. In iPost, the functions of the Analysis and Scoring
engines are not separated, so iPost is analyzed here to describe how it meets the combined goals
of both the CAESARS Analysis and Scoring Engines detailed above. Particular attention is
devoted to the question of the extent to which the application might be applicable to other agencies.
However, only the specific scoring algorithm of iPost is considered. iPost itself is a custom
application; its purely technical features (such as how it interacts with the iPost data sources or
its user convenience features) are not considered here, because other agencies seeking to adopt
this methodology will have to redesign and replace these components in most cases. Therefore

this section is intended as an illustration of risk scoring, not a comprehensive treatment of all
cases where risk scoring may be effective.

5.4.1 Synopsis of the iPost Scoring Methodology


This section presents a brief synopsis of the iPost scoring methodology. This discussion is not
intended to be a full description of the iPost scoring; the full description is given in the document
iPost: Implementing Continuous Monitoring at the Department of State, Version 1.4, United
States Department of State, November 2009.
The iPost system collects data primarily from individual platforms. The data is collected from
three primary sensors: Microsoft Active Directory (AD), Microsoft System Management Server
(SMS), and the Tenable Security Center. The data collected is grouped into ten components.
The ten components and their abbreviations and sources are displayed in Table 2.
Table 2. Currently Scored iPost Components

Component                     | Abbreviation | What Is Scored                                                                                     | Source
Vulnerability                 | VUL          | Vulnerabilities detected on a host                                                                 | Tenable
Patch                         | PAT          | Patches required by a host                                                                         | SMS
Security Compliance           | SCM          | Failures of a host to use required security settings                                              | Tenable
Anti-Virus                    | AVR          | Out of date anti-virus signature file                                                              | SMS
SOE Compliance                | SOE          | Incomplete/invalid installations of any product in the Standard Operating Environment (SOE) suite | SMS
AD Users                      | ADU          | User account password ages exceeding threshold (scores each user account, not each host)          | AD
AD Computers                  | ADC          | Computer account password ages exceeding threshold                                                | AD
SMS Reporting                 | SMS          | Incorrect functioning of the SMS client agent                                                      | SMS
Vulnerability Reporting       | VUR          | Missed vulnerability scans                                                                         | Tenable
Security Compliance Reporting | SCR          | Missed security compliance scans                                                                   | Tenable

Table 3 presents still other prospective risk scoring components that may be implemented in the
future.
Table 3. Components Under Consideration for iPost Scoring

Prospective Component             | What Would Be Scored
Unapproved Software               | Every Add/Remove Programs string that is not on the official approved list
AD Participation                  | Every computer discovered on the network that is not a member of Active Directory
Cyber Security Awareness Training | Every user who has not passed the mandatory awareness training within the last 365 days

On each platform, a raw score is calculated for each component (see subsequent sections for a
discussion of each component). The raw risk score for a platform is then simply the sum of the
ten individual raw risk component scores on that platform. (For certain of the security tests, iPost
recognizes the difference between failing a security test and being unable to conduct the test
[perhaps for scheduling reasons], and in cases where the truth cannot be definitely resolved, iPost
has the option to score the test as having been passed.)
The risk score for an aggregate is obtained simply by adding the raw risk scores of all platforms
in the aggregate, then dividing this total by the number of platforms in the aggregate, and is
expressed as an average risk score per platform. A letter grade (A, B, C, D, F) is also assigned
based on the average risk per platform.
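
A minimal sketch of this arithmetic is shown below; the component values and the letter-grade cut-offs are hypothetical, since the DOS thresholds are not reproduced in this document.

    # Hedged sketch: component scores and grade thresholds are illustrative only.

    def platform_score(component_scores):
        """Raw platform risk score: the simple sum of the component scores."""
        return sum(component_scores.values())

    def aggregate_score(platform_scores):
        """Average risk per platform for a site, region, or other aggregate."""
        return sum(platform_scores) / len(platform_scores)

    def letter_grade(avg, cutoffs=((1, "A"), (3, "B"), (6, "C"), (10, "D"))):
        for limit, grade in cutoffs:      # cut-offs invented for illustration
            if avg < limit:
                return grade
        return "F"

    scores = [platform_score({"VUL": 3.0, "PAT": 1.0, "SCM": 0.0}),
              platform_score({"VUL": 0.0, "PAT": 6.0, "SCM": 2.0})]
    avg = aggregate_score(scores)
    print(avg, letter_grade(avg))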
In all cases, the raw risk score is an open-ended inverse metric: zero represents a perfect score
and higher scores represent higher risk (undesirable). Thus, if new risk scoring components (such
as the three listed above) were to be added, both raw scores and average scores would rise, but
there would be no need to adjust existing risk scoring to accommodate the new components.
The iPost architecture has within it separate "Cleansing" and "Staging" databases. These
databases allow data to be pre-processed before scoring is done. In particular, it allows data from
multiple reporting components to be harmonized to prevent duplication. In the case of the double
reporting of system vulnerabilities and software patches, the databases search for cases in which
a patch already in the DOS Patch Management program is associated with a reported
vulnerability; the vulnerability report is discarded because the situation has already been scored
as a missing patch.
iPost is expressly designed to recognize that specific platform risks will be assigned to specific
administrators as part of their responsibilities, and that in certain cases these administrators will
not have the capability to correct certain risks that would normally be under their purview.
Therefore, iPost has a detailed program of exceptions that allow specific security tests to be
removed from the scoring of specific platforms or groups of platforms.

5.4.2 Using iPost for Centralized Analyses


iPost is used by DOS primarily as a tool for monitoring and comparing IT sites at an enterprise-wide level. The iPost scoring algorithm is widely publicized within the DOS IT community, and
(subject to the Exceptions program noted above) is applied consistently to all DOS IT platforms
and sites.
iPost is able to present highly detailed information about local platform compliance with high-level policies. It does have some limitations in the data it collects (as noted, it cannot always
distinguish between a failed test and an unperformed test). But these problems can be worked
around by the local administrators.
Because the iPost risk scores are simply additive, it is easy to decompose a platform or site risk
score into shadow prices for its components. Local administrators can therefore easily see which
security enhancements would yield the greatest reductions in risk score.

5.4.3 Using iPost for Decentralized Analyses


Because DOS has placed a high emphasis on using existing sensors and a consistent scoring
algorithm, it is fairly easy for both local and enterprise-level administrators to investigate the
effect of potential changes to the algorithm before such changes are actually implemented. They
can also use both the local sensors and the iPost algorithm to conduct local analyses at a more
detailed level, or to investigate region- and site-specific factors that might require modified
scoring treatment.
The current iPost documentation does not describe differing analysis methodologies for different
processor types, nor does it address specialized processors (e.g., firewalls, switches) that might
exist in different connectivity zones.

5.4.4 The Scoring Methodologies for iPost Risk Components


This section considers the iPost scoring methodologies for the ten risk components, and
addresses their potential applicability to the DHS environment. As discussed above, data
reduction/consolidation methods, especially those that use a weighting system to treat binary and
ordinal data as numeric data, represent express enterprise-specific policy decisions that may be
appropriate for one organization but not for another. Furthermore, in looking at these ten
components and their scoring, the essential question is whether a one-point risk increase in one
component is approximately equivalent to a one-point increase in any other.
5.4.4.1 Vulnerability (VUL)
Vulnerability detection is provided by the Tenable Security Center. The vulnerabilities
discovered are weighted twice: first by the ordinal weighting system inherent in the
CVSS/NVD, and again by a specific mapping that de-emphasizes low-risk vulnerabilities. Under
this scheme, it would take between 12 and 1,000 low-risk vulnerabilities to equal a single high-risk one. The VUL score is the sum of the weighted scores of each vulnerability that is
discovered.
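
The exact DOS mapping is not reproduced here, but the sketch below shows the general shape of such a two-stage weighting: the CVSS base score supplies the ordinal weight, and an invented de-emphasis mapping (an arbitrary exponent) then suppresses low-risk findings so that many of them are needed to equal one high-risk finding.

    # Hedged sketch: the de-emphasis mapping is invented and only mimics the
    # documented effect (roughly 12 to 1,000 low-risk findings per high-risk one).

    def vul_weight(cvss_base, exponent=3):
        """Two-stage weight: CVSS ordinal value, then low-risk de-emphasis."""
        return (cvss_base / 10.0) ** exponent * 10.0

    def vul_score(cvss_scores):
        return sum(vul_weight(s) for s in cvss_scores)

    print(vul_weight(10.0))            # a single high-risk finding
    print(vul_weight(2.0))             # a low-risk finding contributes far less
    print(vul_score([10.0, 2.0, 2.0]))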
5.4.4.2 Patch (PAT)
Patch detection is provided by SMS on an ordinal (Low, Medium, High, Critical) scale, which is
then weighted. The PAT score is the sum of the weighted scores of each patch that is not fully
installed.
5.4.4.3 Security Compliance (SCM)
Security Compliance detection is provided by the Tenable Security Center. Security checks in 11
categories are attempted and only definitive failures are scored. Weights are assigned to each of
the 11 categories, based on CVSS/NVD scores, a low-risk de-emphasis (the same as is used in
the VUL component), and a final adjustment (details unavailable). The SCM score is the sum of
the category weights for each failed security check.
5.4.4.4 Anti-Virus (AVR)
This numeric score component, provided by SMS, checks the age of the virus signature file. The
signature file is required to be updated every day, but a grace period of six days is allowed. If no
updates occur, then the AVR score is assessed six points for each day, beginning with 42 points on
day seven, and so on. There is no consideration of the severity of new signatures.
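
The AVR arithmetic described above can be written as a short function; the rule itself (no points through the six-day grace period, then six points per day of age, giving 42 points on day seven) is taken directly from the text, while the function and argument names are illustrative.

    # Direct transcription of the rule in the text: six points per day of
    # signature-file age once the six-day grace period has elapsed (6 x 7 = 42).

    def avr_score(signature_age_days):
        if signature_age_days <= 6:
            return 0
        return 6 * signature_age_days

    print(avr_score(6))    # 0  (within the grace period)
    print(avr_score(7))    # 42
    print(avr_score(10))   # 60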
5.4.4.5 Standard Operating Environment (SOE) Compliance
DOS platforms are required to have 19 security products installed, each at a specific version
level. SMS checks the compliance of each product/version on the platform, and assesses an SOE
score of five points for each product missing or of the wrong version. There is no differentiation
among the 19 products.
5.4.4.6 Active Directory Users (ADU)
All user passwords are required to be changed at least every 60 days. AD checks the age of all
user passwords and, for any password that hasn't been changed in 60 days or more, assesses the
ADU score one point for every password and every day over 60. (Inactive accounts and those
requiring two-factor authentication are not scored.) There is no differentiation of the accounts of
highly privileged users or of security administrators.
5.4.4.7 Active Directory Computers (ADC)
AD Computer account passwords are treated identically to user passwords, except that the
maximum change window is 30 days rather than 60.
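
Both AD components follow the same day-count rule with different thresholds; a literal sketch is below. The treatment of an age of exactly 60 (or 30) days is ambiguous in the text and is assumed here to score zero, and exempt accounts are assumed to have been filtered out upstream.

    # Hedged sketch: one point per account per day beyond the threshold.

    def ad_password_score(password_ages_days, threshold_days):
        return sum(max(0, age - threshold_days) for age in password_ages_days)

    user_ages = [10, 61, 75]      # ADU threshold: 60 days
    computer_ages = [29, 45]      # ADC threshold: 30 days
    print(ad_password_score(user_ages, 60))       # 1 + 15 = 16
    print(ad_password_score(computer_ages, 30))   # 15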
5.4.4.8 SMS Reporting (SMS)
The Microsoft SMS monitors the health of the client agent installed on each Windows host. A
variety of factors are monitored, and an SMS error message is generated for each factor that fails.
Only certain messages (of the form 1xx or 2xx) are scored, and the number of such messages is
irrelevant. The only two relevant considerations are (i) whether one or more 1xx/2xx error
messages has been generated and (ii) the number of consecutive days that 1xx/2xx error
messages have appeared. If any 1xx/2xx error messages appear, the SMS score is assessed 100
points plus 10 points for each consecutive day that such messages have appeared. There is no
differentiation among the 1xx/2xx error messages.
5.4.4.9 Vulnerability Reporting (VUR)
DOS IT systems are required to have externally directed vulnerability scans every seven days. A
grace period of two weeks (i.e., two missed scans) is allowed; after that the VUR score is
assessed five points for each missed scan. If the scan was attempted but could not be completed,
no risk score is assessed.
5.4.4.10 Security Compliance Reporting (SCR)
DOS IT systems are required to have externally directed security compliance scans every 15
days. A grace period of one month (i.e., two missed scans) is allowed; after that the SCR score is
assessed five points for each missed scan. If the scan was attempted but could not be completed,
no risk score is assessed.
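
The two reporting components (VUR and SCR) share the same shape and can be sketched with a single function; attempted-but-incomplete scans are represented simply by excluding them from the missed-scan count.

    # Direct transcription of the rule in the text: five points per missed scan
    # beyond a two-scan grace allowance.

    def reporting_score(missed_scans, grace_scans=2, points_per_scan=5):
        return max(0, missed_scans - grace_scans) * points_per_scan

    print(reporting_score(2))   # 0  (within the grace allowance)
    print(reporting_score(4))   # 10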

6. CAESARS Presentation and Reporting Subsystem


For government organizations, information is useful only if it supports better decision making
and action. The expected end result of collecting, storing, analyzing, and scoring the state of
security risk measurements is for IT managers at all levels to make better decisions and prioritize
actions to secure the enterprise. The CAESARS Presentation and Reporting Subsystem must
provide information in a form that promotes timely, prioritized, and targeted actions.

6.1 Goals
The goals for the Presentation and Reporting Subsystem of the architecture are aligned to the
design goals for the system as a whole, as listed in Section 1.5, and specifically to provide and
display risk score data to:
Motivate administrators to reduce risk
Motivate management to support risk reduction
Inspire competition
Measure and recognize improvement
In many organizations, the challenges of reducing security risk fall in three categories, which
better reporting and scoring information can inform:
What vulnerabilities represent higher risks than others?
What remediation actions will result in the most effective reduction of risk?
What action is actually required to reduce or eliminate each such vulnerability?
The focus of the Analysis/Risk Scoring Subsystem and the risk scoring algorithm is answering
the first question. The focus of the Presentation and Reporting Subsystem is answering the next
two.

6.1.1 Modular Presentation, Independent of Presentation and Reporting Subsystem Technologies
The CAESARS Presentation and Reporting Subsystem interfaces directly with, and is
dependent upon, only the Database Subsystem, and is strictly governed by CAESARS
Database protocols. Thus, the technologies that constitute the CAESARS Presentation and
Reporting Subsystem are independent of those that are used by the Analysis/Risk Scoring
Subsystem or any other subsystem. Ideally, the technologies of the Presentation and Reporting
Subsystem can be selected, installed, configured, and maintained in near complete independence
of the other subsystems, provided only that the CAESARS Database protocols are followed.

6.1.2 Allow Either Convenient Dashboard Displays or Direct, Detailed View of Data
The Presentation and Reporting Subsystem must be flexible enough to allow viewing of data at
any level of granularity at which decisions and comparisons may be made. The Presentation and
Reporting Subsystem can include multiple presentation tools, with either local or enterprise-wide
perspectives, yet still provide confidence that no one type of presentation will influence or skew
any other. The Analysis/Risk Scoring Subsystem allows for presentation tools that are directly
interfaced to an existing analysis tool or tools that combine ad hoc analysis and presentation in a
single component. Such a combined analysis-plus-presentation tool must access the CAESARS
Database, and thus must follow the Database's Master Schema protocol. Likewise, the
connectivity and security comments made above for the CAESARS Analysis and Risk Scoring
Subsystem also apply to the CAESARS Presentation and Reporting Subsystem.

6.2 Consistent Display of Enterprise-Wide Scores


Central to the idea of motivating action is providing enough comparative information for all
system owners, administrators, and managers to be able to determine the best use of limited
resources in reducing risk for which they are responsible. The comparative nature of the
information is crucial, because risk is relative, not absolute. This requires being able to display
risk scores to answer questions from a variety of viewpoints, such as:
Which organizations25 are responsible for the highest risk scores?
Which vulnerabilities are responsible for the highest risk scores (across multiple
devices)?
Which specific machines are responsible for the highest risk vulnerabilities?
How have security risk scores changed over time?
Ordinary Red-Yellow-Green stoplight charts are too simplistic to support this analysis and
decision making, because they rely on static standards for critical, important, and routine actions.
In practice, such displays oversimplify by aggregating many different types of vulnerabilities
into a small number of artificial categories, and they encourage the view that simply resolving
the Red and Yellow problems is sufficient to reduce risk to the Green level. Ranking risks
relatively, rather than absolutely, provides the necessary foundation for improvement, using
rational comparisons of security risk, regardless of the absolute starting point or progress made.
For similar reasons, although risk scoring does not eliminate the need for POA&Ms in the
FISMA compliance process, it does provide a sound basis for allocating limited resources that
addresses the priorities to which those resources can be effectively applied.
Additionally, the decentralized nature of many hierarchical organizations introduces separation
between those who know about and understand the vulnerabilities and those who have the ability
to do something about them. The monitoring and measurement of workstation, server, and
network appliance vulnerabilities are often conducted at a network operations center or security
operations center; the control of the devices is usually distributed organizationally or
geographically, or both, according to which organization is responsible for the devices and
business processes. Key to motivating the most critical risk reduction measures is giving the
system owner and system administrator the tools and information to take the needed actions.

25 The term "organization" in this context represents any distinct subset of the agency or enterprise that is responsible
for operating IT systems according to agency-defined security policies. The subset may be based on geographic
location, business lines, functional or service lines, or however the agency IT enterprise is organized and operated.
CAESARS is intended to apply to any federal organization with their existing organizational structure, roles, and
responsibilities.

The minimum set of reports and displays should reflect current and historical information for
consistent comparison of devices, sites, and organizations on the basis of the agency's accepted
scoring algorithm. Based on DOS's current and planned implementation, these would display, at
different levels of granularity, the source of the risk score and the information needed to take
action to eliminate that source.

6.2.1 Device-Level Reporting


The fundamental building blocks of all analysis and reporting are the individual devices that
constitute the assets of the information system enterprise.
At the device level, a report would include all of the variables that the organization has
implemented the client, database, and analytical infrastructure to support. For example, on a
Microsoft Windows workstation associated with specific users, the reportable scoring elements
that would be scored, based on the DOS implementation, are displayed in Table 4:
Table 4. Reportable Scoring Elements (Sample)

Component                         | What Is Scored
Vulnerability                     | Vulnerabilities detected on a host
Patch                             | Patches required by a host
Security Compliance               | Failures of a host to use required security settings
Anti-Virus                        | Out of date anti-virus signature file
SOE Compliance                    | Incomplete/invalid installations of any product in the Standard Operating Environment (SOE) suite
AD Users                          | User account password ages exceeding threshold (scores each user account, not each host)
AD Computers                      | Computer account password ages exceeding threshold
SMS Reporting                     | Incorrect functioning of the SMS client agent
Vulnerability Reporting           | Missed vulnerability scans
Security Compliance Reporting     | Missed security compliance scans
Unapproved Software               | Every Add/Remove Programs string that is not on the official approved list
AD Participation                  | Every computer discovered on the network that is not a member of Active Directory
Cyber Security Awareness Training | Every user who has not passed the mandatory awareness training within the last 365 days

The Presentation and Reporting Subsystem needs to have the ability to map every finding to a
specific device at a specific point in time, so that time-based risk scores and changes in risk over
time can be tracked. In addition, each device must be associated with a specific organization and
site, so that the responsibility for the risk score for each device can be assigned to the appropriate
organization. A full description of the data model for the risk scoring architecture is beyond the
scope of this document, but the sections that follow provide an overview of the functions that the
data model must support.
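
A minimal sketch of the kind of record such a data model must support is shown below; the fields are illustrative only and are not a prescribed schema.

    # Hedged sketch: each scored finding is tied to a device, a point in time,
    # and the organization and site responsible for remediation, so that scores
    # can be trended over time and attributed correctly.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class ScoredFinding:
        device_id: str
        component: str          # e.g., "VUL", "PAT", "SCM"
        score: float
        observed_at: datetime
        organization: str
        site: str

    finding = ScoredFinding("ws-042", "PAT", 6.0, datetime(2010, 9, 1), "Bureau X", "Site 7")
    print(finding)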

6.2.2 Site-, Subsystem-, or Organizational-Level Reporting


The first level of aggregation of devices may be based on site, organization, or functional
decomposition of the agency. At this level, the Presentation and Reporting Subsystem should
support reporting, by graphical and textual means, of a total risk score, average risk score, and
risk grade for that organizational element. It should also provide organizational grade ranks and
include the scores of that element on each risk component. For the DOS, the Risk Score Advisor
summarizes all the scoring issues for a site and provides advice on how to improve the score.
Subsequent pages of the Risk Score Advisor provide details on each risk component. In addition
to security information, the site dashboard also provides network information to support other
administrative functions of the users, and includes a site-level summary of scores, open trouble
tickets, and active alerts as well as summary statistics of the hosts and component risk scores.
The ability to sort a large number of hosts according to their salient characteristics is key to
enabling site managers to identify where to target their efforts most effectively and to see
patterns that contribute to higher risk scores. The Presentation and Reporting Subsystem must
support multiple tailored views of the risk scoring data on demand.

6.2.3 Enterprise-Level Reporting


For the enterprise or substantial subset of organizational elements, the Presentation and
Reporting Subsystem should provide enterprise management with a table of the scores and grade
for each element within its organizational responsibility. For the DOS, this is accomplished by
the Risk Score Monitor Report.

6.2.4 Risk Exception Reporting


Because exception management is a significant component that supports the perceived fairness
of the risk scoring system, reports are required showing details of scoring exceptions. For the
DOS, there are two reports, one for site administrators and one for enterprise management.

6.2.5 Time-Based Reporting


In addition to the current state of system vulnerabilities and risks that are scored, the Presentation
and Reporting Subsystem must support chronological analysis. This requires display of
information showing changes in risk scores over time, according to the dimensions of risk that
contribute to the total and average scores. This analysis has proven beneficial to the DOS by
highlighting progress and by focusing attention on the reasons that risk scores change over time.

7. Areas for Further Study


As stated in Section 1, the goals of the CAESARS Reference Architecture include:
Enabling agencies to see how their existing tools and processes could be adapted to such
a framework
Identifying gaps in their own set of solution tools, based on the architectural constructs
developed
Supporting a build or buy decision for acquiring continuous risk scoring tools and
services
Providing standardized approaches to SOWs or PWSs to acquire the needed services and
products
Establishing an accepted framework in which more detailed technical and functional
requirements can be developed
Achieving these objectives will require additional analysis in each area:
Identifying existing tools that perform one or more functions described in the reference
architecture
Performing a broad market survey of tools that could fit in the reference architecture in
some way
Developing a concept of operations that reflects common federal expectations of the
application and operation of security risk scoring capabilities
Developing detailed functional, technical, and non-technical requirements to describe the
target environment with greater specificity26
Developing requests for information (RFI) and requests for proposals (RFP) to support
the creation of acquisition and procurement vehicles for government agencies to acquire
these capabilities in cost effective ways
These steps will enable agencies to assess their readiness for, and develop plans for,
implementation of continuous security monitoring capabilities that may soon become required
federal policy or accepted best practices under NIST and FIPS.
Appendix B is a partial mapping of tools needed to conduct continuous risk analysis and scoring
as described in this reference architecture to the catalogue of controls described in NIST SP 800-53. The NIST control catalogue is the authoritative source for all government agencies to define
the security controls needed for any IT system other than national security systems. Therefore, it
is essential to demonstrate which of these controls are covered by the CAESARS reference
architecture and which ones are not. For the ones that are not, further research is needed to
determine how additional controls can fit into the CAESARS reference architecture and which
are inherently not subject to continuous monitoring and risk scoring.

26 See DHS Acquisition Directive 102-01 and supporting guidebooks for a complete taxonomy of requirements.


Appendix C shows the mapping of tools needed to conduct continuous risk analysis and scoring
to the Consensus Audit Guidelines (CAG), commonly called the 20 Critical Security Controls.27
Although the CAG Top 20 have no official status as federal policy, they represent a sound,
defensible approach to prioritizing the controls most critical to defending against known attack
methods. As the SANS description says,
Because federal agencies do not have unlimited money, current and past federal CIOs
and CISOs have agreed that the only rational way they can hope to meet these
requirements is to jointly establish a prioritized baseline of information security measures
and controls that can be continuously monitored through automated mechanisms.
The number and types of controls that are subject to continuous monitoring through automated
mechanisms are not limited to those described in this reference architecture. Additional research
is needed to integrate sensors that can automatically assess the effectiveness of a full range of
controls, such as Data Loss Prevention (SANS Control #15), and assign a risk score that reflects
the potential harm to the IT systems and the agency mission.
Research is already being conducted, and needs to continue, on how to refine the risk scoring
algorithm, to generalize it for widespread use, and to allow tailoring of the scoring to achieve
specific aims for every agency. One of the criticisms of the iPost scoring is that it does not
account in a consistent way for variances in threat28 (which includes the capability and intent to
cause harm) or impact (the adverse effect of a breach of security through a loss of
confidentiality, integrity, or availability). For example, not all devices are equally important to
the functions of the organization; systems are rated as Low, Moderate, or High impact, but the
current risk scoring algorithm treats a specific weakness as equally risky in every environment.
This clearly does not reflect the intent of impact categorization. Impact is a function of mission
value of a particular asset as well as the capability to mitigate the effect of a security weakness
through other means. For example, a vulnerability on a device that is not reachable through the
Internet would be of less concern (other things being equal) than the same vulnerability on a device
that is directly accessible from the Internet. The NSA is working on a process to include these
elements in a dynamic assessment of the most critical weaknesses and actions to address them.
This will lead to a dynamic risk model that conforms more closely to OMB M-10-15. A full
treatment of these issues will lead to improvements in risk scoring, but the lack of such
investigation should not impede progress toward a partial solution.

27 See http://www.sans.org/critical-security-controls/guidelines.php for details.
28 Threat is defined to be any circumstance or event with the potential to adversely impact an IS through unauthorized access, destruction, disclosure, modification of data, and/or denial of service. Comm. on Nat'l Security Systems, National Information Assurance Glossary, CNSSI No. 4009 (April 26, 2010).

8. Conclusions and Recommendations


The security posture monitoring and risk scoring capabilities that DOS, DOJ, and the IRS have
fielded represent operational prototypes of a more generalized capability that could provide
significant value to most or all federal agencies, or to any IT enterprise. The reference
architecture presented here abstracts the design and implementation of these tools to a model that
can be implemented with a variety of tools and products, depending on the existing infrastructure
and system management technologies available to federal IT managers. The reference
architecture also enables IT managers to add capabilities beyond those in existing
implementations, to extend risk scoring and continuous monitoring to all IT components
(network, database, server, workstations, and other elements) in a modular, interoperable,
standards-based implementation that is scalable, flexible, and tailorable to each agency's
particular organizational and technical environment. The authors recommend that the DHS FNS
Branch continue to engage with other federal stakeholders to refine this reference architecture
based on the experience of other federal agencies with similar capabilities and to establish a
federal forum for sharing capabilities and knowledge that support the goals of risk scoring and
continuous monitoring.
Data management at the repository level is a critical consideration for success of enterprise
situational awareness and risk scoring. The use of SCAP-compliant and other standards-based
solutions supports data integration at all levels. Further development to create a common
semantic representation of terminology (extending SCAP and its enumeration components) will
support data integration across agencies and departments and, later, integration of data at the
federal level.
At its core, CAESARS is a model for a decision support system. The agency implementations on
which it is based have proven to be effective in visualizing technical details about the security
posture of networked devices in order to derive actionable information that is prioritized, timely,
and tailored. It is not a substitute for the underlying business management processes that assign
responsibility and accountability for agency processes and results, but it does help make explicit
what those responsibilities are for the security of IT systems and, by extension, it helps identify
when there are overlaps or gaps in responsibility.


Appendix A. NIST-Specified Security Content Automation Protocol (SCAP)
To maximize the use of existing security management tools investments (Sensor Subsystem Goal
#2) and ensure interoperability between best-of-breed point solutions, NIST has worked with
NSA, DISA, the Federal CIO Council, and industry to define a standard Security Content
Automation Protocol (SCAP) for communicating information such as known software flaws
(vulnerabilities) and security configurations.29 Some of the elements of this still-evolving set of
protocols are described below.
Extensible Configuration Checklist Description Format (XCCDF) is an XML for
authoring security configuration checklists (benchmarks) and for reporting results of
checklist evaluation
Open Vulnerability and Assessment Language (OVAL)30 is an XML for representing
system configuration information, assessing machine state, and reporting assessment
results
Common Platform Enumeration (CPE)31 is a nomenclature and dictionary of hardware
and software applications
Common Configuration Enumeration (CCE)32 is a nomenclature and dictionary of secure
software configurations
Common Vulnerabilities and Exposures (CVE)33 is a nomenclature and dictionary of
vulnerabilities
Common Vulnerability Scoring System (CVSS)34 is an open specification for measuring
the severity of known vulnerabilities
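For illustration only, the fragment below shows how a single finding might be described using these enumerations so that any SCAP-aware tool can interpret it. The CPE, CVE, and CVSS values are sample identifiers chosen for the example, the CCE identifier is a placeholder, and none of them are findings from this report.

    # Illustrative only: one finding expressed with SCAP enumerations.
    finding = {
        "platform_cpe": "cpe:/o:microsoft:windows_xp::sp3",  # CPE: what the assessed asset is
        "vulnerability_cve": "CVE-2009-3103",                # CVE: which known flaw was detected (sample value)
        "cvss_base_score": 10.0,                             # CVSS: standardized severity used as a scoring input
        "failed_configuration_cce": "CCE-XXXXX-X",           # CCE: a configuration check that failed (placeholder ID)
        "checklist_format": "XCCDF",                         # XCCDF: format of the benchmark that drove the check
        "check_language": "OVAL",                            # OVAL: language of the individual machine-state checks
    }
    print(finding["vulnerability_cve"], finding["cvss_base_score"])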

29 NIST SP 800-126, The Technical Specification for the Security Content Automation Protocol (SCAP): SCAP Version 1.0, November 2009.
30 OVAL (http://oval.mitre.org/)
31 CPE (http://cpe.mitre.org/)
32 CCE (http://cce.mitre.org/)
33 CVE (http://cve.mitre.org/)
34 CVSS (http://www.first.org/cvss)


Appendix B. Addressing NIST SP 800-53 Security Control Families

This appendix provides a template for mapping tools needed to conduct continuous risk analysis and scoring as described in this reference architecture. Populating the table will enable system owners and agencies to demonstrate how the capabilities used for risk scoring also help meet the objectives of security controls from the catalogue of controls described in NIST SP 800-53.

The tool types (the columns of the table) are: FDCC Scanners; Authenticated Configuration Scanners; Authenticated Vulnerability and Patch Scanners; Network Configuration Management Tools; Unauthenticated Vulnerability Scanners; Web Vulnerability Scanners; Database Vulnerability Scanners; System Management Tools; and Anti-virus Tools.

NIST SP 800-53 Control Families

Management Controls
Risk Assessment (RA): N/A for all tool types
Planning (PL): N/A for all tool types
System & Services Acquisition (SA): N/A for all tool types
Security Assessment and Authorization (CA): N/A for all tool types
Program Management (PM): N/A for all tool types

Operational Controls
Personnel Security (PS): N/A for all tool types
Physical and Environmental Protection (PE): N/A for all tool types
Contingency Planning (CP): N/A for all tool types
Configuration Management (CM):
  FDCC Scanners: Partial/ Assess security configuration of laptops and workstations
  Authenticated Configuration Scanners: Partial/ Assess security configuration of computing assets
  Authenticated Vulnerability and Patch Scanners: Partial/ Assess patch compliance of computing assets
  Network Configuration Management Tools: Yes/ Manage configuration of network equipment
  Unauthenticated, Web, and Database Vulnerability Scanners: N/A
  System Management Tools: Yes/ Platform-specific
  Anti-virus Tools: Yes/ Anti-virus system
Maintenance (MA): N/A for all tool types
System and Information Integrity (SI):
  FDCC Scanners: Partial/ Assess security configuration of laptops and workstations
  Authenticated Configuration Scanners: Partial/ Assess security configuration of computing assets
  Authenticated Vulnerability and Patch Scanners: Partial/ Assess patch compliance of computing assets
  Network Configuration Management Tools: Yes/ Manage configuration of network equipment
  Unauthenticated, Web, and Database Vulnerability Scanners: N/A
  System Management Tools: Yes/ Platform-specific
  Anti-virus Tools: Yes/ Anti-virus system
Media Protection (MP): N/A for all tool types
Incident Response (IR): N/A for all tool types
Awareness and Training (AT): N/A for all tool types

Technical Controls
Identification & Authentication (IA): N/A for all tool types
Access Control (AC): N/A for all tool types
Audit and Accountability (AU): N/A for all tool types
System & Communications Protection (SC): N/A for all tool types

Appendix C. Addressing the Automatable Controls in the Consensus Audit Guidelines

This appendix maps the 15 automatable Consensus Audit Guideline security controls to the appropriate tools referenced in Appendix B, and to the corresponding NIST SP 800-53 Security Controls. The tool categories are: FDCC Scanners; Authenticated Configuration Scanners; Authenticated Vulnerability & Patch Scanners; Network Management Tools; Unauthenticated Vulnerability Scanners; Web Vulnerability Scanners; Database Vulnerability Scanners; System Management Tools; and Anti-virus Tools.

15 automatable of 20 Critical Security Controls

1: Inventory of Authorized and Unauthorized Devices
NIST SP 800-53 Security Controls: CM-8 (a, c, d, 2, 3, 4), PM-5, PM-6

2: Inventory of Authorized and Unauthorized Software
NIST SP 800-53 Security Controls: CM-1, CM-2 (2, 4, 5), CM-3, CM-5 (2, 7), CM-7 (1, 2), CM-8 (1, 2, 3, 4, 6), CM-9, PM-6, SA-6, SA-7

3: Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers
NIST SP 800-53 Security Controls: CM-1, CM-2 (1, 2), CM-3 (b, c, d, e, 2, 3), CM-5 (2), CM-6 (1, 2, 4), CM-7 (1), SA-1 (a), SA-4 (5), SI-7 (3), PM-6

4: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
NIST SP 800-53 Security Controls: AC-4 (7, 10, 11, 16), CM-1, CM-2 (1), CM-3 (2), CM-5 (1, 2, 5), CM-6 (4), CM-7 (1, 3), IA-2 (1, 6), IA-5, IA-8, RA-5, SC-7 (2, 4, 5, 6, 8, 11, 13, 14, 18), SC-9

Tool coverage indicated for Controls 1 through 4, across the nine tool categories (Partial unless marked Yes; cells not listed are N/A): Partial/ Identification of both authorized & unauthorized devices; Partial/ Verification of authorized devices; Partial/ Authorized devices; Partial/ Identification of software on authorized devices; Partial/ Management of authorized network devices; Yes/ Management of authorized network devices; Partial/ Patches only; Partial/ Patch only; Partial/ Software vulnerabilities; Partial/ Identification of web services only; Partial/ Identification of database only; Partial/ Assessment of web services only; Partial/ Assessment of database only; Partial/ Management of security configuration policy; Yes/ Management of security configuration policy; Yes/ Workstations and laptops; Yes/ FDCC + Servers; Partial/ Microsoft Personal Firewall; Yes/ Platform-specific; Partial/ Malware.

5: Boundary Defense
NIST SP 800-53 Security Controls: AC-17 (1), AC-20, CA-3, IA-2 (1, 2), IA-8, RA-5, SC-7 (1, 2, 3, 8, 10, 11, 14), SC-18, SI-4 (c, 1, 4, 5, 11), PM-7

6: Maintenance, Monitoring, and Analysis of Audit Logs
NIST SP 800-53 Security Controls: AC-17 (1), AC-19, AU-2 (4), AU-3 (1, 2), AU-4, AU-5, AU-6 (a, 1, 5), AU-8, AU-9 (1, 2), AU-12 (2), SI-4 (8)

7: Application Software Security
NIST SP 800-53 Security Controls: CM-7, RA-5 (a, 1), SA-3, SA-4 (3), SA-8, SI-3, SI-10

8: Controlled Use of Administrative Privileges
NIST SP 800-53 Security Controls: AC-6 (2, 5), AC-17 (3), AC-19, AU-2 (4)

9: Controlled Access Based on Need to Know
NIST SP 800-53 Security Controls: AC-1, AC-2 (b, c), AC-3 (4), AC-4, AC-6, MP-3, RA-2 (a)

Tool coverage indicated for Controls 5 through 9, across the nine tool categories (Partial unless marked Yes; cells not listed are N/A): Partial/ Assessment only; Partial/ Assessment of perimeter security only; Partial/ NAC only; Partial/ Need NAC; Partial/ Security configuration settings only; Partial/ Patch only; Partial/ Assessment of web services only; Partial/ Assessment of database only; Partial/ User accounts management; and one cell marked Yes.

10: Continuous Vulnerability Assessment and Remediation
Authenticated Vulnerability & Patch Scanners: Yes; Unauthenticated Vulnerability Scanners: Yes; Web Vulnerability Scanners: Yes; Database Vulnerability Scanners: Yes; all other tool categories: N/A
NIST SP 800-53 Security Controls: RA-3 (a, b, c, d), RA-5 (a, b, 1, 2, 5, 6)

11: Account Monitoring and Control
Network Management Tools: Yes; System Management Tools: Yes; all other tool categories: N/A
NIST SP 800-53 Security Controls: AC-2 (e, f, g, h, j, 2, 3, 4, 5), AC-3

12: Malware Defenses
Anti-virus Tools: Yes; all other tool categories: N/A
NIST SP 800-53 Security Controls: SC-18, SC-26, SI-3 (a, b, 1, 2, 5, 6)

13: Limitation and Control of Network Ports, Protocols, and Services
Network Management Tools: Yes/ Platform-specific; all other tool categories: N/A
NIST SP 800-53 Security Controls: CM-6 (a, b, d, 2, 3), CM-7 (1), SC-7 (4, 5, 11, 12)

14: Wireless Device Control
Network Management Tools: Partial/ NAC only; all other tool categories: N/A
NIST SP 800-53 Security Controls: AC-17, AC-18 (1, 2, 3, 4), SC-9 (1), SC-24, SI-4 (14, 15)

15: Data Loss Prevention
All tool categories: N/A
NIST SP 800-53 Security Controls: AC-4, MP-2 (2), MP-4 (1), SC-7 (6, 10), SC-9, SC-13, SC-28 (1), SI-4 (4, 11), PM-7

Appendix D. List of Applicable Tools


The purpose of this Appendix is to provide a list of tools that are available to fulfill the functions
of the CAESARS reference architecture. Many of these tools are available in the commercial off-the-shelf (COTS) marketplace; others are government-developed but still available off-the-shelf
(GOTS). Most are software-based, although a few have a dedicated hardware component.
It must be noted carefully that this Appendix is not in any sense an Approved Products List to
which Government agencies are restricted in their selection of tools. Nor does inclusion on the
list necessarily imply that the product was or will be successful in its application. It is merely a
list of products that have been applied by one or more government agencies, so that the agency in
question has knowledge of the product and can speak to the product's strengths and weaknesses.
The list is organized as a table with the following columns:
Tool: The common or commercial name of the tool, with version numbers if relevant
Form (HW, SW, Network Appliance, etc.): The form factor of the tool. Most of the tools are entirely software packages, but those that are hardware (e.g., network appliances) are noted.
GSA Schedule?: Indicates whether the product appears on the GSA Procurement Schedule. The SAIR Tier 1 is indicated if it is known.
Agency Basis of Information: Lists the federal agency that is the source of the information about the tool. Note carefully that the agency is usually not the source of the tool itself.
Function: Describes the functionality of the tool, as claimed by its creator or vendor.
NIST SCAP Validated Product List: Indicates that the tool has been validated by NIST as being compliant with the SCAP. For the full NIST-validated list, see the NIST website at (http://nvd.nist.gov/scapproducts.cfm).
CAESARS Component: Lists how the tool has been applied by a government agency to serve as a particular component of the CAESARS reference architecture. Within this portion of the list the CAESARS Subsystem and Sensor component taxonomy follows that of Section 2.
Having the CAESARS box checked in this table does not necessarily imply that the tool's
application has been successful or complete within the CAESARS context. For example,
almost every Sensor tool has a Reporting and Presentation component of its own, so this
box in the table is checked. However, in most cases the tool's native Reporting and
Presentation component is limited to reporting and presenting only the data that was
originally collected by that Sensor, and does not satisfy the CAESARS-wide need for
unified, all-source data Reporting and Presentation.

Technology Risk Management and Monitoring: List of Applicable Tools

The columns of the table are: Tool; Form (HW, SW, Network Appliance, etc.); GSA Schedule?; Agency Basis of Information; Function; NIST SCAP Validated Product List (http://nvd.nist.gov/scapproducts.cfm); and CAESARS Component (Sensor Subsystem: FDCC Scanner, Authenticated Configuration Scanner, Authenticated Vulnerability and Patch Scanner, Unauthenticated Vulnerability Scanner, Web Vulnerability Scanner, Database Vulnerability Scanner, Anti-Virus Tool, System Configuration Management Tool, Network Configuration Management Tool; Database/Repository Subsystem: Relational DBMS; Analysis/Risk Scoring Subsystem: Risk Scoring and Analysis; Presentation and Reporting Subsystem: Reporting).

Tool; Form; GSA Schedule?; Agency Basis of Information; Function or CAESARS Component
Microsoft SMS; SW; YES; DOS; Administration of group policy
Microsoft AD; SW; YES; DOS, IRS; Directory service
NetIQ AppManager; SW; YES; DOS, IRS; Performance monitoring and management
Microsoft System Center Configuration Manager; SW; YES; IRS; Administration of group policy
SPAWAR SCAP Compliance Checker; SW; NO; IRS; FDCC Scanner
Tripwire Enterprise; SW; YES; IRS; Host-based IDS
CIS Configuration Audit Tool; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
CA IT Client Manager; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
McAfee ePolicy Orchestrator, McAfee Foundstone; SW; YES (SAIR Tier 1); DOS; Authenticated Configuration Scanner/FDCC Scanner
NetIQ Security Configuration Manager; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
BigFix Security Configuration and Vulnerability Management Suite; SW; YES (SAIR Tier 1); (not given); Authenticated Vulnerability & Patch Scanner
eEye Retina; SW; YES; (not given); Unauthenticated Vulnerability Scanner
Fortinet FortiScan; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
Gideon SecureFusion; SW; YES (SAIR Tier 1); (not given); Authenticated Configuration Scanner/FDCC Scanner
HP SCAP Scanner; SW; NO; (not given); Authenticated Configuration Scanner/FDCC Scanner
Lumension PatchLink Security Configuration Management; SW; YES; IRS; Authenticated Configuration Scanner/FDCC Scanner
Qualys QualysGuard FDCC Scanner; SW; YES; (not given); FDCC Scanner
Shavlik Security Suite: NetChk Protect; SW; YES; (not given); Authenticated Vulnerability & Patch Scanner
SignaCert Enterprise Trust Server; SW; NO; (not given); Authenticated Configuration Scanner/FDCC Scanner
Symantec Control Compliance Suite Federal Toolkit; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
Telos Xacta IA Manager Continuous Assessment; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
ThreatGuard Secutor Magnus/Secutor Prime/S-CAT; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
nCircle Configuration Compliance Manager; SW/Network Appliance; YES; IRS; Authenticated Configuration Scanner/FDCC Scanner
Tenable Security Center; SW; YES; DOS; Authenticated Configuration Scanner/FDCC Scanner
Triumfant Resolution Manager; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
BMC BladeLogic Client Automation; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
ASG IA2 SCAP Module; SW; YES; (not given); Authenticated Configuration Scanner/FDCC Scanner
SAINT Vulnerability Scanner; SW; YES; (not given); Unauthenticated Vulnerability Scanner
nCircle IP360; Network Appliance; YES; (not given); Unauthenticated Vulnerability Scanner
Rapid7 NeXpose; SW; YES; IRS; Unauthenticated Vulnerability Scanner
DbProtect; SW; YES; DOS, IRS; Unauthenticated Vulnerability Scanner
AppDetective; SW; YES; IRS; Unauthenticated Vulnerability Scanner
CiscoWorks Campus Manager; SW; YES; IRS; Network Management Tool
Tavve PReView; (not given); NO; (not given); Network Monitoring Tool
Niksun NetOmni; (not given); NO; (not given); Network Monitoring Tool
Microsoft SQL Server 2005, 2008; SW; YES; DOS, IRS; Relational Database Management System (RDBMS)
Oracle 10g, 11i; SW; YES; IRS; Relational Database Management System (RDBMS)
BMC Remedy Service Desk; SW; YES; IRS; Trouble Ticketing (Helpdesk Workflow Management) System
Microsoft Active Directory Server; SW; YES; DOS, IRS; Directory service
Microsoft Windows 2003 Enterprise Server; SW; (not given); DOS, IRS; Operating System
Microsoft SQL Server 2005; SW; YES; DOS, IRS; Relational Database Management System (RDBMS)
Microsoft SQL Server Integration Services; SW; YES; (not given); RDBMS Service Component
Microsoft SQL Server Reporting Services; SW; YES; (not given); RDBMS Service Component
Microsoft Internet Information Server 6 (with load balancing and cluster services); SW; YES; DOS, IRS; Web server
ADO.NET; SW; YES; (not given); Middleware
Active Server Pages (ASP).NET (C#); SW; YES; DOS, IRS; Middleware
Microsoft .NET Framework version 3.5; SW; YES; DOS, IRS; Middleware
iPost (PRSM); SW; NO; DOS; GOTS
Agiliance RiskVision; SW; NO; IRS; Presentation Engine
Archer GRC Solution; SW; YES; IRS; Presentation/Risk Scoring Engine
ArcSight Enterprise Security Manager; SW; YES; IRS; Presentation/Security Information & Event Manager (SIEM)
Dundas Chart; SW; YES; (not given); Data Visualization Tool

Appendix E. Sample List of SCAP Security Configuration Checklists


The following is a partial list of SCAP security configuration checklists in the NIST NCP
Repository and in government agency and industry offerings. It should be noted that this list does
not encompass all available COTS software platforms; both government and industry are
contributing to this repository on a regular basis.
Workstation Operating Systems
Platform: Source
Microsoft Windows XP: USGCB and NVD
Windows Vista: USGCB and NVD
Windows 7: CIS*
Apple Mac OS X 10.5: CIS*

Server Operating Systems
Platform: Source
Microsoft Windows Server 2003 (32- and 64-bit platforms): IRS**, CIS*
Windows Server 2008 (32- and 64-bit platforms): IRS**, CIS*
Sun Solaris 10: IRS**, CIS*
Sun Solaris 2.5.1 to 9.0: CIS*
HP-UX 11: IRS**, CIS*
IBM AIX 6: IRS**, CIS*
RedHat EL 5: IRS**, CIS*
Debian Linux 5: IRS**, CIS*
FreeBSD: CIS
Apple Mac OS X Snow Leopard Server Edition: CIS*

Network Operating Systems
Platform: Source
Cisco IOS 12: CIS*

Workstation Applications
Platform: Source
Microsoft IE 7: USGCB and NVD
Microsoft IE 8: CIS*
Mozilla Firefox: CIS*
Microsoft Office 2003: CIS*
Microsoft Office 2007: CIS*
Symantec Norton AntiVirus 10.0: CIS*
Symantec Norton AntiVirus 9.0: CIS*
McAfee VirusScan 7.0: CIS*

Server Applications
Platform: Source
Microsoft IIS 6.0, 7.0, and 7.5: IRS**, CIS*
Apache 2.0, 1.3: CIS*
Apache Tomcat 5.5, 6.0: CIS*
BEA WebLogic Server 8.1, 7.0: CIS*
Microsoft SQL Server 2005, 2008: IRS**, CIS*
Oracle Database Server 9i, 10g, and 11g: IRS**, CIS*
IBM DB2 Version 8.0 and 9.5: IRS**, CIS*
VMware ESX 3.5: CIS*

* SCAP benchmarks by the Center for Internet Security (CIS) are proprietary and cannot be used
by other SCAP-certified Authenticated Configuration Scanners. This is because the CIS
Benchmark Audit Tools have OVAL components embedded into the tools. SCAP benchmarks
by CIS are currently available as a part of CIS-CAT Benchmark Audit Tool.
** SCAP benchmarks by the IRS are designed for meeting the IRS's security configuration
policies. However, SCAP content such as OVAL, CCE, and CPE can be reused by other
agencies with express agreement from the IRS. SCAP contents are currently being developed by
the IRS; the first formal release is scheduled for October 2010.


Appendix F. Sample Risk Scoring Formulas


The table below provides examples of formulas used to calculate risk scores for workstations in a
representative federal agency. They are intended to illustrate how different measurements can be
combined into a numerical representation of relative risk for an asset of interest.
Sensor Derived Configuration Scores
Name

Vulnerability Score

Abbreviation

VUL

Description

Each vulnerability detected by the sensor is assigned a score from 1.0 to 10.0
according to the Common Vulnerability Scoring System (CVSS) and stored in
the National Vulnerability Database (NVD). To provide greater separation
between HIGH and LOW vulnerabilities (so that it takes many LOWs to equal
one HIGH vulnerability), the raw CVSS score is transformed by raising to the
power of 3 and dividing by 100.

Individual
Scores

VUL Score = .01 * (CVSS Score)^3.

Host Score

Host VUL Score = SUM(VUL scores of all detected vulnerabilities)

Notes

The VUL score for each host is the sum of all VUL scores for that host.
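A minimal sketch of this calculation, assuming the host's detected CVSS base scores are available as a simple list (the function and variable names are illustrative only):

    # Sketch of the VUL component: cube the CVSS base score, divide by 100,
    # then sum over all detected vulnerabilities on the host.
    def vul_score(cvss_scores):
        return sum(0.01 * (s ** 3) for s in cvss_scores)

    # Example: one HIGH (9.0) outweighs many LOWs (2.0).
    print(vul_score([9.0]))        # 7.29
    print(vul_score([2.0] * 10))   # 0.8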

Name

Patch Score

Abbreviation

PAT

Description

Each patch which the sensor detects is not fully installed on a host is assigned
a score corresponding directly to its risk level.

Individual
Scores

Patch Risk Level    Risk Score
Critical            10.0
High                9.0
Medium              6.0

Host Score

Host PAT Score = SUM(PAT scores of all incompletely installed patches)

Notes

None

Name

Security Compliance Score

Abbreviation

SCM

Description

Each security setting is compared to a template of required values, based on


the operating system of the host. Scores are based on general comparable risk
to the CVSS vulnerability scores, then algebraically transformed in the same
way as the CVSS vulnerability scores, then uniformly scaled to balance the
total Security Compliance score with other scoring components.

Individual
Scores

SCM Score for a failed check = Score of the check's Security Setting Category

Host Score

Host SCM Score = SUM(SCM scores of all FAILed checks)

Notes
Security Setting Category    Initial CVSS-Based Score    Adjusted CVSS-Based Score    Final Agency Score
File Security                10.0                        4.310                        0.8006
Group Membership             10.0                        4.310                        0.8006
System Access                10.0                        4.310                        0.8006
Registry Keys                9.0                         3.879                        0.5837
Registry Values              9.0                         3.879                        0.5837
Privilege Rights             8.0                         3.448                        0.4099
Service General Setting      7.0                         3.017                        0.2746
Event Audit                  6.0                         2.586                        0.1729
Security Log                 5.0                         2.155                        0.1001
System Log                   3.0                         1.293                        0.0216
Application Log              2.0                         0.862                        0.0064

NOTE: There is no SCM score for a check that cannot be completed. Only a FAIL is scored.
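The values in the table above are consistent with scaling the initial CVSS-based score by a constant (approximately 0.431) and then applying the same cube-and-divide-by-100 transformation used for the VUL component. The sketch below reproduces the tabulated values under that assumption; it is an observation about the sample data, not a formula stated by the source table.

    # Sketch (assumption): Adjusted = 0.431 * Initial; Final = 0.01 * Adjusted^3.
    def scm_final_score(initial_cvss_based_score, scale=0.431):
        adjusted = scale * initial_cvss_based_score
        return round(0.01 * adjusted ** 3, 4)

    print(scm_final_score(10.0))  # 0.8006 (File Security / Group Membership / System Access)
    print(scm_final_score(9.0))   # 0.5837 (Registry Keys / Registry Values)
    print(scm_final_score(2.0))   # 0.0064 (Application Log)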


Name

Anti-Virus Score

Abbreviation

AVR

Description

The date on the anti-virus signature file is compared to the current date. There
is no score until a grace period of 6 days has elapsed. After six days, a score of
6.0 is assigned for each day since the last update of the signature file, starting
with a score of 42.0 on day 7.

Individual
Scores

Not applicable

Host Score

Host AVR Score = (IF Signature File Age > 6 THEN 1 ELSE 0) * 6.0 *
Signature File Age

Notes

None
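A minimal sketch of the anti-virus signature-age scoring, assuming the signature-file age in days is already known (the function name is illustrative only):

    # Sketch of the AVR component: no score during the 6-day grace period,
    # then 6.0 points per day of signature-file age (42.0 on day 7, 48.0 on day 8, ...).
    def avr_score(signature_file_age_days):
        if signature_file_age_days <= 6:
            return 0.0
        return 6.0 * signature_file_age_days

    print(avr_score(6))   # 0.0
    print(avr_score(7))   # 42.0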

Name

Standard Operating Environment Compliance Score

Abbreviation

SOE

Description

Each product in the Standard Operating Environment is required and must be


at the approved version. SOE Compliance scoring assigns a distinct score to
each product that is either missing or has an unapproved version. Currently,
each product has an identical score of 5.0. There are 19 products in the SOE,
so a workstation with no correctly installed SOE products would score 19 *
5.0 = 95.0.

Individual
Scores

Product SOE Score = 5.0 (for each product not in approved version)

Host Score

Host SOE Score = SUM(SOE product scores)

Notes

Potential enhancement: Add Unapproved Software as a separate scoring


component. Add five points for each Unapproved Software product detected.
Product SOE Score = 5.0 (for each unapproved software product or version)
Host SOE Score = SUM(SOE product scores)
Password Age Scores

Name

User Password Age Score

Abbreviation

UPA

Description

By comparing the date each user password was changed to the current date,
user account passwords not changed in more than 60 days are scored one point
for every day over 60, unless:
The user account is disabled, or
The user account requires two-factor authentication for login

Individual
Scores

UPA Score = (IF PW Age > 60 THEN 1 ELSE 0) * 1.0 * (PW Age - 60)

Host Score

Same

Notes

If the date of the last password reset cannot be determined, e.g., if the user
account has restrictive permissions, then a flat score of 200 is assigned.

Name

Computer Password Age Score

Abbreviation

CPA

Description

By means of Group Policy Objects (GPOs), workstations should refresh


passwords every 7 days, while the server refresh is set to 30 days.
By comparing the date the password was changed to the current date, a score
of 1.0 is assigned for each day over 30 since the last password refresh, unless
the computer account is disabled.

Individual
Scores

CPA Score = (IF PW Age > 30 THEN 1 ELSE 0) * 1.0 * (PW Age - 30)

Host Score

Same

Notes

None
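A minimal sketch covering both password-age components, assuming the password age in days and the exclusion conditions are already determined (the 200-point flat score for an undeterminable user-password reset date is omitted for brevity; names are illustrative only):

    # Sketch of the UPA/CPA components: one point per day past the threshold
    # (60 days for user accounts, 30 days for computer accounts), unless excluded
    # (disabled account, or two-factor authentication for user accounts).
    def password_age_score(pw_age_days, threshold_days, excluded=False):
        if excluded or pw_age_days <= threshold_days:
            return 0.0
        return 1.0 * (pw_age_days - threshold_days)

    print(password_age_score(75, 60))                  # UPA: 15.0
    print(password_age_score(45, 30))                  # CPA: 15.0
    print(password_age_score(400, 60, excluded=True))  # excluded account: 0.0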
Incomplete Reporting Scores

Name

SMS Reporting

Abbreviation

SMS

Description

SMS Reporting monitors the health of the SMS client agent that is installed on
every Windows host. This agent independently reports the following types of
information:
Hardware inventory data, e.g., installed memory and serial number


Software inventory data, i.e., .EXE files


Patch status, i.e., which patches are applicable to the host and the
installation status of those patches
The SMS Reporting Score serves as a surrogate measure of risk to account for
unreported status. Error codes have been added to SMS to assist in identifying
the reason that a client is not reporting. For each host, its error codes, if any,
are examined to determine the score. If an error code has the form 1xx or 2xx,
a score is assigned to the host. The score is a base of 100 points plus 10 points
for every day since the last correct report (i.e., no 1xx or 2xx error codes).
Individual
Scores

Error Codes 1xx or 2xx

Host Score

Host SMS Score =


(IF Error Code = 1xx/2xx THEN 1 ELSE 0) * (100.0 + 10.0 * (SMS Reporting
Age))

Notes

SMS is necessary to score each of the following other scoring components:


Patch
Anti-Virus
SOE Compliance
If there is a score for SMS Reporting, the scores for those three scoring
components are set to 0.0, since any residual SMS data is not reliable.
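A minimal sketch of the SMS Reporting score and its effect on the dependent components, assuming the error code and reporting age are already extracted (error-code handling is simplified to the 1xx/2xx ranges described above; names are illustrative only):

    # Sketch of the SMS component: base of 100 points plus 10 points per day
    # since the last correct report when a 1xx or 2xx error code is present.
    def sms_score(error_code, days_since_last_good_report):
        if error_code is not None and 100 <= error_code <= 299:
            return 100.0 + 10.0 * days_since_last_good_report
        return 0.0

    def apply_sms_rule(scores, sms):
        # When SMS is not reporting, residual agent data is unreliable, so the
        # Patch, Anti-Virus, and SOE components are zeroed and the SMS score applies.
        if sms > 0:
            scores = dict(scores, PAT=0.0, AVR=0.0, SOE=0.0)
        return dict(scores, SMS=sms)

    print(apply_sms_rule({"PAT": 12.0, "AVR": 42.0, "SOE": 5.0}, sms_score(201, 3)))  # SMS = 130.0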

Name

Vulnerability Reporting

Abbreviation

VUR

Description

Vulnerability Reporting measures the age of the most recent vulnerability scan
of a host. This scan is conducted from outside the host rather than by an agent.
It is therefore possible that a host may not have recent scan information for
one of the following reasons:
The host was powered off when the scan was attempted.
The hosts IP address was not included in the range of the scan.
The scanner does not have sufficient permissions to conduct the scan
on that host.
The date of the most recent scan is used as a base date and compared to the
current date. There is a conceptual grace period of 2 consecutive scans.
Operationally, each host is scanned for vulnerabilities every 7 days. Therefore,
a grace period of 15 days is allowed for VUR. After this period, a score of 5.0
is assigned for each subsequent missed scan.


Individual
Scores

VUR Age

Host Score

Host VUR Score =


(IF VUR Age > 15 THEN 1 ELSE 0) * 5.0 * FLOOR((VUR Age - 15) / 7)

Notes

If a host has never been scanned, e.g., the host is new on the network, the
current date is used as the base date.

Name

Security Compliance Reporting

Abbreviation

SCR

Description

Security Compliance Reporting measures the age of the most recent security
compliance scan of a host. This scan is conducted from outside the host rather
than by an agent. It is therefore possible a host may not have recent scan
information for one of the following reasons:
The host was powered off when the scan was attempted.
The hosts IP address was not included in the range of the scan.
The scanner does not have sufficient permissions to conduct the scan
on that host.
The date of the most recent scan is used as a base date and compared to the
current date. There is a conceptual grace period of 2 consecutive scans.
Operationally, each host is scanned for security compliance every 15 days.
Therefore, a grace period of 30 days is allowed for SCR. After this period, a
score of 5.0 is assigned for each subsequent missed scan.

Individual
Scores

SCR Age

Host Score

Host SCR Score =


(IF SCR Age > 30 THEN 1 ELSE 0) * 5.0 * FLOOR((SCR Age - 30) / 15)

Notes

If a host has never been scanned, e.g., the host is new on the network, the
current date is used as the base date.
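A minimal sketch covering both reporting-age components (VUR and SCR), assuming the scan age in days is already known; the grace period and scan cycle are passed as parameters, and the names are illustrative only:

    # Sketch of the reporting-age components: 5.0 points per missed scan after the
    # grace period (15 days / 7-day cycle for vulnerability scans; 30 days / 15-day
    # cycle for security compliance scans).
    import math

    def reporting_age_score(age_days, grace_days, cycle_days):
        if age_days <= grace_days:
            return 0.0
        return 5.0 * math.floor((age_days - grace_days) / cycle_days)

    print(reporting_age_score(16, 15, 7))   # VUR: 0.0 (within the first cycle past the grace period)
    print(reporting_age_score(22, 15, 7))   # VUR: 5.0
    print(reporting_age_score(45, 30, 15))  # SCR: 5.0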


Acronyms
AD – Active Directory
ADC – Active Directory Computer
ADU – Active Directory User
AVR – Anti-Virus
CAG – Consensus Audit Guidelines
CAESARS – Continuous Asset Evaluation, Situational Awareness, and Risk Scoring
CCE – Common Configuration Enumeration
CERT – Computer Emergency Response Team
CI – Configuration Item
CIO – Chief Information Officer
CIS – Center for Internet Security
CISO – Chief Information Security Officer
COTS – Commercial Off-The-Shelf
CPE – Common Platform Enumeration
CSAM – Cyber Security Assessment and Management
CSC – Critical Security Control
CVE – Common Vulnerabilities and Exposures
CWE – Common Weakness Enumeration
DHS – Department of Homeland Security
DBMS – Database Management System
DISA – Defense Information Systems Agency
DOJ – Department of Justice
DOS – Department of State
ESB – Enterprise Service Bus
FDCC – Federal Desktop Core Configuration
FIPS – Federal Information Processing Standard
FISMA – Federal Information Security Management Act
FNS – Federal Network Security
HTML – Hypertext Markup Language
IA – Information Assurance
ID – Identifier
ID/R – Intrusion Detection and Response
IDS – Intrusion Detection System
IRS – Internal Revenue Service
ISSO – Information System Security Officer
IT – Information Technology
NIST – National Institute of Standards and Technology
NAC – Network Admission Control
NSA – National Security Agency
NVD – National Vulnerability Database
OMB – Office of Management and Budget
OS – Operating System
OVAL – Open Vulnerability and Assessment Language
PAT – Patch
P.L. – Public Law
POA&M – Plan of Action and Milestones
PWS – Performance Work Statement
RAT – Router Audit Tool
RBD – Risk-Based Decision
RFI – Request for Information
RFP – Request for Proposal
RMF – Risk Management Framework
SANS – SysAdmin, Audit, Network, Security (Institute)
SCAP – Security Content Automation Protocol
SCM – Security Compliance
SCPMaR – Security Compliance Posture Monitoring and Reporting
SCR – Security Compliance Reporting
SMS – Systems Management Server
SOA – Service Oriented Architecture
SOE – Standard Operating Environment
SOW – Statement of Work
SP – Special Publication
SQL – Structured Query Language
SRR – Security Readiness Review
SSH – Secure Shell
STIG – Security Technical Implementation Guide
SwA – Software Assurance
VPN – Virtual Private Network
VUL – Vulnerability
VUR – Vulnerability Reporting
WAN – Wide Area Network
WS – Web Service
XCCDF – Extensible Configuration Checklist Description Format
XML – Extensible Markup Language
