
Computer Security

Lecture 2
Security Models

Syed Naqvi
snaqvi@ieee.org

Access Control Models

09 November 2010 Lecture 2: Security Models 2

Access Control
♦ Access control constrains what a User can do directly, as
well as what programs executing on the User's behalf are
allowed to do.

♦ Activity in the system is initiated by entities known as
Subjects. Subjects are typically Users or Programs
executing on their behalf.

♦ A User may sign on to the system as different Subjects on
different occasions.

♦ Subjects can themselves be Objects. A Subject can create
additional Subjects in order to accomplish its task.


Access Control Types


♦ Discretionary Access Control (DAC)

♦ Mandatory Access Control (MAC)

♦ Role-Based Access Control (RBAC)

Discretionary Access Control
♦ Used to control access by restricting a subject's
access to an object. It is generally used to limit a
user's access to a file. In this type of access
control it is the owner of the file who controls
other users' accesses to the file.

[Diagram: individuals access resources (Server 1–3) through an
application; Server 2 carries an access list]

Access List:
Name    Access
Tom     Yes
John    No
Cindy   Yes

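The owner-maintained access list on the slide can be sketched in a few lines of Python. This is a minimal illustration only (resource and user names are taken from the slide's diagram; the dictionary layout is an assumption, not a prescribed implementation):

```python
# Minimal DAC sketch: the owner of a resource maintains its access list;
# a user absent from the list is denied by default.
acl = {
    "Server 2": {"Tom": True, "John": False, "Cindy": True},
}

def can_access(user, resource):
    """Check the owner-controlled access list; absence means no access."""
    return acl.get(resource, {}).get(user, False)

print(can_access("Tom", "Server 2"))   # True
print(can_access("John", "Server 2"))  # False
```

Note that the decision depends only on the user's identity and the owner's entries, which is exactly why DAC cannot control onward data flow.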

Mandatory Access Control


♦ The need for a mandatory access control (MAC)
mechanism arises when the security policy of a system
dictates that:
– protection decisions must not be decided by the object owner.
– the system must enforce the protection decisions (i.e., the system
enforces the security policy over the wishes or intentions of the
object owner).

Mandatory Access Control

[Diagram: individuals access resources at three sensitivity levels]

Server 1: “Top Secret”
Server 2: “Secret”
Server 3: “Classified”


DAC vs. MAC

DAC
♦ Object owner has full power
♦ Complete trust in users
♦ Decisions are based only on user id and object ownership
♦ Impossible to control data flow

MAC
♦ Object owner CAN have some power
♦ Only trust in administrators
♦ Objects and tasks themselves can have ids
♦ Makes data flow control possible

Role-Based Access Control
♦ A user has access to an object based on the assigned role.

♦ Roles are defined based on job functions.

♦ Permissions are defined based on job authority and
responsibilities within a job function.

♦ Operations on an object are invoked based on the
permissions.

♦ The object is concerned with the user's role and not the
user.


Role-Based Access Control


[Diagram: Individuals are assigned to Roles (Role 1–3), and Roles are
granted access to Resources (Server 1–3)]

Users change frequently, Roles don't.
Role-Based Access Control
♦ Roles are engineered based on the principle of least
privilege.
♦ A role contains the minimum set of permissions needed to
instantiate an object.
♦ A user is assigned to a role that allows him or her to
perform only what's required for that role.
♦ A role grants the same permissions to every user assigned
to it; no user of a role receives more permission than
another user of the same role.


Role-Based Access Control

[Diagram: Users —User Assignment→ Roles —Permission Assignment→
Permissions (Operations on Objects). Sessions link users to roles:
user_sessions is one-to-many, role_sessions is many-to-many]

An important difference from classical models is that the
Subject in other models corresponds to a Session in RBAC.

Role-Based Access Control
♦ Example: Hospital Setup
– The role of doctor can include operations to perform
diagnosis, prescribe medication, and order laboratory
tests.
– The role of a researcher can be limited to gathering
anonymous clinical information for studies.

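The hospital example can be sketched as a small RBAC check. This is an illustrative sketch only: the user names (`alice`, `bob`) and permission strings are hypothetical, chosen to mirror the role descriptions above; a real RBAC system would also model sessions:

```python
# RBAC sketch: permissions attach to roles, never directly to users;
# users acquire permissions only through role assignment.
role_permissions = {
    "doctor": {"perform_diagnosis", "prescribe_medication", "order_lab_tests"},
    "researcher": {"gather_anonymous_data"},
}
user_roles = {"alice": {"doctor"}, "bob": {"researcher"}}

def is_authorized(user, operation):
    """A user may perform an operation iff one of their roles grants it."""
    return any(operation in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))

print(is_authorized("alice", "prescribe_medication"))  # True
print(is_authorized("bob", "prescribe_medication"))    # False
```

Changing a user's duties means reassigning roles, not editing per-user permissions, which is the point of the "Users change frequently, Roles don't" observation.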

Confidentiality Model

The Bell-LaPadula Model
♦ Also called the multi-level model.

♦ Proposed by Bell and LaPadula of MITRE for enforcing
access control in government and military applications.

♦ It corresponds to military-style classifications.

♦ In such applications, subjects and objects are often
partitioned into different security levels.


The Bell-LaPadula Model


♦ A subject can only access objects at certain levels
determined by its security level.
♦ For instance, the following are two typical access
specifications: “Unclassified personnel cannot read data at
confidential levels” and “Top-Secret data cannot be
written into files at unclassified levels”.

The Bell-LaPadula Model
♦ Simplest type of confidentiality classification is a set of
security clearances arranged in a linear (total) ordering.
♦ Clearances represent the security levels.
♦ The higher the clearance, the more sensitive the info.
♦ Basic confidentiality classification system:

Level              Individuals       Documents
Top Secret (TS)    Peter, Thomas     Personnel Files
Secret (S)         Sally, Samuel     Electronic Mails
Confidential (C)   Claire, Clarence  Activity Log Files
Unclassified (UC)  Hannah, John      Telephone Lists


The Bell-LaPadula Model


♦ Let L(S) = lₛ be the security clearance of subject S.
♦ Let L(O) = lₒ be the security classification of object O.
♦ Simple Security Condition (no read up):
S can read O if and only if lₒ ≤ lₛ and
S has discretionary read access to O.
♦ *-Property (star property, no write down):
S can write O if and only if lₛ ≤ lₒ and
S has discretionary write access to O.
♦ TS personnel cannot write documents classified lower than TS.
– Prevents leaks of classified information.
♦ But how can different groups communicate?

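For a linear ordering of clearances, the two conditions above can be sketched directly. The numeric encoding of the levels is an illustrative assumption, and the discretionary-access check is taken as already passed:

```python
# BLP sketch for linearly ordered levels, matching the table above.
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def can_read(subject_level, object_level):
    # Simple security condition: no read up (lo <= ls).
    return LEVELS[object_level] <= LEVELS[subject_level]

def can_write(subject_level, object_level):
    # *-property: no write down (ls <= lo).
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("S", "C"))    # True: reading down is allowed
print(can_read("C", "S"))    # False: no read up
print(can_write("TS", "S"))  # False: no write down
```

Together the two rules force information to flow only upward in the ordering, which is why the "how can different groups communicate?" question arises.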

The Bell-LaPadula Model
♦ Basic Security Theorem:
– Let Σ be a system with a secure initial state σ₀.
– Let T be the set of state transformations.
– If every element of T preserves the simple
security condition (preliminary version) and the
*-property (preliminary version),
then every state σᵢ, i ≥ 0, is secure.


The Bell-LaPadula Model


♦ A total order of classifications is not flexible enough.
– Alice cleared for missiles; Bob cleared for warheads; both cleared
for targets.
♦ Solution: Categories
– Each category describes a kind of information.
– Categories arise from the “need to know” principle:
• no subject should be able to read objects unless reading them is
necessary for that subject to perform its function.
– Example: three categories: NUC, EUR, US.
– A security classification together with a category set forms a
security level or compartment.
– Subjects have clearance at (are cleared into, or are in) a security
level.
– Objects are at the level of (or are in) a security level.
The Bell-LaPadula Model
♦ Security Lattice
{NUC, EUR, US}

{NUC, EUR} {NUC, US} {EUR, US}

{NUC} {EUR} {US}

Φ
♦ William may be cleared into level (SECRET, {EUR}).
♦ George may be cleared into level (TS, {NUC, US}).
♦ A document may be classified as (C, {EUR}).
♦ Someone with clearance at (TS, {NUC, US}) will be denied access to a
document with category EUR.


The Bell-LaPadula Model


♦ The security level (L, C) dominates the security level
(L′, C′) if and only if L′ ≤ L and C′ ⊆ C.
♦ ¬dom: the dominates relation does not hold.
♦ George is cleared into security level (S, {NUC, EUR}).
♦ DocA is classified as (C, {NUC})
♦ DocB is classified as (S, {EUR, US})
♦ DocC is classified as (S, {EUR})
♦ George dom DocA
♦ George ¬dom DocB
♦ George dom DocC

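The dominates relation is a direct translation of the definition above: compare the classifications and test category-set containment. The numeric level encoding is an illustrative assumption; the three checks reproduce the George examples:

```python
# dom sketch: (L, C) dom (L', C') iff L' <= L and C' is a subset of C.
LEVELS = {"UC": 0, "C": 1, "S": 2, "TS": 3}

def dom(level1, cats1, level2, cats2):
    """True iff (level1, cats1) dominates (level2, cats2)."""
    return LEVELS[level2] <= LEVELS[level1] and cats2 <= cats1

george = ("S", {"NUC", "EUR"})
print(dom(*george, "C", {"NUC"}))        # True:  George dom DocA
print(dom(*george, "S", {"EUR", "US"}))  # False: George ¬dom DocB (US missing)
print(dom(*george, "S", {"EUR"}))        # True:  George dom DocC
```

Because ⊆ is only a partial order on category sets, dom is a partial order too, which is what makes the security levels form a lattice rather than a chain.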
The Bell-LaPadula Model
♦ Let C(S) be the category set of subject S.
♦ Let C(O) be the category set of object O.
♦ Simple Security Condition (not read up):
S can read O if and only if S dom O and
S has discretionary read access to O.
♦ *-Property (not write down):
S can write to O if and only if O dom S and
S has discretionary write access to O.
♦ Basic Security Theorem:
Let Σ be a system with a secure initial state σ₀.
Let T be the set of state transformations.
If every element of T preserves the simple security
condition (preliminary version) and the *-property
(preliminary version),
then every state σᵢ, i ≥ 0, is secure.

The Bell-LaPadula Model


♦ Bell-LaPadula can still allow a higher-level subject to write into a
lower-level object that lower-level subjects can read:
♦ A subject has a maximum security level and a current
security level; the maximum security level must dominate the
current security level.
♦ A subject may (effectively) decrease its security level from
the maximum in order to communicate with entities at
lower security levels.
♦ Example: a Colonel's maximum security level is (S, {NUC, EUR}).
She changes her current security level to (S, {EUR}). Now
she can create a document at a Major's clearance level (S,
{EUR}).
The Bell-LaPadula Model
♦ Example:
• Alice's level is secret, Bob's level is unclassified, Carol's level
is classified.
• Memo1 is classified and Memo2 is top secret.
• The simple security property specifies that:
– Memo2 may not be read by Alice, Bob, or Carol.
– Bob is not allowed to read Memo1, but both Alice and
Carol are allowed to read it.
• The *-property specifies that:
– Bob and Carol can write to Memo1, since its level is not
lower than theirs.
– Alice's level is secret, so she is not permitted to write to
Memo1.
– Alice, Bob, and Carol are all at a lower level than Memo2
and can therefore write to it.


Integrity Model


The Biba Model
♦ Based on Bell-LaPadula
– Subjects, Objects
– Integrity levels with a dominance relation
• Higher levels:
– more reliable/trustworthy
– more accurate
♦ Information transfer path:
a sequence of subjects and objects where
– sᵢ r oᵢ (sᵢ reads oᵢ)
– sᵢ w oᵢ₊₁ (sᵢ writes oᵢ₊₁)


The Biba Model


♦ Characterized by the phrase: no write up, no read down.

♦ Users can only create content at or below their own
integrity level.

♦ Users can only view content at or above their own integrity
level.

♦ Information may only flow downwards.

The Biba Model
♦ Prevents corruption of clean higher level entities by dirty
lower level entities.
– The Biba model addresses integrity, whereas Bell-LaPadula concerns
disclosure of information.
♦ Notations
– Subjects and objects are ordered by an integrity scheme denoted
I(s) and I(o)
♦ Properties
– Simple Integrity Property: Subject s can modify (or have write
access to) object o iff I(s) ≥ I(o)
– Integrity *-property: If subject s has read access to object o with
integrity level I(o), s can have write access to object p iff I(o) ≥
I(p)
♦ Problem: Ignores secrecy


The Biba Model


♦ Low-Water-Mark Policy
– s w o ⇔ i(o) ≤ i(s) (prevents writing to a higher level)

– s r o ⇒ i′(s) = min(i(s), i(o)) (drops the subject's level)

– s₁ x s₂ ⇔ i(s₂) ≤ i(s₁) (prevents invoking higher-level subjects)

♦ Ring Policy
– s r o (any subject may read any object)

– s w o ⇔ i(o) ≤ i(s) (same as above)

– s₁ x s₂ ⇔ i(s₂) ≤ i(s₁)
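The distinctive feature of the low-water-mark policy is that reading contaminates the subject. A minimal sketch, using plain integers for integrity levels (an illustrative assumption):

```python
# Low-water-mark sketch: reads are always allowed but drop the
# subject's integrity level; writes are only allowed downward.
def lwm_read(subject_level, object_level):
    """Return the subject's new (possibly lowered) integrity level."""
    return min(subject_level, object_level)

def lwm_can_write(subject_level, object_level):
    """Write permitted only to objects at or below the subject's level."""
    return object_level <= subject_level

s = 3                       # subject starts at high integrity
s = lwm_read(s, 1)          # reads a low-integrity object, drops to 1
print(s)                    # 1
print(lwm_can_write(s, 2))  # False: can no longer write at level 2
```

This shows why the policy is called a "water mark": once lowered by a dirty read, the subject's level never recovers, so low-integrity data can never be laundered upward through it.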
The Biba Model
♦ Biba's Model: Strict Integrity Policy (dual of
Bell-LaPadula)
– s r o ⇔ i(s) ≤ i(o) (no read down)
– s w o ⇔ i(o) ≤ i(s) (no write up)
– s₁ x s₂ ⇔ i(s₂) ≤ i(s₁)

♦ Theorem (for each policy):

– If there is an information transfer path from object o₁ to
object oₙ₊₁, then enforcement of the policy requires
that i(oₙ₊₁) ≤ i(o₁) for all n > 1.

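The strict integrity rules above are the exact mirror image of the Bell-LaPadula conditions, as a small sketch makes plain (integer levels are an illustrative assumption):

```python
# Strict integrity sketch: each rule is the dual of the BLP rule.
def can_read(i_s, i_o):
    return i_s <= i_o   # no read down: only read at or above own level

def can_write(i_s, i_o):
    return i_o <= i_s   # no write up: only write at or below own level

def can_execute(i_s1, i_s2):
    return i_s2 <= i_s1 # invoke only subjects at or below own level

print(can_read(1, 3))   # True: reading up is fine for integrity
print(can_write(1, 3))  # False: no write up
```

Comparing with the earlier BLP sketch, each inequality is simply reversed: confidentiality stops information flowing down, integrity stops it flowing up.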

Data Isolation Model


The Chinese Wall Model
♦ Used mainly by services and consultancy firms.
♦ Effective in securing data/information that may lead to
conflicts of interest within an organization/corporation.
♦ Intended to prevent unauthorized flow of information from
one organization to another via a consultant working for both.
♦ Introduces the concept of separation of duty into access
control.
♦ GENERAL RULE: there must be no information flow that
causes a conflict of interest.


The Chinese Wall Model

[Diagram: Company A and Company B are competitors; both have accounts
in Bank X. Analyst A consults for Company A and updates the Bank's
portfolio with information on Company A; Analyst B consults for
Company B and has access to the Bank's portfolio.]

The Chinese Wall Model
♦ The simple security policy
– A subject may access an object in company
dataset X only if every object it has previously
accessed is either in the same dataset X or in a
different conflict of interest class
♦ The * property
– A subject can write to an object in a given
company dataset X only if such subject cannot read
any data (or objects) from any company that is a
competitor of X, unless such objects have been
sanitized


The Chinese Wall Model


♦ Object
– File containing commercial information
– If an object contains information that is not
commercially sensitive it is said to be sanitized
♦ Company dataset
– Set of files belonging to a particular organization
♦ Conflict of interest class
– Set of companies whose owners are competitors
– e.g., oil companies

The Chinese Wall Model
♦ Set of subjects S
♦ Set of objects O
♦ Set of companies C
♦ Set of conflict of interest classes K
– Each company belongs to at least one conflict of interest class
♦ Every unsanitized object has a security label (x(o), y(o))
– y : O → C identifies the owner of an object
– x : O → K identifies the object's conflict of interest class
♦ Every sanitized object has the same security label
♦ A history matrix H


The Chinese Wall Model


♦ The Chinese Wall model must address confidentiality
requirements over time
♦ The history matrix is used to record a history of past access
to objects
– Rows indexed by subjects
– Columns indexed by objects
– Entries are 0 or 1
• H[s, o] = 1 indicates that subject s has accessed object o

The Chinese Wall Model
♦ Consistency
– If y(o₁) = y(o₂) then x(o₁) = x(o₂)
– If o₁ and o₂ are owned by the same company then they
belong to the same conflict of interest (COI) class
• If o₁ and o₂ belong to different COI classes then they are owned by
different companies
♦ Simple security property
– s can access o if for all p such that H[s, p] = 1, either x(o)
≠ x(p) or y(o) = y(p)
– i.e., s can access o if s hasn't already accessed an object owned by
a competitor in the same COI class, or o contains sanitized information


The Chinese Wall Model


♦ *-Property
– s can write to an object o if the simple security
property is satisfied and for all unsanitized
objects p that s can read, y(o) = y(p)
– Sensitive information can only flow from one
object to another if both objects are owned by
the same company

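The history-dependent simple security check can be sketched with the labels x(o) and y(o) and a per-subject access history. All object, company, and COI names below are hypothetical, and the history matrix is kept as a dictionary of sets rather than a 0/1 matrix:

```python
# Chinese Wall sketch: x(o) is the object's COI class, y(o) its company.
x = {"oilA_rpt": "oil", "oilB_rpt": "oil", "bank_rpt": "bank"}
y = {"oilA_rpt": "OilA", "oilB_rpt": "OilB", "bank_rpt": "BankX"}
history = {}  # history[s] = set of objects s has accessed (the matrix H)

def can_read(s, o):
    # Simple security: for every past object p, either a different COI
    # class, or the same company dataset.
    return all(x[o] != x[p] or y[o] == y[p] for p in history.get(s, set()))

def read(s, o):
    """Record the access if permitted; past reads restrict future ones."""
    if can_read(s, o):
        history.setdefault(s, set()).add(o)
        return True
    return False

print(read("analyst", "oilA_rpt"))  # True:  first access in the oil COI
print(read("analyst", "oilB_rpt"))  # False: competitor in the same COI
print(read("analyst", "bank_rpt"))  # True:  different COI class
```

Unlike BLP and Biba, the decision here depends on the subject's access history, not on fixed labels alone: the wall goes up around the analyst only after the first read.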
Exercise …
♦ Define 3 project managers with different classification
levels.
♦ Populate 6 databases (could simply be arrays) with
different classification levels.
♦ Any project manager can choose a database corresponding
to his/her clearance level.
– But once (s)he has selected a database then (s)he can no longer
access the other databases

♦ You can improve the program by adding more features of
confidentiality, integrity, etc.

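One possible starting point for the exercise is sketched below. Everything here is an assumption: the manager names, the use of lists-as-databases reduced to labeled entries, the numeric encoding of levels, and the rule that a manager may pick any database at or below their clearance and is then locked to it, Chinese-Wall style:

```python
# Exercise starter sketch: three managers, six databases, lock-in after
# the first selection (all names and levels are illustrative).
LEVELS = {"C": 1, "S": 2, "TS": 3}
managers = {"pm1": "C", "pm2": "S", "pm3": "TS"}
databases = {f"db{i}": lvl
             for i, lvl in enumerate(["C", "C", "S", "S", "TS", "TS"], 1)}
selection = {}  # manager -> the single database (s)he has chosen

def select(manager, db):
    """Allow one choice at or below clearance; then refuse all others."""
    if manager in selection:                 # already locked in
        return selection[manager] == db
    if LEVELS[databases[db]] <= LEVELS[managers[manager]]:
        selection[manager] = db
        return True
    return False

print(select("pm2", "db3"))  # True:  S clearance, S database
print(select("pm2", "db1"))  # False: already locked to db3
print(select("pm1", "db5"))  # False: TS database above C clearance
```

Extending this with read/write checks from the BLP and Biba slides would cover the "more features of confidentiality, integrity" suggestion.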
