Cyber Security
Lecture for July 2, 2010
Security Architecture and Design
Dr. Bhavani Thuraisingham
Outline
- Computer Architecture
- Operating Systems
- System Architecture
- Security Architecture
- Security Models
- Security Modes of Operation
- System Evaluation Methods
- Open vs. Closed Systems
- Some Security Threats
Computer Architecture Components
- Central Processing Unit (CPU)
- Registers
- Memory Units
- Input/Output Processors
- Single Processor
- Multi-Processor
- Multi-Core Architecture
- Grids and Clouds
Operating Systems
- Memory Management
- Process Management
- File Management
- Capability Domains
- Virtual Machines
System Architecture
- The software components that make up the system:
  - Middleware
  - Database Management
  - Networks
  - Applications
Security Architecture
- Security-critical components of the system
- Trusted Computing Base
- Reference Monitor and Security Kernel
- Security Perimeter
- Security Policy
- Least Privilege
Trusted Computing Base
- The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the security policy.
- The careful design and implementation of a system's trusted computing base is paramount to its overall security. Modern operating systems strive to reduce the size of the TCB so that an exhaustive examination of its code base (by means of manual or computer-assisted software audit or program verification) becomes feasible.
Reference Monitor and Security Kernel
- In operating systems architecture, a reference monitor is a tamperproof, always-invoked, and small-enough-to-be-fully-tested-and-analyzed module that controls all software access to data objects or devices, so that its correct behavior is verifiable.
- The reference monitor verifies that each request is allowed by the access control policy.
- For example, the Windows 3.x and 9x operating systems were not built with a reference monitor, whereas the Windows NT line, which also includes Windows 2000 and Windows XP, was designed to contain a reference monitor, although it is not clear that its properties (tamperproof, etc.) have ever been independently verified, or what level of computer security it was intended to provide.
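
A minimal sketch of the mediation idea in Python (the class, the policy format, and all names are invented for illustration; a real reference monitor must also be tamperproof and impossible to bypass):

    # Illustrative reference-monitor sketch; names and policy format are hypothetical.
    # The point is that every access request funnels through one small check.

    class ReferenceMonitor:
        def __init__(self, policy):
            # policy: set of allowed (subject, operation, object) triples
            self.policy = policy

        def check(self, subject, operation, obj):
            # Default deny: allow only what the policy explicitly permits.
            return (subject, operation, obj) in self.policy

    rm = ReferenceMonitor({("alice", "read", "payroll.db")})
    print(rm.check("alice", "read", "payroll.db"))   # True
    print(rm.check("alice", "write", "payroll.db"))  # False: not in the policy

Because the monitor is small and every access passes through check(), exhaustively testing and analyzing it is feasible, which is exactly the property described above.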
Security Models
- Bell and LaPadula (BLP) Confidentiality Model
- Biba Integrity Model (the opposite of BLP)
- Clark-Wilson Integrity Model
- Other Models:
  - Information Flow Model
  - Non-Interference Model
  - Graham-Denning Model
  - Harrison-Ruzzo-Ullman Model
  - Lattice Model
Bell and LaPadula
- A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the security level) to determine if the subject is authorized for the specific access mode. The clearance/classification scheme is expressed in terms of a lattice. The model defines two mandatory access control (MAC) rules and one discretionary access control (DAC) rule with three security properties:
- The Simple Security Property: a subject at a given security level may not read an object at a higher security level (no read-up).
- The *-Property (read "star"-property): a subject at a given security level must not write to any object at a lower security level (no write-down). The *-property is also known as the Confinement property.
- The Discretionary Security Property: use of an access matrix to specify the discretionary access control.
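
A minimal sketch of the two mandatory BLP rules in Python, assuming a security level is modeled as a (classification, compartments) pair and dominance is the lattice ordering described above (all labels and names are illustrative):

    # Illustrative Bell-LaPadula checks; classifications and compartments are invented.

    def dominates(l1, l2):
        (c1, comp1), (c2, comp2) = l1, l2
        # l1 dominates l2: higher-or-equal classification and a superset of compartments
        return c1 >= c2 and comp1 >= comp2   # >= on frozensets tests superset

    def may_read(subject_level, object_level):
        # Simple Security Property: no read-up
        return dominates(subject_level, object_level)

    def may_write(subject_level, object_level):
        # *-Property: no write-down
        return dominates(object_level, subject_level)

    SECRET, TOP_SECRET = 2, 3
    alice = (TOP_SECRET, frozenset({"crypto"}))
    report = (SECRET, frozenset())
    print(may_read(alice, report))   # True: reading down is permitted
    print(may_write(alice, report))  # False: writing down is blocked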
Biba
- In general, preservation of data integrity has three goals:
  - Prevent data modification by unauthorized parties
  - Prevent unauthorized data modification by authorized parties
  - Maintain internal and external consistency (i.e., the data reflects the real world)
- The Biba security model is directed toward data integrity (rather than confidentiality) and is characterized by the phrase "no read down, no write up". This is in contrast to the Bell-LaPadula model, which is characterized by the phrase "no write down, no read up".
- The Biba model defines a set of security rules similar to the Bell-LaPadula model. These rules are the reverse of the Bell-LaPadula rules:
- The Simple Integrity Axiom states that a subject at a given level of integrity must not read an object at a lower integrity level (no read down).
- The * (star) Integrity Axiom states that a subject at a given level of integrity must not write to any object at a higher level of integrity (no write up).
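
Because Biba is the dual of BLP, the checks simply invert. A sketch with plain integer integrity levels (illustrative only):

    # Illustrative Biba checks; integrity levels are invented integers,
    # where a higher number means more trustworthy data or subjects.

    def biba_may_read(subject_level, object_level):
        # Simple Integrity Axiom: no read-down
        return object_level >= subject_level

    def biba_may_write(subject_level, object_level):
        # * Integrity Axiom: no write-up
        return subject_level >= object_level

    HIGH, LOW = 2, 1
    print(biba_may_read(HIGH, LOW))   # False: must not read less-trusted data
    print(biba_may_write(LOW, HIGH))  # False: must not contaminate higher integrity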
Clark-Wilson Model
- The Clark-Wilson integrity model provides a foundation for specifying and analyzing an integrity policy for a computing system.
- The model is primarily concerned with formalizing the notion of information integrity.
- Information integrity is maintained by preventing corruption of data items in a system due to either error or malicious intent.
- An integrity policy describes how the data items in the system should be kept valid from one state of the system to the next and specifies the capabilities of various principals in the system.
- The model defines enforcement rules and certification rules.
- The model's enforcement and certification rules define data items and processes that provide the basis for an integrity policy. The core of the model is based on the notion of a transaction.
- A well-formed transaction is a series of operations that transition a system from one consistent state to another consistent state.
- In this model the integrity policy addresses the integrity of the transactions.
- The principle of separation of duty requires that the certifier of a transaction and the implementer be different entities.
- The model contains a number of basic constructs that represent both data items and processes that operate on those data items. The key data type in the Clark-Wilson model is a Constrained Data Item (CDI). An Integrity Verification Procedure (IVP) ensures that all CDIs in the system are valid at a certain state. Transactions that enforce the integrity policy are represented by Transformation Procedures (TPs). A TP takes as input a CDI or Unconstrained Data Item (UDI) and produces a CDI. A TP must transition the system from one valid state to another valid state. UDIs represent system input (such as that provided by a user or adversary). A TP must guarantee (via certification) that it transforms all possible values of a UDI to a "safe" CDI.
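
A toy sketch of this vocabulary in Python, using a hypothetical bank-balance CDI (the scenario and names are invented): the IVP defines what "valid" means, and a TP either converts an untrusted UDI into a valid CDI or rejects it.

    # Illustrative Clark-Wilson constructs; the account example is hypothetical.

    def ivp_valid(cdi):
        # Integrity Verification Procedure: is this CDI in a valid state?
        return cdi["balance"] >= 0

    def tp_deposit(cdi, udi_amount):
        # Transformation Procedure: (CDI, UDI) -> CDI, or reject the UDI.
        try:
            amount = int(udi_amount)          # UDI: untrusted user input
        except ValueError:
            raise ValueError("UDI rejected: not a number")
        if amount <= 0:
            raise ValueError("UDI rejected: deposit must be positive")
        return {"balance": cdi["balance"] + amount}  # new, still-valid CDI

    account = {"balance": 100}                # a CDI
    assert ivp_valid(account)
    account = tp_deposit(account, "50")       # UDI accepted and constrained
    assert ivp_valid(account)                 # moved from valid state to valid state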
- At the heart of the model is the notion of a relationship between an authenticated principal (i.e., user) and a set of programs (i.e., TPs) that operate on a set of data items (e.g., UDIs and CDIs). The components of such a relation, taken together, are referred to as a Clark-Wilson triple. The model must also ensure that different entities are responsible for manipulating the relationships between principals, transactions, and data items. As a short example, a user capable of certifying or creating a relation should not be able to execute the programs specified in that relation.
- The model consists of two sets of rules: Certification Rules (C) and Enforcement Rules (E). The nine rules ensure the external and internal integrity of the data items. To paraphrase these:
- C1: When an IVP is executed, it must ensure the CDIs are valid.
- C2: For some associated set of CDIs, a TP must transform those CDIs from one valid state to another. Since we must make sure that these TPs are certified to operate on a particular CDI, we must have E1 and E2.
- E1: The system must maintain a list of certified relations and ensure only TPs certified to run on a CDI change that CDI.
- E2: The system must associate a user with each TP and set of CDIs. The TP may access the CDI on behalf of the user if it is "legal." This requires keeping track of triples (user, TP, {CDIs}) called "allowed relations."
- C3: Allowed relations must meet the requirements of "separation of duty." We need authentication to keep track of this.
- E3: The system must authenticate every user attempting a TP. Note that this is per TP request, not per login. For security purposes, a log should be kept.
- C4: All TPs must append to a log enough information to reconstruct the operation. When information enters the system it need not be trusted or constrained (i.e., it can be a UDI). We must deal with this appropriately.
- C5: Any TP that takes a UDI as input may only perform valid transactions for all possible values of the UDI. The TP will either accept (convert to CDI) or reject the UDI. Finally, to prevent people from gaining access by changing the qualifications of a TP:
- E4: Only the certifier of a TP may change the list of entities associated with that TP.
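
A sketch of how E1/E2 (allowed relations) and C4 (logging) might be enforced, with invented names; a real system would also authenticate the user on every TP request, per E3:

    # Illustrative enforcement of Clark-Wilson allowed relations; all names are hypothetical.

    ALLOWED = {
        ("alice", "tp_deposit", frozenset({"account_1"})),   # certified (user, TP, {CDIs}) triple
    }

    def run_tp(user, tp_name, cdis, tp, *args):
        # E1/E2: only a certified triple may run this TP on these CDIs
        if (user, tp_name, frozenset(cdis)) not in ALLOWED:
            raise PermissionError(f"{user} is not certified for {tp_name} on {sorted(cdis)}")
        print(f"AUDIT: {user} ran {tp_name} on {sorted(cdis)}")  # C4: append to a log
        return tp(*args)

    deposit = lambda balance, amount: balance + amount           # stand-in TP
    print(run_tp("alice", "tp_deposit", {"account_1"}, deposit, 100, 25))  # 125
    # run_tp("bob", ...) would raise PermissionError: deny by default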
Security Modes of Operation
- Dedicated
- System High
- Compartmented
- Multilevel
- Trust and Assurance
Secure System Evaluation: TCSEC
- Trusted Computer System Evaluation Criteria (TCSEC) is a United States Government Department of Defense (DoD) standard that sets basic requirements for assessing the effectiveness of computer security controls built into a computer system. The TCSEC was used to evaluate, classify, and select computer systems being considered for the processing, storage, and retrieval of sensitive or classified information.
- The TCSEC, frequently referred to as the Orange Book, is the centerpiece of the DoD Rainbow Series publications. It was initially issued in 1983 by the National Computer Security Center (NCSC), an arm of the National Security Agency, and then updated in 1985.
- The TCSEC was replaced by the Common Criteria international standard, originally published in 2005.
- Policy: The security policy must be explicit, well-defined and enforced by the computer system. There are two basic security policies:
- Mandatory Security Policy: Enforces access control rules based directly on an individual's clearance, authorization for the information and the confidentiality level of the information being sought. Other indirect factors are physical and environmental. This policy must also accurately reflect the laws, general policies and other relevant guidance from which the rules are derived.
  - Marking: Systems designed to enforce a mandatory security policy must store and preserve the integrity of access control labels and retain the labels if the object is exported.
- Discretionary Security Policy: Enforces a consistent set of rules for controlling and limiting access based on identified individuals who have been determined to have a need-to-know for the information.
- Accountability: Individual accountability regardless of policy must be enforced. A secure means must exist to ensure the access of an authorized and competent agent which can then evaluate the accountability information within a reasonable amount of time and without undue difficulty. There are three requirements under the accountability objective:
- Identification: The process used to recognize an individual user.
- Authentication: The verification of an individual user's authorization to specific categories of information.
- Auditing: Audit information must be selectively kept and protected so that actions affecting security can be traced to the authenticated individual.
- The TCSEC defines four divisions: D, C, B and A, where division A has the highest security. Each division represents a significant difference in the trust an individual or organization can place on the evaluated system. Additionally, divisions C, B and A are broken into a series of hierarchical subdivisions called classes: C1, C2, B1, B2, B3 and A1.
- Assurance: The computer system must contain hardware/software mechanisms that can be independently evaluated to provide sufficient assurance that the system enforces the above requirements. By extension, assurance must include a guarantee that the trusted portion of the system works only as intended. To accomplish these objectives, two types of assurance are needed, with their respective elements:
  - Operational Assurance: System Architecture, System Integrity, Covert Channel Analysis, Trusted Facility Management and Trusted Recovery
  - Life-cycle Assurance: Security Testing, Design Specification and Verification, Configuration Management and Trusted System Distribution
Secure System Evaluation: ITSEC
- The Information Technology Security Evaluation Criteria (ITSEC) is a structured set of criteria for evaluating computer security within products and systems. The ITSEC was first published in May 1990 in France, Germany, the Netherlands, and the United Kingdom, based on existing work in their respective countries. Following extensive international review, Version 1.2 was subsequently published in June 1991 by the Commission of the European Communities for operational use within evaluation and certification schemes.
- Levels: E1 – E6
Secure System Evaluation: Common Criteria
- The Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria or CC) is an international standard (ISO/IEC 15408) for computer security certification.
- Common Criteria is a framework in which computer system users can specify their security functional and assurance requirements, vendors can then implement and/or make claims about the security attributes of their products, and testing laboratories can evaluate the products to determine if they actually meet the claims. In other words, Common Criteria provides assurance that the process of specification, implementation and evaluation of a computer security product has been conducted in a rigorous and standard manner.
- Levels: EAL 1 – EAL 7 (Evaluation Assurance Levels)
Certification and Accreditation
- Certification and Accreditation (C&A) is a process for implementing information security. It is a systematic procedure for evaluating, describing, testing and authorizing systems prior to or after a system is in operation.
- Certification is a comprehensive assessment of the management, operational, and technical security controls in an information system, made in support of security accreditation, to determine the extent to which the controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting the security requirements for the system.
- Accreditation is the official management decision given by a senior agency official to authorize operation of an information system and to explicitly accept the risk to agency operations (including mission, functions, image, or reputation), agency assets, or individuals, based on the implementation of an agreed-upon set of security controls.
Open vs. Closed Systems
- Open systems allow users to reuse, edit, manipulate, and contribute to the system's development
  - Open source software is an example of an open system
    - Licensed to the public
  - Freeware is also an example of an open system
- A closed system permits users to use the system only as it is
Some Security Threats
- Buffer Overflow
- Maintenance Hooks
- Time-of-check / time-of-use (TOCTOU) attacks
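
As an illustration of the last item, a sketch of a time-of-check / time-of-use race in Python (the path and scenario are hypothetical): the gap between checking a resource and using it lets an attacker swap the object in between.

    # Illustrative TOCTOU race; the file path is hypothetical.
    import os

    path = "/tmp/report.txt"

    # Vulnerable pattern: check first, use later.
    if os.access(path, os.R_OK):       # time of check
        # ...an attacker can replace the file (e.g., with a symlink) here...
        with open(path) as f:          # time of use: may now open a different object
            data = f.read()

    # Safer pattern: skip the separate check; open and handle failure instead.
    try:
        with open(path) as f:
            data = f.read()
    except OSError:
        data = None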