Information Systems Security
Security Architecture
Domain #5
Hardware Components
 CPU
– Primary Storage
– Control Unit
 Coordinates activities during instruction execution
 Does not process data
– Arithmetic Logic Unit (ALU)
 Performs mathematical and logical functions on data
Memory Types
 Primary Memory (RAM/ROM/EPROM/EEPROM)
 Real Memory
– Available to users
 Cache Memory
– Buffers used to increase performance
– Holds data that is accessed often
 Virtual Memory
– Combination of real and secondary storage
Memory Management
 Keep track of used memory segments
 Assign memory to processes
 Manage swapping
 Memory protection
 Access control
 Control virtual memory addressing
Protection Rings
 Organizes code and components in an
operating system into concentric rings
 Modern OSs use a four-ring model
 Ring 0 – highest privilege – kernel
 Ring 1 – remainder of the OS
 Ring 2 – drivers and utilities
 Ring 3 – applications and programs – user
mode
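As a rough illustration of the ring rule, here is a toy Python sketch (not how a CPU actually enforces rings; all names and constants are invented for this example) that models privilege as a numeric comparison, where a lower ring number means higher privilege.

RING_KERNEL, RING_OS, RING_DRIVERS, RING_USER = 0, 1, 2, 3

def can_invoke(caller_ring: int, required_ring: int) -> bool:
    # Lower ring number = higher privilege; Ring 0 is the kernel.
    return caller_ring <= required_ring

print(can_invoke(RING_USER, RING_KERNEL))    # False: user mode cannot call kernel code directly
print(can_invoke(RING_KERNEL, RING_USER))    # True: kernel-level code may act on less privileged code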
Hardware Bus
 Data Bus
– Transfers instructions and data
– Differs based on architectures
 ISA – 8/16
 EISA – 32
 MCA – 16/32
 VLB – 32
 PCI – 32/64
 AGP - 32
Process and Threads
 Process
– Applications and users run as processes in the OS
– A process can contain several threads of code
– Threads are individual instruction streams
Threads
 Advantages
– Much quicker to create than a process
– Much quicker to switch between threads
– Share data more easily
– Used in browsers and windowing systems
 Disadvantages
– No security between threads
– If one user thread blocks, all are blocked
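A small Python sketch of the points above, with invented names and counts: threads are cheap to create and share data directly, which is also why there is no security boundary between them.

import threading

shared = {"count": 0}          # both threads touch the same object; nothing isolates them
lock = threading.Lock()

def worker():
    for _ in range(100_000):
        with lock:             # the lock is cooperation between threads, not enforced security
            shared["count"] += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared["count"])         # 400000, only because every thread chose to use the lock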
Process States
 Stopped – not running
 Waiting – waiting for interrupt
 Running – being executed by the CPU
 Ready – available and waiting for instruction
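As a minimal sketch, the states above can be modeled as a small table of legal transitions; the allowed moves shown here are a simplifying assumption for illustration, not taken from the source.

ALLOWED = {
    "stopped": {"ready"},
    "ready":   {"running"},
    "running": {"waiting", "ready", "stopped"},
    "waiting": {"ready"},
}

def transition(state: str, new_state: str) -> str:
    # Refuse any move the (assumed) scheduler would not make.
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "stopped"
for event in ("ready", "running", "waiting", "ready", "running", "stopped"):
    state = transition(state, event)
print(state)                   # stopped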
System Functionality
 Multithreading
– Several threads processing at one time
 Multitasking
– Several processes at one time
 Multiprocessing
– Multiple CPUs available
System Security Modes
 Dedicated Security Mode
– All users have clearance and need-to-know to
access all information on the system
– Does not require complex methods of
controlling access between different levels
 Multilevel Security Mode
– All users have clearance but not need-to-know
– Two or more levels of classification
– Data is compartmentalized in containers
Security Modes
 Dedicated Mode
– Single state system
– All have need to know and clearance
 System High Mode
– All have need-to-know for ‘some’ material
 Compartmented Mode
– Not all have access for all information
 Multilevel Mode
– Not all have clearance or need-to-know
Levels of System Trust
 Processes with higher trust can access
more system instructions
 CPU architecture dictates the levels of trust
available and the rights of access
 CPU executes instructions in different states
depending upon the process trust level
– User mode – less trusted
– Privileged mode – most trusted
Trusted Computing Base
 All mechanisms that provide protection for
the system
– Software, firmware, hardware
 Made up of processes that execute in
privileged mode
 Term originated from the Orange Book
System Protection
 Reference Monitor
– Access control concept: an abstract machine
that mediates all accesses
– Controls the relationship between subjects and
objects
 Security Kernel
– Enforces the reference monitor’s rules
– Physical implementation of reference monitor
– Part of TCB concerned with access control
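A minimal Python sketch of the reference monitor concept, assuming an invented rule base: a single function mediates every subject-to-object access, and the security kernel would be the code that actually enforces its decisions.

RULES = {
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def reference_monitor(subject: str, obj: str, operation: str) -> bool:
    # Every access request passes through this single decision point.
    return operation in RULES.get((subject, obj), set())

print(reference_monitor("alice", "payroll.db", "write"))   # False: not in the rule base
print(reference_monitor("bob", "payroll.db", "write"))     # True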
Access Control Models
 Provides rules and structures used to
control access and shows how decisions are
made
 Main components are subjects, objects,
operations, and their relationships
 Goal is to control how objects are accessed
and ensure a security principle
– Confidentiality, integrity
Finite State Machine
 Execution sequence for each possible state
transformation
 Mappings for each state change
 Does not specify protection mechanisms or
means of enforcing model
 If the system starts in a secure state, all
state transitions preserve a secure state, and
it shuts down in a secure state, the system is
considered secure
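A toy sketch of the state machine idea, with invented states and events: the system is judged secure only if it starts in a secure state and every transition keeps it in one.

SECURE_STATES = {"boot", "running", "shutdown"}
TRANSITIONS = {("boot", "login"): "running", ("running", "halt"): "shutdown"}

def run(start, events):
    state = start
    assert state in SECURE_STATES, "did not start in a secure state"
    for event in events:
        state = TRANSITIONS[(state, event)]           # the mapping for this state change
        assert state in SECURE_STATES, f"insecure state reached: {state}"
    return state

print(run("boot", ["login", "halt"]))                 # shutdown: every state on the path was secure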
Information Flow
 Information must flow securely through the
system
– Bell-LaPadula
– Biba
– Clark-Wilson
– Take-Grant
– Access Control Matrix
– Noninterference
Bell LaPadula
 Confidentiality Model
 Information cannot flow to an object of
lesser classification
 Mathematical model that uses set theory to
define access rights
 Maps a subject’s clearance and an object’s
classification and creates a relationship
Rules
 Subjects cannot read data from an object at
a higher security level
– “No Read Up” – simple security property
– “No Write Down” – star (*) property
– Read and write only at the subject’s own
level – strong star property
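A minimal sketch of the Bell-LaPadula read and write checks, assuming simple numeric levels (the level names here are illustrative only).

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def blp_can_read(subject_level, object_level):
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def blp_can_write(subject_level, object_level):
    # Star (*) property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(blp_can_read("secret", "top_secret"))      # False: reading up is denied
print(blp_can_write("secret", "confidential"))   # False: writing down is denied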
Biba
 Integrity Model
– No subject can depend on an object of lesser
integrity
– Based on hierarchical lattice
– Prevents modification of objects by
unauthorized subjects
– Prevents unauthorized modification by
authorized users
Rules of Biba
 “No Write Up” – * (star) integrity axiom
– No writing data to a higher integrity level
 “No Read Down” – simple integrity axiom
– No reading data from a lower integrity level
 Disadvantages
– Does not address confidentiality
– Does not address access control management or
provide a way to change classification levels
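A matching sketch for Biba, which is the integrity dual of Bell-LaPadula; the integrity levels below are invented for illustration.

INTEGRITY = {"untrusted": 0, "user": 1, "system": 2}

def biba_can_read(subject_level, object_level):
    # Simple integrity axiom: no read down.
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def biba_can_write(subject_level, object_level):
    # * (star) integrity axiom: no write up.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

print(biba_can_read("system", "untrusted"))   # False: cannot depend on lower-integrity data
print(biba_can_write("user", "system"))       # False: cannot contaminate higher-integrity data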
Clark - Wilson
 Integrity Model
– Model for commercial integrity
– Requires well-formed transactions and
separation of duties
– Does not use a lattice approach; partitions
objects into programs and data
– Access triple – subject must go through a
program to access and modify data
– Separation of duties with auditing required
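A toy sketch of the Clark-Wilson access triple, with invented users, procedures, and data items: a subject can modify data only through an approved program.

TRIPLES = {
    ("clerk", "post_payment", "ledger"),
    ("auditor", "read_report", "ledger"),
}

def invoke(user, procedure, data_item):
    # The access triple: subject -> approved program -> data item.
    if (user, procedure, data_item) not in TRIPLES:
        raise PermissionError("no access triple authorizes this operation")
    return f"{user} ran {procedure} on {data_item}"

print(invoke("clerk", "post_payment", "ledger"))
# invoke("clerk", "edit_ledger_directly", "ledger") would raise PermissionError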
Non-Interference
 Based on theory where users are separated
into different domains
 Output streams at a lower level remain
unchanged when inputs come from more
dominant (higher) levels
 Subject cannot be influenced by the
behavior of other subjects at higher security
levels
Lattice Based
 Every subject and object relationship has a
partially ordered set with lower and upper
bounds
 Rules are set that dictate how information
can flow from one class to another
– Confidential can flow to secret but secret cannot
flow to confidential
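A small sketch of lattice-based flow control under the assumption that a label is a (level, compartments) pair: information may flow from A to B only when B dominates A.

LEVELS = {"confidential": 1, "secret": 2}

def dominates(b, a):
    # B dominates A when B's level is at least A's and B holds all of A's compartments.
    (b_level, b_comps), (a_level, a_comps) = b, a
    return LEVELS[b_level] >= LEVELS[a_level] and a_comps <= b_comps

confidential = ("confidential", {"crypto"})
secret = ("secret", {"crypto", "nuclear"})

print(dominates(secret, confidential))   # True: confidential may flow to secret
print(dominates(confidential, secret))   # False: secret may not flow to confidential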
Access Control Matrix
 Relational table of subjects and objects
 Specifies the operations and rights allowed
for each subject
 Access Control Lists – DACL, trustees
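A toy access control matrix as a nested mapping (subjects, objects, and rights invented for illustration); an ACL corresponds to one column of this table and a capability list to one row.

matrix = {
    "alice": {"file1": {"read", "write"}, "file2": {"read"}},
    "bob":   {"file1": {"read"}},
}

def allowed(subject, obj, operation):
    # Look up the cell at (subject row, object column).
    return operation in matrix.get(subject, {}).get(obj, set())

print(allowed("alice", "file1", "write"))   # True
print(allowed("bob", "file1", "write"))     # False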
Brewer - Nash
 Also known as “Chinese Wall”
 Mathematical theory used to implement
dynamically changing access permissions
 Defines a wall and develops a set of rules
that ensures no subject accesses objects on
the other side
 Enforces “no conflict of interest” rules
 Allows separation of competitors’ data
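A minimal Chinese Wall sketch with invented company names: the first access a subject makes inside a conflict-of-interest class builds the wall, and competitors in the same class are then denied.

CONFLICT_CLASSES = {"banks": {"BankA", "BankB"}}
history = {}   # subject -> set of companies already accessed

def can_access(subject, company):
    for members in CONFLICT_CLASSES.values():
        if company in members:
            touched = history.get(subject, set()) & members
            if touched and company not in touched:
                return False   # a competitor on the other side of the wall
    history.setdefault(subject, set()).add(company)
    return True

print(can_access("analyst", "BankA"))   # True: first access builds the wall
print(can_access("analyst", "BankB"))   # False: conflict of interest with BankA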
Take Grant
 Mathematical framework for granting and
revoking access authorization
 Analytical tool for auditors to test software
security
 Rules for how users transfer their
permissions to others
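A toy sketch of the take and grant rules on a rights graph (subjects, objects, and rights are invented): "take" lets a subject copy another node's rights, and "grant" lets it pass its own rights along.

rights = {
    ("alice", "bob"): {"take"},
    ("bob", "file"): {"read"},
    ("alice", "carol"): {"grant"},
    ("alice", "printer"): {"use"},
}

def take(taker, target, obj):
    # Take rule: holding "take" over target lets taker copy target's rights over obj.
    if "take" in rights.get((taker, target), set()):
        rights.setdefault((taker, obj), set()).update(rights.get((target, obj), set()))

def grant(granter, target, obj):
    # Grant rule: holding "grant" over target lets granter pass its own rights over obj.
    if "grant" in rights.get((granter, target), set()):
        rights.setdefault((target, obj), set()).update(rights.get((granter, obj), set()))

take("alice", "bob", "file")
grant("alice", "carol", "printer")
print(rights[("alice", "file")])      # {'read'}
print(rights[("carol", "printer")])   # {'use'}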
Trusted Computer System
Evaluation Criteria (TCSEC)
 Developed by the National Computer Security
Center (NCSC)
 Based on the Bell-LaPadula model
 Uses a series of evaluation classes
 “Orange Book”
Requirements of TCSEC
 Security Policy
 Marking – labels associated with objects
 Identification – individual ID of subjects
 Accountability – audit data collected
 Assurance – each mechanism evaluated
 Continuous protection – mechanisms always
protected against unauthorized changes
TCSEC Ratings
 A1 – Verified Protection
 B3, B2, B1 – Mandatory Protection
 C2, C1 – Discretionary Protection
 D – Minimal Security
 Red Book – Trusted Network Interpretation
Layers of TCSEC
 C1 – Discretionary Security Protection
 C2 – Controlled Access Protection
 B1 – Labeled Security Protection
 B2 – Structured Protection (covert storage channels)
 B3 – Security Domains (covert timing channels)
 A1 – Verified Protection
Information Technology Security
Evaluation Criteria (ITSEC)
 Evaluates functionality and assurance
separately
– F1 to F10 for functionality
– E0 to E6 for assurance
 E0 = D
 F1+E1 = C1
 F2+E2 = C2
 F3+E3 = B1
 etc
ITSEC
 Advantages
– More granular approach
– Goes beyond the Orange Book
 Disadvantages
– Increased amount of rating combinations
– Still does not provide all the answers
Common Criteria
 Development began in 1993; later adopted as
ISO/IEC 15408
 TCSEC was too rigid
 ITSEC added too much complexity
 Target of Evaluation (TOE)
 Security Target (ST)
 EALs – EAL1 (functionally tested) through
EAL7 (formally verified design and tested)
Covert Channels
 Timing Channels – convey information by
altering the performance of a system
component in a predictable manner
 Storage Channels – convey information by
writing data to a common storage area
where another process can read it.
 Level B2 addresses covert storage channels
 Level B3 addresses covert timing channels
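A toy, single-process simulation of a covert storage channel (the "slot" stands in for any shared storage attribute): the sender leaks bits by occupying or freeing the slot, and the receiver reads only the slot's state, never the data itself.

shared_slot = []   # stands in for any storage attribute both processes can observe

def receiver():
    # The receiver never sees the data, only whether the slot is occupied.
    return "1" if shared_slot else "0"

def sender(bits):
    observed = []
    for bit in bits:
        shared_slot.clear()
        if bit == "1":
            shared_slot.append("anything")   # occupying the slot signals a 1
        observed.append(receiver())
    return observed

print("".join(sender("1011")))   # 1011 leaked without any direct data transfer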
Certification and Accreditation
 Certification
– 1st phase – comprehensive evaluation of the
security features of an IT system
 Accreditation
– Management formally accepts that the certified
system satisfies their security needs
 Phases: Definition, Verification, Validation,
Post-Accreditation
Other Threats
 Back Doors
 Maintenance Hooks
 Asynchronous Attack – TOC/TOU
 Race Attacks
 Data Validation (Unicode attack)
 Buffer Overflow (use input controls)
 SYN Flood
 Ping of Death
More Attacks
 TCP Session Hijacking
 Web Spoofing
 DNS Poisoning