INF523: Assurance in Cyberspace Applied to
Information Security
Testing
Prof. Clifford Neuman
Lecture 8
2 March 2016
OHE 120
Reading for This Class
• Bishop book, Chapter 23, “Vulnerability Analysis”, pp.
645-660 (penetration testing)
• Analysis Techniques for Information Security, pp. 5-10
(static testing)
• Nathaniel Ayewah, David Hovemeyer, J. David
Morgenthaler, John Penix, William Pugh, Using static
analysis to find bugs, IEEE Software, vol. 25, no. 5, pp.
22–29, Sep./Oct. 2008
• P. Oehlert, Violating assumptions with fuzzing, 2005
(fuzzing/dynamic testing)
• Jose Fonseca et al., Testing and comparing web
vulnerability scanning tools for SQL injection and XSS
attacks, 2007 (vulnerability scanning)
1
“Assurance Waterfall”
[Diagram: development phases (Org. Req’s → Threats → Policy → Security Req’s → Design → Implementation) paired with deployment phases (Version Mgmt → Distribution → Instal. & Config. → Maintenance → Disposal)]
2
Black Box and White Box Testing
• Black box testing
– Tester has no information about the implementation
– Good for testing independence
– Not good for test coverage
– Hard to test individual modules
• White box testing
– Tester has information about the implementation
– Knowledge guides tests
– Simplifies diagnosis of problem
– Can zero in on specific modules
– Possible to have good coverage
– May compromise tester independence
3
Layers of Testing
• Module testing
– Test individual modules or subset of system
• Systems integration
– Test collection of modules
• Acceptance testing
– Test to show that system meets requirements
– Typically focused on functional specifications
4
Outline
• Security testing
• Static testing
• Dynamic testing
• Fuzzing
• Vulnerability scanning
• Penetration testing
5
Security Testing
• A process to find system flaws that would lead to
violation of the security policy
– Find flaws in security mechanisms
– Find flaws that could bypass security mechanisms
• Focus is on security policy, not function
6
Security Testing
• Functional testing: Does system do what it is
supposed to do?
– In the presence of good inputs
• Security testing: Does the system do what it is
supposed to do, and nothing more?
– For good and bad inputs
– E.g., I can only get access to my data after I log in
• But can I get access to only my data?
• Security testing assumes intelligent adversary
– Test functional and non-functional security
requirements
– Test as if you were an attacker
7
Testing Security Mechanisms
• Security mechanisms thought of as “nonfunctional”
– Often not tested during system testing!
• But many security mechanisms do have
functional specifications
• Must test security mechanisms as if they were
the subject of functional testing
– E.g., test identification and authentication
mechanisms
– Do they correctly enforce the policy?
– What if malicious inputs?
– Do they “fail safe”?
8
What to Test in Security Testing
• Violation of assumptions
– About inputs
• Behavior of system with “bad” inputs
• Inputs that violate type, size, range, …
– About environment
– About operational procedures
– About configuration and maintenance
• Often due to
– Ambiguous specifications
– Sloppy procedures
• Special focus on Trust Boundaries
9
Types of Flaws – Implementation Bugs
• Coding errors
– E.g., use of gets() function and other unchecked
buffers
• Logical errors
– E.g., time of check to time of use (“TOCTOU”)
– Race condition where, e.g., authorization changes but access still allowed

Victim:
    if (access("file", W_OK) != 0) { exit(1); }
    fd = open("file", O_WRONLY);
    write(fd, buffer, sizeof(buffer));

Attacker:
    // After the access check:
    symlink("/etc/passwd", "file");
    // Before the open, "file" now points to the password database
10
eBay Password Reset Bug
• Reported Nov 2014 (http://thehackernews.com/2014/09/hacking-ebay-accounts.html)
• Programming error - used wrong “secret code”
11
Types of Flaws – Design Flaws
• Error handling - E.g., failure in insecure states
• Transitive trust issues (typical of DAC)
• Unprotected data channels
• Broken or missing access control mechanisms
• Lack of audit logging
• Concurrency issues (timing and ordering)
• Design flaws are likely hardest to detect
• Usually most critical
• Probably most prevalent
12
A Fundamental, “Unsolvable” Problem
• Fundamental problem: lack of reference monitor
– Entire system (“millions of lines of code”) vulnerable
– Buffer overflow in GUI is as serious as bug in access
control mechanism
– No way to find the numerous flaws in all of that code
• Reference monitor is “small enough to be
verifiable”
– Helps bound testing
• But testing still required for reference monitor
13
Limits of Testing
• “Testing can prove the presence of errors, but
not their absence” – Edsger W. Dijkstra
• How much testing is enough?
– Undecidable
– Never “enough” because never know if found all bugs
– But resources, including time, are finite
• Testing would probably miss eBay flaw, for
example
– Requires deep understanding of flaw and precise test
• Subversion? Find a trap-door? Forget about it.
• Must prioritize
14
Prioritizing Risks and Tests
• Create security misuse cases
– I.e., threat assessment
• Identify security requirements
– Use identified threats with policy to derive reqs
• Perform architectural risk analysis
– Where will I get the biggest bang for my buck?
– Trust boundaries are very interesting here
• Build risk-based security test plans
– Test the “riskiest” things
• Perform the (now limited, but focused) testing
15
Misuse Cases
• “Negative scenarios”
– I.e., threat modeling
• Define what an attacker would want
• Assume level of attacker abilities/skill
– Helps determine what steps are possible and risk
• Imagine series of steps an attacker could take
– Attack-defense tree or requires/provides model
– Or “Unified Modeling Language” (UML)
• Identify potential weak spots and mitigations
16
Example of UML
[UML diagram not reproduced in transcript]
17
Outline
• Security testing
• Static testing
• Dynamic testing
• Fuzzing
• Vulnerability scanning
• Penetration testing
18
Static Testing
• Analyze code (and documentation)
– Usually only source code, but sometimes object
– Program not executed
– Testing abstraction of the system
• Code analysis, inspection, reviews,
walkthroughs
– Human techniques often called “code review”
• Automated static testing tools
– Checks against coding standard (e.g., formatting)
– Coding flaws
– Potentially malicious code
– May also refer to formal proof of code correctness
19
Static Testing Techniques
• Many Static Testing techniques based on
compiler technology
• Some techniques:
– Type analysis
– Abstract Interpretation
– Data-flow analysis
– Taint analysis
20
Type Analysis
• Type analysis
– For languages without strong typing, like JavaScript
– Program analyzed against type constraints
– Each construct has derived type, or expected type
– May have false positives

function onlyWorksOnNumbers(x) {
    return x * 10;
}
onlyWorksOnNumbers('Hello, world!');
21
Abstract Interpretation
• Abstract Interpretation
– Partial execution using an interpreter
– Map variable values to ranges or relations
• E.g., map pointer values to “points-to” relation
– For control or data flow, without performing
calculations
– Abstraction can be sound or unsound
• Sound – never false negatives but may be false
positives
– “Over-abstraction” may include unreachable states
– Usually slower tools
• Unsound – may have false negatives and false
positives
– Over- and Under-abstraction possible
– Time trade-off so faster
22
Data Flow Analysis
• Data-flow analysis
– Gathers information about possible set of variable
values at specific points in the program
– Uses control flow graph (CFG) and lattice theory
– Examples:
• Liveness
• Dead variables
• Uninitialized variables
• Sign analysis
• Lower and upper bounds
1. if b == 4 then
2.     a = 5;
3. else
4.     a = 3;
5. endif
7. if a < 4 then
8. ...

The “reaching” definition of variable “a” at line 7 is the set of assignments a=5 at line 2 and a=3 at line 4.
23
Taint Analysis
• Taint analysis
– Tries to identify variables affected by user input
– Tracks flow of data dependencies in program
– If tainted variables are ever passed to sensitive
functions, flags an error
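To make the idea concrete, here is a toy dynamic version in Python (an illustrative sketch only: real taint analyzers propagate taint statically through the data-flow graph without running the program, and every name below is invented for the example):

import builtins

# Toy taint tracking: values from untrusted sources are wrapped, taint
# propagates through operations, and sensitive sinks reject tainted data.
class Tainted(str):
    """A value derived from untrusted user input."""

def user_input(prompt):
    return Tainted(builtins.input(prompt))   # taint source

def concat(a, b):
    result = str(a) + str(b)
    # taint propagation: if either operand is tainted, so is the result
    if isinstance(a, Tainted) or isinstance(b, Tainted):
        return Tainted(result)
    return result

def run_query(sql):                          # sensitive sink
    if isinstance(sql, Tainted):
        raise ValueError("tainted data reached a sensitive function")
    print("executing:", sql)

name = user_input("name? ")
query = concat("SELECT * FROM users WHERE name = '", concat(name, "'"))
run_query(query)                             # flagged: user input reached the sink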
24
“Lint-like” Tools
• Finds “suspicious” software constructs
– E.g., Variables being used before being initialized
– Divide by zero
– Constant conditions
– Calculations outside the range of a type
• Language-dependent
• Can check correspondence to style guidelines
25
Example Static Testing Tool
• Splint – Modern version of classic “lint” tool
#include <stdio.h>

int main()
{
    char c;
    while (c != 'x');
    {
        c = getchar();
        if (c = 'x')
            return 0;
        switch (c) {
        case '\n':
        case '\r':
            printf("Newline\n");
        default:
            printf("%c", c);
        }
    }
    return 0;
}

Splint's output:
* Variable c used before definition
* Suspected infinite loop. No value used in loop test (c) is modified by test or loop body.
* Assignment of int to char: c = getchar()
* Test expression for if is assignment expression: c = 'x'
* Test expression for if not boolean, type char: c = 'x'
* Fall through case (no preceding break)
26
Limitations of Static Testing
• Lots of false positives and false negatives
• Automated tools seem to make it easy, but it
takes experience and training to use effectively
• Misses many types of flaws
• Won’t find vulnerabilities due to run-time
environment
27
Outline
• Security testing
• Static testing
• Dynamic testing
• Fuzzing
• Vulnerability scanning
• Penetration testing
28
Dynamic Testing
• Test running software in “real” environment
– Contrast with static testing
• Techniques
– Simulation – assess behavior/performance
– Error seeding – bad input, see what happens
• Use extremes of valid/invalid input
• Incorrect and unexpected input sequences
• Altered timing
– Performance monitoring – e.g., real-time memory use
– Stress tests – e.g., abnormally high workloads
29
Limits to Dynamic Testing
• From outside, cannot test all software paths
• Cannot even test all hardware faults
• May not find rare events (e.g., due to timing)
30
Outline
• Security testing
• Static testing
• Dynamic testing
• Fuzzing
• Vulnerability scanning
• Penetration testing
31
Fuzzing
• Tool used by both security testers and attackers
• Form of dynamic testing, usually automated
• Provide many invalid, unexpected, often random
inputs to software
– Extreme limits, or beyond limits, of value, size, type, ...
– Can test command line, GUI, config, protocol, format, file
contents, …
• Observe behavior – if unexpected result, a flaw!
– Crashes or other bad exception handling
– Violations of program state (assertions)
– Memory leaks
• Flaws could conceivably be exploited
• Fix, and re-test
32
Fuzzing Examples
• Testing for integer overflows
– -1, 0, 0x100, 0x3fffffff, 0x7ffffffe, 0x7fffffff, 0xffffffff, etc.
• Testing for buffer overflows
– ‘A’ x Z, where Z is in domain {1, 5, 33, 129, 257, 513, etc.}
• Testing for format string errors
– %s%p%x%d, .1024d, %d%d%d%d, %%20s, etc.
33
Fuzzing Methods
• Mutation-based
– Mutate existing test data, e.g., by flipping bits
• Generation-based
– Generate test data based on models of input
– Use a specification
• Black box – no reference to code
– Useful for testing proprietary systems
• White (or gray) box – use code as a guide of
what to test
• Recursive – enumerate all possible inputs
• Replacive – use only specific values
34
Limits of Fuzzing
• Random sample of behavior
• Usually finds only simple flaws
• Best for rough measure of software quality
– If find lots of problems, better re-work the code
• Also good for regression testing, or comparing
versions
• Demonstrates that program handles exceptions
• Not a comprehensive bug-finding tool
• Not a proof that software is correct
35
Fuzzers
• Lots of different fuzzing programs available
• SPIKE, framework for protocol fuzzing (Linux)
– http://www.immunitysec.com/resources-freesoftware.shtml
– Intro to use: http://resources.infosecinstitute.com/intro-tofuzzing/
• Peach (Windows, Mac, Linux)
– http://sourceforge.net/projects/peachfuzz/
– Data definitions written in XML
• CERT Basic Fuzzing Framework (BFF)
– https://www.cert.org/vulnerability-analysis/tools/bff.cfm
• Or not hard to roll your own, at least for simple
random fuzzing
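As a sketch of what “rolling your own” might look like in Python (assumptions: a known-good seed file named sample.input and a POSIX target at ./target_program that reads stdin; both names are placeholders):

import random
import subprocess

def mutate(data: bytes, flips: int = 8) -> bytes:
    # mutation-based fuzzing: flip a few random bits in the seed input
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] ^= 1 << random.randrange(8)
    return bytes(buf)

seed = open("sample.input", "rb").read()
for i in range(1000):
    case = mutate(seed)
    try:
        proc = subprocess.run(["./target_program"], input=case,
                              capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        continue                              # hang: also worth logging
    if proc.returncode < 0:                   # killed by a signal, e.g., SIGSEGV
        with open("crash_%d.bin" % i, "wb") as f:
            f.write(case)                     # save the input that caused the crash
        print("crash %d: signal %d" % (i, -proc.returncode))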
36
Outline
• Security testing
• Static testing
• Dynamic testing
• Fuzzing
• Vulnerability scanning
• Penetration testing
37
Vulnerability Scanning
• Another tool used by attackers and defenders
alike
• Automated
• Look for flaws using database of known flaws
– Contrast with fuzzing
• As comprehensive as database of vulnerabilities
is
• Different types of vulnerability scanners (examples):
– Port scanner (NMAP)
– Network vulnerability scanner (Nessus)
– Web application scanner (Nikto)
– Database (Scuba)
– Host security audit (Lynis)
38
Vulnerability Scanning Methods
• Passive – probe without any changes
– E.g., Check version and configuration, “rattle doors”
– Do nothing that might crash the system
• Active – attempt to see if actually vulnerable
– Run exploits and monitor results
– Might disrupt, crash, or even damage target
– Always get explicit permission (signed agreement)
before running active scans
39
Example Nessus Output
Taking the following actions across 10 hosts would resolve 20% of the vulnerabilities
on the network:
Action to take (Vulns / Hosts):
• OpenSSH LoginGraceTime / MaxStartups DoS: Upgrade to OpenSSH 6.2 and review the associated server configuration settings. (12 / 3)
• Samba 3.x < 3.5.22 / 3.6.x < 3.6.17 / 4.0.x < 4.0.8 read_nttrans_ea_list DoS: Either install the patch referenced in the project's advisory, or upgrade to version 3.5.22 / 3.6.17 / 4.0.8 or later. (9 / 1)
• Dropbear SSH Server < 2013.59 Multiple Vulnerabilities: Upgrade to Dropbear SSH 2013.59 or later. (6 / 3)
• MS05-051: Vulnerabilities in MSDTC Could Allow Remote Code Execution (902400) (uncredentialed check): Microsoft has released a set of patches for Windows 2000, XP and 2003. (4 / 1)
• Firewall UDP Packet Source Port 53 Ruleset Bypass: Either contact the vendor for an update or review the firewall rules settings. (4 / 2)
40
Limits of Vulnerability Scanning
• Passive scanning only looks for known
vulnerabilities
– Or potential vulnerabilities (e.g., based on
configuration)
• Passive scanning often simply checks versions
– then reports known vulnerabilities in those versions
– and encourages updating
• Active scanning can crash or damage systems
• Active scanning potentially requires a lot of
“hand-holding”
– Due to unpredictable system behavior
– E.g., system auto-log out
41
Outline
• Security testing
• Static testing
• Dynamic testing
• Fuzzing
• Vulnerability scanning
• Penetration testing
42
Penetration Testing
• Actual attacks on a system carried out with the goal of
finding flaws
– Called a “test”, when used by defenders
– Called an “attack” when used by attackers
• Human, not automated
• Usually goal driven – stop when goal is achieved
• Step-wise (like requires/provides)
– When find one way to achieve a step, go on to next step
• Identifies vulnerabilities that may be impossible for
automated scanning to detect
• Shows how different low-risk vulns can be combined into
successful exploit
• Same precautions as for other forms of active testing
– Explicit permission; don’t interfere with production
43
Flaw-Hypothesis Methodology
• Five steps:
1. Information gathering
– Become familiar with the system’s design, implementation, operating procedures, and use
2. Flaw hypothesis
– Think of flaws the system might have
3. Flaw testing
– Test for exploitable flaws
4. Flaw generalization
– Generalize vulnerabilities that can be exploited
5. Flaw elimination (often skipped)
44
Limits of Penetration Testing
• Informal, non-rigorous, semi-systematic
– Depends on skill of testers
• Not comprehensive
– Proves at least one path, not all
– When find one way to achieve a step, go on to next step
• Does not prove lack of path if unsuccessful
• But, performed by experts
– Who are not the system developers
– Who think like attackers
• Tests developer and operator assumptions
– Helps locate shortcomings in design and implementation
– Probably only test technique that would find eBay bug
45
Reading for Next Time
• The Design and Implementation of Tripwire: A
File System Integrity Checker, Gene Kim, 1993
– Really for this time, but I didn’t mention it last time
46
INF523: Assurance in Cyberspace Applied to
Information Security
Secure Operation
Prof. Clifford Neuman
Lecture 9
9 March 2016
OHE 120
“Assurance Waterfall”
[Diagram: development phases (Org. Req’s → Threats → Policy → Security Req’s → Design → Implementation) paired with deployment phases (Version Mgmt → Distribution → Instal. & Config. → Maintenance → Disposal)]
48
What’s Left?
• Secure distribution
• Secure installation and configuration
• Patch management
• System audit and integrity monitoring
• Secure disposal
• For very high-assurance systems:
– Covert channel analysis
– Formal (mathematical) methods:
• Specification and proofs
• FSPM, FTLS, DTLS
49
“Assurance Waterfall”
[Diagram: development phases (Org. Req’s → Threats → Policy → Security Req’s → Design → Implementation) paired with deployment phases (Version Mgmt → Distribution → Instal. & Config. → Maintenance → Disposal)]
50
Secure Distribution
• Problem: Integrity of distributed software
– How can you “trust” distributed software?
– Watch out for subversion!
– How is this accomplished for iOS?
• Hint: It is in the news this week.
• Is this the actual program from the vendor?
• … or did someone substitute or tamper with it?
– Who might want to do that?
51
Checksums
• Compare hashes on downloaded files with
published value (e.g., on developer’s web site)
– If values match, good to go
– If values do not match, don’t install!
• Often download site different from publisher
– So checksum is control on distribution
• Use good hash algorithms
– MD5 – compromised (can reliably make duplicates)
– SHA-1 – no demonstration of compromise, but feared
– SHA-256 (aka SHA-2) still OK
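For example, verifying a download against a published SHA-256 value, sketched in Python (hashlib is in the standard library; the filename and expected digest below are placeholders):

import hashlib

expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

h = hashlib.sha256()
with open("package.tar.gz", "rb") as f:
    for chunk in iter(lambda: f.read(65536), b""):   # hash in chunks
        h.update(chunk)

if h.hexdigest() == expected:
    print("checksum matches: OK to install")
else:
    print("checksum MISMATCH: do not install!")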
52
Are Checksums Reliable?
• Don’t run install from distribution point
– Download, calculate and compare checksum first
• Make sure connected to right hash source
– What if visit spoofed site?
– How do you know you are on the right site?
• What if download file and checksum from same site?
– What use is the checksum?
• Make sure connection to hash source is tamperproof
– What if MITM attack?
– How do you know your connection hasn’t been compromised?
53
Cryptographic Signing
• Solves checksum reliability problems?
• Typically uses PKI cryptography
• Signing algorithm:
– Calculate checksum (hash) on object
– Encrypt checksum using signer’s private key
– Attach seal to object (along with certificate of signer)
• Verification algorithm:
– Calculate checksum on object
– Decrypt encrypted checksum using signers’ public
key
– Compare calculated and decrypted checksums
54
Cryptographic Signing
[Signing and verification diagram not reproduced] Source: Wikipedia
55
Cryptographic Signing
• Solves checksum reliability problems?
• Typically uses public/private key cryptography
• Signing algorithm:
– Calculate checksum (hash) on object
– Encrypt checksum using signer’s private key
– Attach seal to object (along with certificate of signer)
• Verification algorithm:
– Calculate checksum on object
– Decrypt encrypted checksum using signers’ public key
– Compare calculated and decrypted checksums
• Missing step: Check signer’s certificate
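A sketch of the two algorithms using the third-party Python cryptography package (one plausible library choice, not one the lecture prescribes; in this API the hashing and padding happen inside sign/verify):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # distributed in the signer's certificate

package = b"contents of the software package"

# Signing: hash the object and encrypt the digest with the private key,
# producing the "seal" attached to the object.
seal = private_key.sign(package, padding.PKCS1v15(), hashes.SHA256())

# Verification: recompute the hash and compare against the decrypted seal.
try:
    public_key.verify(seal, package, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature INVALID: package may have been tampered with")

As the slide notes, this proves the seal matches the object, but a real verifier must still check the signer's certificate (omitted here).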
56
Do You Trust the Certificate?
• You trust a source because the calculated
checksum matches the checksum in the seal
• Certificate contains signer’s public key
• You use public key to decrypt seal
• How do you know that signer is trustworthy?
• Certificates (like for SSL), testify as to signer
identity
• Based on credibility of certificate authority
• But what if fake certificate?
– E.g., Stuxnet
57
Secure Distrib in High-Assurance
System
• E.g., GTNP FER (page 142)
– Based on cryptographic seals and data encryption. All
kernel segments are encrypted and sealed.
Formatting information on distributed volumes is
sealed but not encrypted. Keys to check seals and
decrypt are shipped separately [i.e., sent out of band; no
certification authority].
– Hardware distribution through authenticator for each
component, implemented as cryptographic seal of
unique identifier of component, such as serial number
of a chip or checksum on contents of a PROM
[Physical HW seal and checked by SW tool]
58
More on GTNP Secure Distribution
• Physical seal on HW to detect tampering
• Install disk checks HW “root of trust” during
install
– Will only install on specific system
• System Integrity checks at run-time
• Multi-stage boot:
– PROM checks checksum of boot loader
– Boot loader checks checksum of kernel
59
“Assurance Waterfall”
[Diagram: development phases (Org. Req’s → Threats → Policy → Security Req’s → Design → Implementation) paired with deployment phases (Version Mgmt → Distribution → Instal. & Config. → Maintenance → Disposal)]
60
Secure Installation and Configuration
• Evaluated, high-assurance systems come with
documentation and tools for secure
configuration
• Lower-assurance systems have less guidance
• Usually informal checklists
– Benchmarks
– Security Technical Implementation Guides (STIGs)
• Based on “best practices”
– E.g., “change default admin password”
– No formal assessment of effectiveness
• Not based on security policy model
61
E.g., Microsoft Baseline Security
Analyzer
• http://www.microsoft.com/en-us/download/details.aspx?id=7558
• Standalone security and vulnerability scanner
• Helps determine security state
– Missing patches
– Microsoft configuration recommendations
• Some of the checks it does:
– Administrative vulnerabilities
– Weak passwords
– Presence of known IIS and SQL vulnerabilities
62
STIGS
• Security Technical Implementation Guides
(STIGs)
• E.g., https://web.nvd.nist.gov/view/ncp/repository
– (Need SCAP tool to read them)
• Based on “best practices”
• Not based on security policy model
63
Security Content Automation Protocol
• Security Content Automation Protocol (SCAP)
– Tools can automatically perform configuration
checking using XML checklist
• Example rule for Windows 7: [XML rule screenshot not reproduced]
64
Configuration Management Systems
• Centralized tools and databases to manage
configs
• Ideally:
– Complete list of systems
– Complete list of software
– Complete list of versions
• Logs status and changes
• Can automatically push out patches/changes
• Can detect unauthorized changes
• E.g., Windows group policy management
• For more info: https://www.sei.cmu.edu/productlines/frame_report/config.man.htm
65
Example for High Assurance System
• E.g., GTNP FER (p. 102)
– System Maintenance Utility – Used to define physical
disk partitions, define RAM disks, allocate logical
volumes to disk partitions, and modify physical device
parameters
– System Generation Utility – Used to format volumes,
set volumes' read-only attribute, establish links
between volumes and mount segments, define
system resource limits used by the kernel, and define
configuration and initial environment of initial TCB
processes
• Mistakes can make system unusable, but not
violate MAC security policy
66
Certification and Accreditation
• Evaluated systems are certified
– Under specific environmental criteria
– (e.g., for TCSEC, criteria listed in Trusted Facility
Manual)
• But environmental criteria must be satisfied for
accreditation
– E.g., security only under assumption that network is
physically isolated
– If instead use public Internet, cannot be accredited
67
Operational Environment and Change
• Must “configure” environment
• Not enough to correctly install and configure
a system if the environment is out of spec
• What if the system and environment start out
correctly configured, but then change?
68
“Assurance Waterfall”
[Diagram: development phases (Org. Req’s → Threats → Policy → Security Req’s → Design → Implementation) paired with deployment phases (Version Mgmt → Distribution → Instal. & Config. → Maintenance → Disposal)]
69
Maintenance
• System is installed and
configured correctly
• Environment satisfies
requirements
• Will they stay that way?
• Maintenance needs to
1. Preserve known, secure
configuration
2. Permit necessary configuration
changes
• E.g., patching
70
Patch Management
• All organizations use low-assurance systems
• Low-assurance systems have lots of bugs
• A “patch” is a security update to fix vulnerabilities
– Maybe to fix bugs introduced in last patch
• Constant “penetrate-and-patch” cycle
– Must constantly acquire, test, and install patches
• Patch management:
– Strategy and process of determining
• what patches should be applied,
• to which programs and systems, and
• when
71
Risk of Not Applying Patches
• Ideally, install patches ASAP
• Risk goes way up when patches are not installed
– System then has known vulnerabilities
– “Assurance” of system is immediately very low
– Delay is dangerous – live exploits often within hours
• But is there risk of installing patches too soon?
72
Patch Management Tradeoffs
• Delay means risk
• But patches may break applications
– Custom applications or old, purchased applications
• Patches may even break the system
– Microsoft, for example, “recalls” patches
– (Microsoft Recalls Another Windows 7 Update Over
Critical Errors http://www.techlicious.com/blog/faulty-windows-7-updatekb3004394/)
• Must balance the two risks
– Sad fact: Security often loses in these battles
– Must find other mitigating controls
73
Patch Testing and Distribution
• Know what patches are available
• Know what systems require patching
• Test patches before installing
– On non-production systems
– Test as completely as possible with operational
environ.
• Distribute using signed checksum
– Watch out for subversion, even inside the
organization
74
Challenges in Patch Management
• NIST Guide to Patch Management Technologies
(http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-40r3.pdf)
• Timing, prioritization, and testing
• Ideally, deploy immediately
– But must test first! Possible side-effects
– Testing takes time and resources
– May require rebooting/restarting, so more delay
• Vendors may bundle patches so less frequent
– But then window of exposure is much longer
• Difficult to keep track of installed patches
– System vuln scanners or config mgmt. tools can help
75
Patching High-Assurance Systems
• Distribution same as for original system:
– Cryptographic seals and data encryption
– Keys to check seals and decrypt are shipped
separately
• Advantage of high-assurance system:
– Lots of effort to “get it right” from the beginning
– Modularization, layering, proper testing, FSPM, etc.
• No TCSEC Class A1 system ever needed
security patch (per Roger Schell)
76
Preserve Known, Secure Configuration
• Two steps:
1. Document that installation and initial configuration are correct
– Don’t forget environment
– Update documentation as necessary after patching
2. Periodically check that nothing has changed in system (or environment)
– Compare results of check to documentation
77
System Audit and Integrity Monitoring
• Static audit: scan systems and note
discrepancies
– Missing patches
– Mis-configurations
– Changed, added, or deleted system files
– Changed, added, or deleted applications
– Added or deleted systems!
• Dynamic system integrity checking
– Same as static, but continuous
• Example: Tripwire (http://www.tripwire.com/)
78
Tripwire
• Used to create checksums of
– user data,
– executable programs,
– configuration data,
– authorization data, and
– operating system files
• Saves database
• Periodically calculates new checksums
• Compares to database to detect unauthorized or
unexpected changes
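A toy version of the idea in Python (illustrative only; real Tripwire also records permissions, owners, and other inode data, and protects its own database; the path /etc is just an example):

import hashlib, json, os

def checksum(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def scan(root):
    # walk the tree and record a checksum for every file
    db = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            db[path] = checksum(path)
    return db

def baseline(root, dbfile="baseline.json"):
    with open(dbfile, "w") as f:
        json.dump(scan(root), f)             # save the known-good state

def audit(root, dbfile="baseline.json"):
    with open(dbfile) as f:
        old = json.load(f)
    new = scan(root)
    for path in sorted(old.keys() | new.keys()):
        if path not in new:
            print("deleted:", path)
        elif path not in old:
            print("added:  ", path)
        elif old[path] != new[path]:
            print("changed:", path)

baseline("/etc")   # run once on the known-good system
audit("/etc")      # run periodically; unexpected changes need investigation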
79
Continuous Monitoring
• Static audit is good, but systems may be out of
compliance almost immediately
• Goal: Real-time detection and mediation
– Sad reality: minutes to days to detect, maybe years to resolve
• Need to automate monitoring
• See, e.g.,
– SANS Whitepaper:
http://www.sans.org/reading-room/whitepapers/analyst/continuous-monitoring-is-needed-35030
– NIST 800-137 Information Security Continuous Monitoring
(ISCM) for Federal Information Systems and Organizations
http://csrc.nist.gov/publications/nistpubs/800-137/SP800-137-Final.pdf
80
Inventory of Systems and Software
• IT operations in constant state of flux
– New services, legacy hardware and software, failure
to follow procedures and document changes
• Make a list of authorized systems, software, and
versions (and patches)
– Create baseline
– Discovery using administrative efforts, active and
passive technical efforts
• Regularly scheduled scans to look for deviations
– Continuously update as new approved items added or
items deleted
81
Other things to Monitor
• System configurations
• Network traffic
• Logs
• Vulnerabilities
• Users
• To manage workload:
– Determine key assets
– Prioritize alerts
82
System Integrity in High Assurance
System
• E.g., GTNP FER (p. 121-123)
– HW and SW integrity tests at boot time
– Continuously running diagnostic tests (system idle)
• Obviously standalone
• Not part of a larger networked environment
83
“Assurance Waterfall”
[Diagram: development phases (Org. Req’s → Threats → Policy → Security Req’s → Design → Implementation) paired with deployment phases (Version Mgmt → Distribution → Instal. & Config. → Maintenance → Disposal)]
84
Secure Disposal Requires Attention
• Delete sensitive data on systems before
disposal
– Not always obvious where media is
• E.g., copy machines have hard drives
http://www.cbsnews.com/news/digital-photocopiers-loaded-with-secrets/
• E.g., mobile phones not properly erased
http://www.theguardian.com/technology/2010/oct/12/mobile-phones-personal-data
– 50% of second-hand mobile phones
contain personal data
85
Secure Disposal
• Use proper disposal techniques
– E.g., shred drives or other storage
media for best results
– Degaussing of magnetic media not enough
– SSDs even harder to erase
86
Reading for Next Time
• Bishop book, Chapter 17 Confinement Problem
• Shared Resource Matrix Methodology: An
Approach to Identifying Storage and Timing
Channels, Richard Kemmerer, 1983
• Covert Flow Trees: A Visual Approach to
Analyzing Covert Storage Channels, Richard
Kemmerer, 1991
• An Entropy-Based Approach to Detecting Covert
Timing Channels, Steven Gianvecchio and
Haining Wang, 2011
87
INF523: Assurance in Cyberspace Applied to
Information Security
Covert Channel Analysis
Prof. Clifford Neuman
Lecture 10
23 March 2016
OHE 120
“Assurance Waterfall”
[Diagram: the assurance waterfall (Org. Req’s, Threats, Policy, Security Req’s, Design, Implementation; Version Mgmt, Distribution, Instal. & Config., Maintenance, Disposal) annotated with assurance activities: FSPM, FTLS, intermediate spec(s), proof, code correspondence, covert channel analysis, secure distribution, secure install & config, secure coding, patching and monitoring, secure disposal]
89
Reading for This Time
• Bishop book, Chapter 17 Confinement Problem
• Shared Resource Matrix Methodology: An
Approach to Identifying Storage and Timing
Channels, Richard Kemmerer, 1983
• Covert Flow Trees: A Visual Approach to
Analyzing Covert Storage Channels, Richard
Kemmerer, 1991
• An Entropy-Based Approach to Detecting Covert
Timing Channels, Steven Gianvecchio and
Haining Wang, 2011
90
Covert Channels – TCSEC Definition
• A communication channel that allows a process
to transfer information in a manner that violates
the system's security policy.
– Source: NCSC-TG-030 A Guide to Understanding Covert
Channel Analysis of Trusted Systems (“light pink book”)
91
Covert Channels – Better Definition
• Given a nondiscretionary (mandatory) security
policy model M and its interpretation I(M) in an
operating system, any potential communication
between two subjects I(Sh) and I(Si) of I(M) is
covert if and only if any communication between
the corresponding subjects Sh and Si of the
model M is illegal in M.
– Source: C.-R. Tsai, V.D. Gligor, and C.S. Chandersekaran, “A formal
method for the identification of covert storage channel in source
code”, 1990
92
Observations
• Covert channels are irrelevant for DAC policies because
Trojan Horse can leak information via valid system calls
and system can’t tell what is illegitimate
– Covert channel analysis only useful for trusted systems
• A system can correctly implement (interpret) a
mandatory security policy model (like BLP) but still not
be secure due to covert channels (violates metapolicy)
– E.g., protects access to objects but not to shared resources
• Covert channels apply to integrity as much as secrecy
– E.g., don’t want low-integrity user to be able to influence high-integrity application through covert channel
93
Two Types of Covert Channels TCSEC
• Storage channel “involves the direct or indirect
writing of a storage location by one process [i.e., a
subject of I(M)] and the direct or indirect reading of
the storage location by another process.”
• Timing channel involves a process that “signals
information to another by modulating its own use
of system resources (e.g., CPU time) in such a
way that this manipulation affects the real
response time observed by the second process.”
– Source: TCSEC
94
Other Attributes Used in Covert
Channels
• Timing: amount of time a computation took
• Implicit: control path a program takes
• Termination: does a computation terminate?
• Probability: distribution of system events
• Resource exhaustion: is some resource depleted?
• Power: how much energy is consumed?
• Any time SL can detect varying results that
depend on actions by SH, that could form a
covert channel
95
Storage Channel Example
• Attempted access by SL to a high level resource
returns one of two error messages: Resource
not found or Access denied. By modulating the
status of the resource, SH can send a bit of
information on each access attempt by SL.
• This is called a covert storage channel because
SH is recording information within the system
state.
96
Storage Channel Example, cont’d
• Consider a simple system that has READ and
WRITE operations with the following semantics:
– READ (S, O): if object O exists and LS ≥ LO, then
return its current value; otherwise, return a zero
– WRITE (S, O, V): if object O exists and LS ≤ LO, change its value to V; otherwise, do nothing
• These operations pretty clearly are acceptable
instances of READ and WRITE for a BLP
system
Source: Bill Young, Univ of Texas
97
Storage Channel Example, cont’d
• Add two new operations, CREATE and
DESTROY to the system, with the following
semantics:
– CREATE (S, O): if no object with name O exists
anywhere on the system, create a new object O at
level LS ; otherwise, do nothing
– DESTROY (S, O): if an object with name O exists and
the LS ≤ LO, destroy it; otherwise, do nothing
• These operations seem to satisfy the BLP rules,
but are they “secure”?
98
Storage Channel Example
In this system, a high level subject SH can signal one bit of information to a low level subject SL as follows [steps shown in a slide table; the two “Example Exploit” slides below spell them out]. In the first case, SL sees a value of 0; in the second case, SL sees a value of 1. Thus, SH can signal one bit of information to SL by varying its behavior.
99
Example Exploit
• To send 0:
– High subject creates high object
– Recipient tries to create same object but at low
• Creation fails, but no indication given
– Recipient gives different subject type permission to read, write
object
• Again fails, but no indication given
– Subject writes 1 to object, reads it
• Read returns 0
100
Example Exploit
• To send 1:
– High subject creates nothing
– Recipient tries to create same object but at low
• Creation succeeds as object does not exist
– Recipient gives different subject type permission to read, write
object
• Again succeeds
– Subject writes 1 to object, reads it
• Read returns 1
101
Another Example Storage Channel
• Assume multi-level Unix
• Removal of non-empty directories in Unix is prohibited
• High-level subject can signal a low-level subject simply by manipulating the contents of the high-level directory
• What secure system have we studied that also had a storage object hierarchy? How did it avoid this problem?
• Multics permitted the removal of non-empty directories
Source: NCSC-TG-030 A Guide to Understanding Covert Channel
Analysis of Trusted Systems (“light pink book”)
102
Example Timing Channels
• CPU quanta is shared resource
• High signals low by amount of CPU time it uses
• First example:
– Low counts time between its quanta
– Long (>= T) equals 1
– Short (< T) equals 0
• Second example:
– High runs or not each quantum
– High runs equals 1
– High doesn’t run equals 0
103
Another Example Timing Channel
• Developed by Paul Kocher
• This computes x = az mod n, where z = z0 … zk–1
x := 1; atmp := a;
for i := 0 to k–1 do begin
    if zi = 1 then
        x := (x * atmp) mod n;
    atmp := (atmp * atmp) mod n;
end
result := x;
• Length of run time related to number of 1 bits in
z
104
Storage or Timing Channel?
• Processes H and L are not allowed to communicate,
but they share access to a disk drive. The scanning
algorithm services requests in the order of which
cylinder is currently closest to the read head.
• Process H either accesses cylinder 140 or 160
• Process L requests accesses on cylinders 139 and
161
• Thus, L receives values from 139 and then 161, or
from 161 and then 139, depending on H’s most
recent read
• Is this a timing or storage channel? Neither? Both?
105
Timing Channel
• Timing or storage?
– Usual definition → storage (no timer, clock)
• Modify example to include timer
– L uses this to determine how long requests take to
complete
– Time to seek to 139 < time to seek to 161 → 1; otherwise, 0
• Channel works same way
– Suggests it’s a timing channel; hence our definition
• Relative ordering channels are timing channels
106
Implicit Channel
• An implicit channel is one that uses the control flow
of a program. For example, consider the following
program fragment:
H := H mod 2;
L := 0;
if H = 1 then L := 1 else skip;
• The resulting value of L depends on the value of H.
• Language-based information flow tools can check
for these kinds of dependencies in programming
languages
107
Reading for Next Time
• D2L “Readings” folder:
– NCSC-TG-030 A Guide to Understanding Covert
Channel Analysis of Trusted Systems (“light pink
book”), pp. 1-74
108
Side Channel vs. Covert Channel
• Covert channel
– Intentional use of available channel
– Intention to conceal its existence
• Side channel
– Unintentional information leakage due to
characteristics of the system operation
– E.g., malicious VM gathering information about
another VM on the same HW host
• Share CPU, RAM, cache, etc.
• This really can happen:
Yinqian Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2012. Cross-VM side channels and their use to extract private keys. In Proc. of the 2012 ACM Conference on Computer and Communications Security (CCS '12). ACM, New York, NY, USA, 305-316. DOI=10.1145/2382196.2382230
109
Covert Channels in the Real World
• Cloud IaaS covert channel
– Side channel on the previous slide combined with
encoding technique (anti-noise) and synchronization
– Trojan horse on sending VM can signal another VM
• Trojan horse in your network stealthily leaking
data
– Hidden in fields of packet headers
– Hidden in timing patterns of packets
110
Covert Storage Channel in Network
Packets
• Many unused packet header (e.g., IP and TCP)
fields
– E.g., IP packet identification field
– TCP initial sequence number field
– TCP acknowledged sequence number field
[TCP and IP header diagrams not reproduced]
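For instance, a sender might hide one byte per packet in the 16-bit IP identification field. A sketch using the third-party scapy package (an assumed tool choice; it requires root privileges and a lab network you are authorized to test on, and 192.0.2.10 is a documentation address):

from scapy.all import IP, TCP, send

SECRET = b"exfil"
for byte in SECRET:
    # each packet looks like an ordinary SYN; the covert data rides
    # in the IP "id" field
    send(IP(dst="192.0.2.10", id=byte) / TCP(dport=80, flags="S"),
         verbose=False)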
111
Covert Timing Channel in Network
Packets
• Can use timing interval (quantum)
• Can use varying inter-packet delays
• More sophisticated attacks look like normal
traffic
– E.g., size or number of packet “bursts”
Source: Xiapu Luo; Chan, E.W.W.; Chang, R.K.C., "TCP covert timing channels: Design and detection," Dependable Systems and Networks With FTCS and DCC, 2008. DSN
2008. IEEE International Conference on , vol., no., pp.420,429, 24-27 June 2008, doi: 10.1109/DSN.2008.4630112
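A crude inter-packet-delay channel can be sketched with standard sockets (constants and addresses are illustrative; a real channel needs encoding that survives network jitter, as the cited paper discusses):

import socket, time

SHORT, LONG = 0.1, 0.5          # delay meaning 0 vs. 1, in seconds

def send_bits(bits, addr=("192.0.2.10", 9999)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"x", addr)              # reference packet to start timing
    for bit in bits:
        time.sleep(LONG if bit else SHORT)
        sock.sendto(b"x", addr)          # packet contents are irrelevant

def receive_bits(count, port=9999):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    bits, last = [], None
    while len(bits) < count:
        sock.recv(1)
        now = time.monotonic()
        if last is not None:
            # classify the gap between consecutive arrivals
            bits.append(1 if now - last > (SHORT + LONG) / 2 else 0)
        last = now
    return bits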
112
Note the Implicit Mandatory Policy
• May enforce only DAC inside the system
• But still have mandatory policy with two
clearances:
– Inside, “us”
– Outside, “them”
• Covert channel exfiltrates data from “us” to
“them”
• So covert channels of interest for security even
in systems that use DAC policy internally
113
Structure of a Covert Channel
• Sender and receiver must synchronize
• Each must signal the other that it has read or
written the data
• In storage channels, 3 variables, abstractly:
– Data variable used to carry data
– Sender-receiver synchronization variable (ready)
– Receiver-sender synchronization variable (finished)
• Write-up is allowed, so may be legitimate data flow
• In timing channels, synchronization variables
replaced by observations of a time reference
114
Example of Synchronization
• Processes H, L not allowed to communicate
– But they share a file system
• Communications protocol:
– H sends a bit by creating a file called 0 or 1, then a
second file called send
• H waits until send is deleted before repeating to send another
bit
– L waits until file send exists, then looks for file 0 or 1;
whichever exists is the bit
• L then deletes 0, 1, and send and waits until send is
recreated before repeating to read another bit
• Creation and deletion of send are the
synchronization variables
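The protocol above, written out with files in a shared directory (an illustrative Python sketch; the MLS labels are abstracted away, the directory path is a placeholder, and H and L would run as separate processes):

import os, time

SHARED = "/tmp/shared"

def h_send(bits):                            # high-level sender
    for bit in bits:
        open(os.path.join(SHARED, str(bit)), "w").close()   # file "0" or "1"
        open(os.path.join(SHARED, "send"), "w").close()     # ready signal
        while os.path.exists(os.path.join(SHARED, "send")):
            time.sleep(0.01)                 # wait until L acknowledges

def l_receive(count):                        # low-level receiver
    bits = []
    while len(bits) < count:
        if os.path.exists(os.path.join(SHARED, "send")):
            bits.append(1 if os.path.exists(os.path.join(SHARED, "1")) else 0)
            for name in ("0", "1", "send"):  # finished signal: clean up
                try:
                    os.remove(os.path.join(SHARED, name))
                except FileNotFoundError:
                    pass
        time.sleep(0.01)
    return bits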
115
Example of Synchronization
• Recall the Create/Delete object calls channel
• How would you implement covert channel synchronization in this system?
116
Covert Channel Characteristics
• Existence: Is a channel present?
• Bandwidth: Amount of information that can be
transmitted (bits per second)
• Noisiness: How much loss or distortion in the
channel?
117
Noisy vs. Noiseless Channels
• Noiseless: covert channel uses
resource available only to sender
and receiver
• Noisy: covert channel uses resource
available to others as well as to
sender and receiver
– E.g., other processes moving disk head
in earlier example
– Extraneous information is “noise”
– Receiver must filter noise to be able to
read sender’s “signal”
118
Objectives of Covert Channel Analysis
1. Detect all covert channels
– Not generally possible
– Find as many as possible
2. Eliminate them
– By modifying the system implementation
– Also may be impossible, or impractical
3. Reduce bandwidth of remaining channels
– E.g., by introducing noise or slowing the time reference
4. Monitor any that still exceed the acceptable
bandwidth threshold
– Look for patterns that indicate channel is being used
– I.e., intrusion detection
119
Noise and Filters
• If can’t eliminate channel, try to
reduce bandwidth by
introducing noise
• But filters and encoding can be
surprisingly effective
– Need a lot of carefully designed
noise to degrade channel
bandwidth
– Designers often get this wrong
• And added noise may
significantly reduce system
performance
120
Step #1: Detection
• Manner in which resource is shared controls
who can send, receive using that resource
– Shared Resource Matrix Methodology
– Covert flow trees
– Non-interference
121
Covert Storage Channels, encore
Several conditions must hold for
there to be a covert storage channel:
1. Both sender and receiver must have
access to some attribute of a shared
object
2. The sender must be able to modify
the attribute
3. The receiver must be able to observe
(reference) that attribute
4. Must have a mechanism for initiating
both processes and sequencing their
accesses to the shared resource
122
SRMM
• Technique developed by Richard Kemmerer at
UCSB
• Build a table describing system commands and their
potential effects on shared attributes of objects
– An R means the operation “References” (provides
information about) the attribute, under some
circumstances.
– An M means the operation “Modifies” the attribute, under
some circumstances
Attributes       READ    WRITE    DESTROY    CREATE
File existence   R                M          M
File size        R       M       M          M
File level       R                M          M
123
Using SRMM
If you see an R and M in the same
row, that indicates a potential
channel. Why potential?
SRMM doesn’t identify covert
channels, but suggests where to
look for them
Any shared resource matrix is for a
specific system. Other systems may
have different semantics for the
operations
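To see how the matrix drives the analysis, here is a small Python rendering of the previous slide's matrix that flags rows containing both an R and an M (the values mirror that example; the code itself is illustrative):

# shared resource matrix: attribute -> {operation: "R" / "M" / ""}
srm = {
    "file existence": {"READ": "R", "WRITE": "",  "DESTROY": "M", "CREATE": "M"},
    "file size":      {"READ": "R", "WRITE": "M", "DESTROY": "M", "CREATE": "M"},
    "file level":     {"READ": "R", "WRITE": "",  "DESTROY": "M", "CREATE": "M"},
}

for attribute, ops in srm.items():
    modifiers   = [op for op, v in ops.items() if "M" in v]
    referencers = [op for op, v in ops.items() if "R" in v]
    if modifiers and referencers:    # R and M in the same row: look here
        print("potential channel via %s: modified by %s, referenced by %s"
              % (attribute, modifiers, referencers))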
124
SRMM Subtlety
• Suppose you have the following operation:
– CREATE (S, O): if no object with name O exists anywhere
on the system, create a new object O at level LS ;
otherwise, do nothing
• For the attribute file existence, should you have an
R or not for this operation or not? Consider this:
after this operation, you know that the file exists.
(Why?)
• That’s not enough. It’s not important that you know
something about the attribute; what’s important is
that the operation tells you something about the
attribute
125
SRMM Example: Unix
• Unix files have these attributes:
– Existence, size, owner, group, access permissions
(others?)
• Unix file operations to create, delete, open, read,
write, chmod operations (others?)
• Homework: Fill in the shared resource matrix

                      read   write   delete   create   open   chmod
existence
size
owner
group
Access permissions
126
SRMM Example
• File attributes:
– Existence, label
• File manipulation operations:
– read, write, delete, create
• Each returns completion code
– create succeeds if file does not exist; gets creator’s label
– others require file exists, appropriate labels
• Subjects:
– High, Low

            read   write   delete   create
existence   R      R       R, M     R, M
label       R      R       M        R
127
Example
• Consider existence row: has both R and M
• Let High be sender, Low receiver
• Create operation references and modifies existence
attribute
– Low can use this due to semantics of create
• Need to arrange for proper sequencing accesses to
existence attribute of file (shared resource)
128
Use of Channel
• 3 files: ready, done, 1bit
• Low creates ready at High level
• High checks that file exists
– If so, to send 1, it creates 1bit; to send 0, skip
– Delete ready, create done at High level
• Low tries to create done at High level
– On failure, High is done
– Low tries to create 1bit at level High
• Low deletes done, creates ready at High level
129
Transitive Closure
• Matrix initially shows direct flows
• Must also find indirect flows
• Transitively combine direct flows to find indirect
flows and add to matrix
• A TCB primitive indirectly reads a variable y
whenever a variable x, which the TCB primitive can
read, can be modified by TCB functions based on a
reading of the value of variable y
• When only informal specs of a TCB interface are
available (not internal specs of each primitive), this
step unnecessary since provides no additional
information
130
Indirect Flows are Internal
[Diagram not reproduced; the composed indirect flow = covert channel]
131
Uses of SRM Methodology
• Applicable at many stages of software life cycle
model
– Flexibility is its strength
• Used to analyze Secure Ada Target
– Participants manually constructed SRM from flow
analysis of SAT model
– Took transitive closure
– Found 2 covert channels
• One used assigned level attribute, another assigned type
attribute
132
SRM Summary
• Methodology comprehensive but incomplete
– How to identify shared resources?
– What operations access them and how?
• Incompleteness a benefit
– Allows use at different stages of software engineering
life cycle
• Incompleteness a problem
– Makes use of methodology sensitive to particular
stage of software development
133
Non-interference Definition
• Intuitively:
– Low-level user’s “view” of the system should not be
affected by anything that a high-level user does
• More formally:
– Suppose L is a subject in the system
– Now suppose you:
1. run the system normally, interleaving the operations
of all users
2. run the system again after deleting all operations
requested by subjects which should not be able to
pass information to (interfere with) L
– From L’s point of view, there should be no visible
difference
– The system is “non-interference secure” if this is true
of every subject in the system
134
Non-interference Implementation
• Non-interference is another policy, more abstract
than BLP
• The enforcement mechanisms may be anything,
including the BLP rules
• The more system state you add to the definition of “view”, the more covert channels that use that state can be caught
135
Limitations of Non-interference
• Non-interference is very difficult to achieve for
realistic systems
• It requires identifying within the view function all
potential channels of information
• Realistic systems have many such channels
• Modeling must be at very low level to capture many
such channels
• Dealing with timing channels is possible, but difficult
• Very few systems are completely deterministic
• Some “interferences” are benign, e.g., encrypted
files
136
TCSEC Bandwidth Guidelines
• Low bandwidths represent a lower risk
• Rate of one hundred (100) bps is considered “high”
– not appropriate to call a computer system “secure”
• Rate < one (1) bps acceptable in most
environments
• Audit any rate > one (1) bit in ten (10) seconds
• Trade-off system performance and CC
bandwidth
– Provide information for system developer to assess
137
Measuring Capacity
• Intuitively, difference between unmodulated,
modulated channel
• E.g.,
– Normal uncertainty in channel is 8 bits
– Attacker modulates channel to send information,
reducing uncertainty to 5 bits
– Covert channel capacity is 3 bits
• Modulation in effect fixes those bits
138
Mitigation of Covert Channels
• Problem: channels work by varying use of
shared resources
• One solution:
– Require processes to say what resources they need
before running
– Provide access to them in a way that no other
process can access them
• Cumbersome!
– Includes running (CPU covert channel)
– Resources stay allocated for lifetime of process
139
Alternate Approach
• Obscure amount of resources being used
– Receiver cannot distinguish between what the sender
is using and what is added
• How? Two ways:
– Devote uniform resources to each process
– Inject randomness into allocation, use of resources
140
Uniformity or Randomness
• Uniformity: Subjects always use same amount of
resources
– Variation of isolation
– Process can’t tell if second process using resource
• Example: KVM/370 covert channel via CPU
usage
– Give each VM a time slice of fixed duration
– Do not allow VM to surrender its CPU time
• Can no longer send 0 or 1 by modulating CPU usage
• Randomness: Make noise dominate channel
– Does not close it, but makes it useless
141
Randomness
• Example: MLS database
– Probability of transaction being aborted by user other
than sender, receiver approaches 1 -> very high noise
– How to do this: have participants abort transactions
randomly
142
Problem: Loss of Efficiency
• Fixed allocation constrains use and wastes
resources
• Randomness wastes resources
• Policy question: Is the inefficiency preferable to
the covert channel?
143
Example
• Goal: limit covert timing channels on VAX/VMM
• “Fuzzy time” reduces accuracy of system clocks
by generating random clock ticks
– Random interrupts take any desired distribution
– System clock updates only after each timer interrupt
– Kernel rounds time to nearest 0.1 sec before giving it
to VM
• Means it cannot be more accurate than timing of interrupts
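The fuzzy-time idea can be sketched in a few lines of Python (the 0.1 s rounding follows the slide; everything else is illustrative, since a real implementation lives in the kernel's clock interface):

import random, time

def fuzzy_clock():
    # the clock a VM sees: jittered by a random fake tick, then
    # rounded to the nearest 0.1 s
    real = time.monotonic()
    jitter = random.uniform(-0.05, 0.05)
    return round((real + jitter) * 10) / 10

t0 = fuzzy_clock()
time.sleep(0.012)        # a 12 ms operation...
t1 = fuzzy_clock()
print(t1 - t0)           # ...is usually indistinguishable from 0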
144
Example
• I/O operations have random delays
• Kernel distinguishes 2 kinds of time:
– Event time (when I/O event occurs)
– Notification time (when VM told I/O event occurred)
• Random delay between these prevents VM from figuring out
when event actually occurred)
• Delay can be randomly distributed as desired (in security
kernel, it’s 1–19ms)
– Added enough noise to make covert timing channels
hard to exploit
145
Improvement
• Modify scheduler to run processes in increasing
order of security level
– Now we’re worried about “reads up”, so …
• Countermeasures needed only when transition
from dominating VM to dominated VM
– Add random intervals between quanta for these
transitions
146
Reading for Next Time
• Bishop, pp. 545-551
• On D2L:
– A Specifier’s Introduction to Formal Methods,
Jeannette M. Wing
– Formal Specifications, a Roadmap, Axel van
Lamsweerde
147
Reading for Time after Next
• Jonathan K. Millen. 1976. Security Kernel
validation in practice. Comm. ACM 19, 5 (May
1976), 243-250. DOI=10.1145/360051.360059
• T. Levin, S. Padilla, and R. Schell, Engineering
Results from the A1 Formal Verification Process,
in Proceedings of the 12th National Computer
Security Conference, Baltimore, Maryland,
1989. pp. 65-74
148
INF523: Assurance in Cyberspace as
Applied to Information Security
Formal Methods Introduction
Clifford Neuman
Lecture 11
30 Mar 2016
Reading for Next Time
• Jonathan K. Millen. 1976. Security Kernel
validation in practice. Comm. ACM 19, 5 (May
1976), 243-250. DOI=10.1145/360051.360059
• T. Levin, S. Padilla, and R. Schell, Engineering
Results from the A1 Formal Verification Process,
in Proceedings of the 12th National Computer
Security Conference, Baltimore, Maryland,
1989. pp. 65-74
150
Reading for This Time
• Bishop, pp. 545-551
• On D2L:
– A Specifier’s Introduction to Formal Methods,
Jeannette M. Wing
– Formal Specifications, a Roadmap, Axel van
Lamsweerde
151
“Assurance Waterfall”
[Diagram: the assurance waterfall (Org. Req’s, Threats, Policy, Security Req’s, Design, Implementation; Version Mgmt, Distribution, Instal. & Config., Maintenance, Disposal) annotated with assurance activities: FSPM, FTLS, intermediate spec(s), proof, code correspondence, covert channel analysis, secure distribution, secure install & config, secure coding, patching and monitoring, secure disposal]
152
Formal Methods
• Formal means mathematical
• Tools and methods for reasoning about correctness
– Correctness means system design satisfies some properties
– Security, but also safety and other types of properties
• Useful way to think completely, precisely, and
unambiguously about the system
– Help delimit boundary between system and environment
– Characterize system behavior under different conditions
– Identify assumptions
– Identify necessary invariant properties
• Often find flaws just from writing formal specification
153
Informal vs. Formal Specifications
• Informal
– Human language, descriptive
– E.g., “The value of variable x will always be less than 5”
(Always? What about before the system is (re)initialized?)
– Often vague, ambiguous, self-contradictory, incomplete, imprecise, and doesn’t handle abstractions well
• All of which can easily lead to unknown flaws
– But, relatively easy to write
• Formal
– Mathematical
– E.g., ∀t.∀x.((t ≥ x ∧ sys_init(x)) → x(t) < 5)
– Easily handles abstractions, concise, non-ambiguous, precise, complete, etc.
– But, requires lots of training and experience to do right
154
Formal vs. “Informal” Verification
• “Informal” verification:
– Testing of various sorts
• Finite, can never be complete, only demonstrates cases
• Formal verification:
– Application of formal methods to “prove” a design
satisfies some requirements (properties)
• A.k.a. “demonstrating correctness”
– Can “prove” a system is secure
• I.e., that the system design satisfies some properties
that are the definition of “security” for the system
• I.e., that a system satisfies the security policy
155
Some Uses of Formal Methods
• Prove certain properties
– E.g., invariants, such as BLP always in secure state
• Prove that certain combinations of states never
occur
• Prove value of certain variable never exceeds
bounds
• Prove absence of information flows
– E.g., for transitive closure of shared resource matrix
• Very widely used for hardware
• Not currently widely used for software
– Difficult to capture all effects
– Only used in very critical applications
156
Types of Formal Verification
• Theorem proving (semi-automated)
– Proving of mathematical theorems
• E.g., that FTLS satisfies FSPM
– Complex, prone to error if done totally by hand
– Must use automated (mechanized) theorem proving tools
• Can solve some simple proofs automatically using heuristics
• Non-trivial proofs require lots of human input
• Model checking (automated)
– Specify system as FSM, properties as valid states
• Exhaustively compare possible system states to specification
to show all states satisfy spec
– May run a long time for complex state
• Use heuristics in advance to prune state space
157
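To make the model-checking idea concrete, here is a minimal sketch in Python (a toy illustration, not a real model checker; the example FSM and all names are invented): exhaustively explore the reachable states of a finite state machine and check that every state satisfies the specification.

  # Toy model checker: breadth-first search over all reachable FSM states,
  # checking an invariant (the "spec") in every state visited.
  from collections import deque

  def check_model(initial, transitions, invariant):
      """Return a violating state (counterexample), or None if spec holds."""
      seen = {initial}
      frontier = deque([initial])
      while frontier:
          state = frontier.popleft()
          if not invariant(state):
              return state              # counterexample found
          for nxt in transitions(state):
              if nxt not in seen:       # prune states already explored
                  seen.add(nxt)
                  frontier.append(nxt)
      return None                       # every reachable state satisfies spec

  # Example: a counter that wraps modulo 5 can never reach the value 7.
  bad = check_model(0, lambda s: [(s + 1) % 5], lambda s: s != 7)
  assert bad is None

Real model checkers face state spaces far too large to enumerate naively, hence the heuristics mentioned above.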
Steps in Security Formal Verification
1. Develop FSPM (e.g., BLP)
2. Develop Formal Top-Level Spec (FTLS)
– Contrast with Descriptive Top-Level Specification (DTLS)
• Natural language, not mathematical, specification
3. Proof (formal or informal) that FTLS satisfies FSPM
4. (Possibly intermediate specs and proofs)
– At different levels of abstraction
5. Show implementation “corresponds” to FTLS
– Code proof beyond state of the art (but see https://sel4.systems/)
– Generally informal arguments
– Must show how every part of code fits
158
Attributes of Formal Specifications
• States what system does, but not how
– I.e., like module interfaces from earlier this semester
– Module interfaces are (probably informal)
specifications
• Precise and complete definition of effects
– Effects on system state
– Results returned to callers
– All side-effects, if any
• Not the details of how
– Not how the data is stored, etc.
– I.e., abstraction
• Formal specification language is not code
159
Parts of a Formal Specification
• Basic types of entities
– E.g., in BLP, subjects and objects, access modes
• State variables
– E.g., b, M, f, and H
• Defined concepts and relations
– In terms of entities and state variables
– E.g., dominance, SSC, *-property
• Operations
– E.g., get_read
– Relations of inputs to outputs – e.g., R, D, W
– State changes
160
Bell-La Padula Formal Policy Model
• From “Secure Computer System: Unified Exposition and Multics Interpretation”, Appendix
[Figure: the get_read rule from the BLP appendix, annotated. If the call is invalid or not a get_read call, the rule returns an error and the state is unchanged; the rule checks the discretionary and mandatory policy requirements; if the call is a valid get_read but does not satisfy discretionary or mandatory policy, the state is unchanged; otherwise the rule yields the new state]
161
Formal Top-Level Specification
• Represents interface of the system
– In terms of exceptions, error messages, and effects
– Must be shown to accurately reflect TCB interface
– Include HW/FW operations, if they affect state at interface
– TCB “instruction set” consists of HW instructions
accessible at interface and TCB calls
• Describe external behavior of the system
– precisely,
– unambiguously, and
– in a way amenable to computer processing for
analysis
– Without describing or constraining implementation
162
Creating a Formal Specification
• Example, “blocks world”
• 5 objects {a,b,c,d,e}
– Table is not an object in this example
• Relations {on,above,stack,clear,ontable}
• on(a,b); on(b,c); on(d,e)
• ¬on(a,a), ¬on(b,a), … , etc.
• Define all of the other relations in terms of on
163
Creating a Formal Specification
• Define all of the other relations in terms of on
• ∀y.(clear(y) ⇔
¬∃x.on(x,y))
• ∀x.(ontable(x) ⇔
¬∃y.on(x,y))
• ∀x.∀y.∀z.(stack(x,y,z) ⇔
on(x,y) ∧ on(y,z))
• ∀x.∀z.(above(x,z) ⇔
on(x,z) ∨ ∃y.(on(x,y) ∧ above(y,z)))
– We are missing something for above. What is it?
∀x.¬above(x,x)
164
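The blocks-world specification can be checked mechanically. A minimal Python sketch (illustrative only): take the on facts as given and compute clear, ontable, above, and stack exactly as the formulas above define them.

  # Blocks world: derive the other relations from the base relation "on".
  objects = {'a', 'b', 'c', 'd', 'e'}
  on = {('a', 'b'), ('b', 'c'), ('d', 'e')}    # on(a,b); on(b,c); on(d,e)

  def clear(y):        # clear(y) iff no x with on(x,y)
      return not any((x, y) in on for x in objects)

  def ontable(x):      # ontable(x) iff no y with on(x,y)
      return not any((x, y) in on for y in objects)

  def above(x, z):     # above(x,z) iff on(x,z), or on(x,y) and above(y,z)
      return (x, z) in on or any((x, y) in on and above(y, z) for y in objects)

  def stack(x, y, z):  # stack(x,y,z) iff on(x,y) and on(y,z)
      return (x, y) in on and (y, z) in on

  assert clear('a') and ontable('c') and above('a', 'c') and stack('a', 'b', 'c')
  assert not any(above(x, x) for x in objects)   # the missing axiom holds here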
Alternative Specification
• Define all of the other relations in terms of above
• ∀x.(ontable(x) ⇔ ¬∃y.above(x,y))
• ∀x.(clear(x) ⇔ ¬∃y.above(y,x))
• ∀x.∀y.(on(x,y) ⇔
above(x,y) ∧ ¬∃z.(above(x,z) ∧ above(z,y)))
• What about stack?
– Can define in terms of on, as before
• Need other axioms about above:
• ∀x.¬above(x,x)
• ∀x.∀y.∀z. above(x,y) ∧ above(y,z) ⇒ above(x,z)
• ∀x.∀y.∀z. above(x,y) ∧ above(x,z) ⇒
y=z ∨ above(y,z) ∨ above(z,y)
• ∀x.∀y.∀z. above(y,x) ∧ above(z,x) ⇒
y=z ∨ above(y,z) ∨ above(z,y)
165
Observation
• Many ways to specify the same system
• Not every way is equally good
• If pick less good way, may create lots of
complexity
• E.g., consider how to specify a FIFO queue
1. Infinite array with index of current head and tail
• Not very abstract – specifies “how”
2. Simple, recursive, add and remove functions and axioms
• E.g., ∀x. remove(add(x,EMPTY)) = x
• The first is tedious to reason with
– Lots of “overhead” to keep track of indexes
• The second is easy and highly automatable
166
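A minimal Python sketch of the two styles (illustrative only): the array version drags index bookkeeping into every statement, while the algebraic version satisfies the axiom ∀x. remove(add(x,EMPTY)) = x with no mention of storage at all.

  # Style 1 ("how"): array plus head/tail indexes -- concrete and tedious.
  class ArrayQueue:
      def __init__(self):
          self.buf, self.head, self.tail = {}, 0, 0   # index overhead
      def add(self, x):
          self.buf[self.tail] = x
          self.tail += 1
      def remove(self):
          x = self.buf.pop(self.head)
          self.head += 1
          return x

  # Style 2 ("what"): terms plus axioms -- abstract, easy to reason about.
  EMPTY = ()
  def add(x, q):       # adding builds a bigger term
      return q + (x,)
  def remove(q):       # removing yields the oldest element
      return q[0]

  q = ArrayQueue(); q.add(7)
  assert q.remove() == 7
  assert all(remove(add(x, EMPTY)) == x for x in (1, 'a', 3.5))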
Formal System Specifications
• Previous example used first-order logic (FOL)
– ∀ and ∃
• For complex systems, FOL may not be enough
• Want “higher-order” logic (HOL), which can take
functions as arguments
• E.g., [Rushby PVS phone book example]
– http://www.csl.sri.com/papers/wift-tutorial/slides.pdf
167
Homework
• Write a formal spec for seating in an airplane:
• An airplane has 100 seats (1..100)
• Every passenger gets one seat
• Any seat with a passenger holds only one
passenger
• The state of a plane P is a function [N -> S]
– Maps a passenger name to a seat number
• Two functions: assign_seat and deassign_seat
• Define the functions
• Show some lemmas that demonstrate correctness
168
Start of Homework Solution
• Types:
– N : type (of passenger)
– S : type (of seat number)
– A : type (of airplane function) [N -> S]
– e0 : N (represents an empty seat)
• Variables:
– nm : var N (a passenger)
– pl : var A (an airplane function)
– st : var S (a seat number)
169
What you Need to Do
1. Define the axioms for the two functions:
– assign_seat : [A x N x S -> A]
– deassign_seat : [A x S -> A]
2. Be careful that the spec covers all
requirements:
– Can someone have “e0” as their seat number?
– Can a passenger have more than one seat?
– Can a seat have more than one passenger?
3. Identify some lemmas that demonstrate that the
system specification describes what is intended
and sketch the proof
170
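One way to get a feel for the assignment is an executable sketch (Python, illustrative only; this is not the required formal notation, and the axioms chosen here are just one possible reading of the requirements). The state follows the slides’ [N -> S] function, modeled as a finite map from passenger name to seat number.

  # Executable sketch of the airplane spec (one possible interpretation).
  EMPTY_PLANE = {}   # no passenger holds a seat

  def assign_seat(pl, nm, st):
      # Invalid calls leave the state unchanged: seat out of range,
      # passenger already seated, or seat already taken.
      if not 1 <= st <= 100 or nm in pl or st in pl.values():
          return pl
      new = dict(pl)
      new[nm] = st
      return new

  def deassign_seat(pl, st):
      return {n: s for n, s in pl.items() if s != st}

  # Lemmas of the kind the assignment asks you to state and prove:
  p = assign_seat(EMPTY_PLANE, 'alice', 12)
  assert p.get('alice') == 12                     # assignment takes effect
  assert assign_seat(p, 'bob', 12) == p           # one passenger per seat
  assert assign_seat(p, 'alice', 13) == p         # one seat per passenger
  assert deassign_seat(p, 12) == EMPTY_PLANE      # deassign undoes assign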
Formal Verification is Not Enough
• Formal verification complements, but does not
replace testing (informal verification)
• Requires abstraction which
– May leave out important details (stuff missing)
– May make assumptions that code does not support
(extra stuff)
• Even if “proven correct”, may still not be correct
• “Beware of bugs in the above code; I have only
proved it correct, not tried it.” -Knuth
171
INF523: Assurance in Cyberspace as
Applied to Information Security
Case Studies of Formal Specification and Proofs
Mark R. Heckman
Lecture 12
6 Apr 2016
Reading for This Class
• Jonathan K. Millen. 1976. Security Kernel
validation in practice. Comm. ACM 19, 5 (May
1976), 243-250. DOI=10.1145/360051.360059
• T. Levin, S. Padilla, and R. Schell, Engineering
Results from the A1 Formal Verification Process,
in Proceedings of the 12th National Computer
Security Conference, Baltimore, Maryland,
1989. pp. 65-74
173
DEC PDP 11
• Sold by DEC
• 1970s-1990s
• Most popular minicomputer ever
• For a decade, the smallest minicomputer that could run Unix
174
Millen: PDP 11/45 Proof of Correctness
• Proof of correctness for PDP 11/45 security
kernel
• Correctness defined as proper implementation
of security policy model (BLP)
• Security policy model defined as set of axioms
– Axioms are propositions from which properties are
derived
– E.g., in BLP, SSC and *-property
• Proof is that all operations available at the
interface of the system preserve the axioms
• Also considered covert storage channels
– Method did not address timing channels
175
Millen: PDP 11/45 Proof of Correctness
• Security policy model defined as set of axioms
– Simple security condition
• If a subject has “read” access to an object, level of
subject dominates level of object
– *-property
• If a subject has “read” access to one object and “write”
access to a second object, level of second object
dominates level of first object
– Tranquility principle for object levels
• Level of active object will not be changed
– Exclusion of read access to inactive objects
– Rewriting (scrubbing) of objects that become active
176
Layers of Specification and Proof
• Four stages
• Each stage more detailed and closer to machine
implementation than the one before
1. FSPM (BLP)
2. FTLS – The interface of the system
– Includes OS calls and PDP 11/45 instructions
available outside kernel
– Semantics of language must be well-understood
3. Algorithmic specification – High-level code that
represents machine language
4. Machine itself: Running code and HW
177
Why Four Proof Stages?
• Simplify proof work
• Big jump from machine to FSPM
– FSPM has subjects, objects, *-property, …
– Machine has code and hardware
• Intermediate layers are closer to each other
• First prove FTLS is valid interpretation of FSPM
• Then further proofs only need to show that lower
stages implement FTLS
– Lower-level proofs don’t need abstractions of subjects
and objects and *-property
178
Stages 1 and 2 Specification Format
• Both FSPM and FTLS are state machines
– States and transitions
– E.g., BLP state is (b, M, f, H)
– FTLS transitions:
• Create (activate) object
• Delete (deactivate) object
• Get access to an object for a subject
• Release access to an object for a subject
• Put a subject in an object’s ACL
• Remove a subject from an object’s ACL
• PDP-11/45 instructions available at interface
179
V- and O-functions
• State variables and kernel operations are
functions
– State variables are represented as V-functions
• All V-functions are references to objects
– Operations are O-functions
• By subjects to objects
• Accesses are due to O-function executions
• O-functions have effects on state variables
– Indicated by values of V-functions before and after
– E.g., ¬(PS_SEG_INUSE(TCP,dseg)) ∧ RC(TCP) = NO
– I.e., if the object is not in use by the subject then the
return code is “NO”
180
“Shared Resource Problems”
• Covert storage (not timing) channels
• User A at high level modifies kernel state variable V
• User B at low level receives value from kernel that
was influenced by V
• Detect by assigning security level to internal
variables like V
181
Proof Example in Paper
• Verification that DELETE
enforces *-property
• Original spec at right
• Effect statements are
labeled “A”, “B”, etc.
– Used to simplify statement
form for proof
• x = ‘y’ means subject has
read access to y and
write access to x
182
Abbreviated DELETE Specification
• Statements abstracted to
show structure
• Bottom version is in
“conjunctive form”
• “Else” sometimes replaced
by negation of the “If”
condition
• Statements in form “if f then g else h end”
sometimes converted to (f ∧ g) ∨ (¬f ∧ h)
183
Proof Technique: Security Levels
• Object levels
– Level based on pathname pn of object in hierarchy
– Level of object at pathname pn is L(pn)
– V-functions that take pn as parameter have level L(pn)
– Constant V-functions (no parameters) have sys-low level
• Level of subject with process number proc is
PL(proc)
– PL(proc) is level where subject can both read and write
– V-functions for subjects (i.e., that read state values for
processes, as opposed to system state) have level
PL(proc)
– O-functions have process numbers as parameters
– O-functions and their parameters have level PL(proc)
184
Property Cases and Security Levels
185
Proof Case Example: Explanation of A,
B, C
• Delete(dseg,entry)
– Erases a segment from a directory
– dseg is directory segment, entry is index in directory
• (A) If the local segment number is not in use, or
• (B) If the process does not have write access to
the directory or the directory entry is empty
• (C) Return “NO”
186
Proof Case Example: Function
Explanation
• dpn is abbreviation for PS_SEG(TCP,dseg)
– Directory path name
• Active segment table (AST) is part of global state
• AST entry numbers must be invisible to avoid
channel
• PS_SEG(TCP,dseg) maps process-local segment
numbers to active segment table AST entries
• PS_SEG_INUSE indicates whether or not an
element in PS_SEG is in use
• AST_WAL is active segment table write access list
187
Proof Case Example: Proof Goal
• Second case: ¬A ˄ B ˄ C
– PS_SEG_INUSE(TCP,dseg) = TRUE and
– AST_WAL(dpn,TCP) = FALSE or… or …
• Prove second case does not violate *-property
• Process is reading from the directory and writing
to the response code, so must prove: L(dir) ≤
L(RC)
• I.e., L(dpn) ≤ PL(TCP)
188
Proof Case Example: Proof
• R1: If PS_SEG_INUSE is true then it must be the case
that the process is in the AST “connected process
list” (CPL) for that segment
• R2: If a process is in the AST_CPL then it must be the
case that L(pn) ≤ PL(proc)
• Relations proven inductively over all operations
189
GEMSOS Verification
• PDP 11/45 verification before TCSEC
• GEMSOS developed to meet TCSEC class-A1
• Gemini Trusted Network Processor (GTNP)
developed to be TNI M-component (multilevel)
– Based on GEMSOS
• Evaluation on GTNP
• This paper, however, about GEMSOS TCB only
190
GEMSOS A1 Formal Verification
Process
• FSPM, FTLS written in InaJo specification
language
• BLP Basic Security Theorem (BST) proven using FDM theorem prover
– FSPM was not “pure” BLP, but the GEMSOS
interpretation of BLP
• Conformance of FTLS to model also proven
• FTLS also used for code correspondence and
covert storage channel analysis
191
Value of Formal Verification Process
• “Provided formulative and corrective guidance to
the TCB design and implementation”
• I.e., just going through the process helped
prevent and fix errors in the design and
implementation
• Required designers/developers to use clean
designs
– So could be more easily represented in FTLS
– Prevents designs difficult to evaluate and understand
192
GEMSOS TCB Subsets
• Ring 0: Mandatory security kernel
• Ring 1: DAC layer
• Policy enforced at TCB boundary is union of
subset policies
[Diagram: the DAC layer stacked on the security kernel (MAC); the security kernel implements the reference monitor, and the TCB boundary encloses both layers]
193
Each Subset has its own FTLS and
Model
• Each subset was verified through a separate
Model and FTLS
• Separate proofs, too
• TCB specification must reflect union of subset
policies
194
Where in SDLC?
• Model and FTLS written when interface spec
written
• Preliminary model proofs, FTLS proofs, and
covert channel analysis performed when
implementation spec and code written
• Code correspondence, covert channel
measurements, and final proofs performed when
code is finished
• Formal verification went on simultaneously with
development
195
Goal of GEMSOS TCB Verification
• To provide assurance that TCB implements the
stated security policy
• Through chain of formal and informal evidence
– Statements about TCB functionality
– Each at different levels of abstraction
• Policy
• Model
• Specification
• Source
• TCB itself (hardware and software)
– Plus assertions that each statement is valid wrt next
more abstract level
196
Chain of Verification Evidence
• Notes:
– Model-to-policy argument
is informal
– Spec to model argument
is both formal and informal
– Source to spec argument is
code correspondence
– TCB to source means
HW and compiler validation
• I.e., object code
• Considered
“beyond state of the art”
197
Model
• Mathematical statement of access control policy
• “Interpretation” of BLP
• Security defined as axioms
• Must prove all model transforms preserve axioms
– SSC
– *-property
– (and probably others, as with PDP 11/45)
• Proof of model shows model upholds policy
198
Key Characteristic of Model
• Not just formal statement of policy or functions
• A model of a reference monitor
– “Linchpin” of security argument
• If show that TCB satisfies reference monitor
model then have shown that it is secure
– Implies that anything outside TCB cannot violate
policy
• What if did not model reference monitor?
– May be “correct” wrt functions, but not necessarily
secure
199
FDM “Levels”
• I.e., levels of abstraction
• InaJo language has method of formally mapping
elements of one level to the elements of the next
level
– Top level: Model
– Second level: FTLS
200
FTLSs
• One each for kernel and for TCB
• Exceptions, error messages, and effects visible at
interface
• Transform for each call, and transforms for HW “read” and “write” operations
– Other opcodes are irrelevant for access control security
• Proof maps each transform of FTLS to transform in
model
• Each call specified as conditional statement
– Last case contains any change statements
– Exceptions specified in order of possible occurrence in
code
• Important for Covert Channel Analysis
– Very end specifies everything else unchanged
201
Code Correspondence
• Three parts:
1. Description of correspondence methodology
2. Account of non-correlated source code
3. Map between elements of FTLS and TCB code
• FTLS must accurately describe the TCB
• TCB must be valid interpretation of FTLS
• All security-relevant functions of TCB must be
represented in FTLS
– Prevent deliberate or accidental “trap door”
202
Example of Value of Formal Proof
• Subject is process/ring
• Subject can have range of access classes
(trusted subject)
• Subjects in outer rings can have access class
ranges “smaller” than subjects of the same
process in inner rings
• Formal proof “stuck” trying to prove this
203
Example Formal Spec Detected
Problem
• If range of subject in outer ring not within range
of inner ring, move the outer ring access class to
be within the range
• Original spec and code didn’t take into account
non-comparable access classes
• How to fix?
204
2nd Example Formal Spec Detected
Problem
• Adjusting the access classes depends on the
“move” function
• But it was found that the move function did not
correctly ensure that the access class range of
the outer ring subject was correct (i.e., that the
“read” class dominated the “write” class)
205
Example of Value of Code
Correspondence
• Code correspondence of kernel to spec found
flaws in code:
1. Access to segments in new child processes being
checked using parent’s privileges, not child’s
2. Segment descriptor in Local Descriptor Table not
being set until segment brought into RAM
• Not clear if this just meant inconsistent with model or
was a real security problem
206
Example of Value of Covert Channel
Analysis
• Two unexpected covert storage channels
discovered
• Both related to “dismount_volume” call
• Dismount_volume used to (temporarily) remove set
of segments from the segment structure
• Originally, any process whose access class range
spanned range of volume could dismount the
volume
• What if volume has only Unclassified segments?
– TS process has made_known some of those segments
– Unclassified process tries to dismount the volume, but gets
an error message
• Fix?
– Require caller’s range from volume low to sys-high
207
2nd Covert Channel
• Order of error checking
• Errors about volume could be reported to the
calling subject even if subject did not have
access to dismount the volume
• Fix: check label range before returning errors
related to volume attributes
208
INF523:
Case Studies and Security Kernels
Professor Clifford Neuman
Lecture 11 CONT
April 6, 2016
Systematic Kernel Engineering Process
[Diagram: mappings from the Security Policy down to the Deliverable Product.
System Specifications: Formal Security Policy Model [linchpin] for the reference monitor, i.e., the security kernel API; Formal Top Level Spec and Descriptive Top Level Spec, with hardware properties visible at the interface; informed by the Philosophy and Design of Protection, e.g., hardware segmentation.
Development Specifications: implementation design documents and source code, with covert channel analysis, layering & info hiding, and code correspondence.
Product Specifications: deliverable product end item, with Security Features Users Guide, Trusted Facility Manuals, and Trusted Distribution]
210
Only Proven Solution: Security Kernel
“The only way we know . . . to build highly secure software systems of any practical interest is the kernel approach.”
-- ARPA Review Group, 1970s (Butler Lampson, Draper Prize recipient)
[Diagram: Applications, Appliances, and Security Services (INF 527 focus: Secure System Engineering) layered on an Operating System and a Verifiable Security Kernel (INF 525 focus: Verifiably Secure Platform), running on an Intel x86 hardware platform with network, monitor/disk, and keyboard]
Truly a paradigm shift: no Class A1 security patches for kernel in years of use
211
Possible Secure System Case Studies
CASE STUDIES
• GARNETS MLS File System Architecture
• NFS Artifice Demonstration Properties
• MLS Cloud NFS-Based Storage Design
• POSSIBLY:
– Crypto Seal Guard Demonstration Concepts
– Crypto Seals in RECON Guard for CIA
212
Overview of Previous RVM Foundation
• Need for trusted systems
• Security kernel (SK) approach and design
• Security kernel objects
• Security kernel support for segments
• SK segments as FSPM interpretation
• Security kernel layering
• Designing a security kernel
213
Overview of Previous RVM Foundation
• Trusted system building techniques
• Kernel implementation strategies
• Confinement and covert channels
• Synchronization in a trusted system
• Secure initialization and configuration
• Management of SK rings and labels
• Trusted distribution and Trusted Platform Module
• Security analysis of trusted systems
214
Kernel Implementation Strategies
• New operating system
– Simple mapping of O/S features to SK features
– Distinctive is lack of backward compatibility
• Compatible operating system (emulation)
– Emulate insecure operating system (ISOS)
– Typically emulator runs in each process
• Renders O/S calls into kernel calls
• Identical operating system (virtual machine)
– Provides isolation, but not sharing, of memory
– Kernel is virtual machine monitor (VMM)
• Principal “objects” are virtual disks, not files
• Subjects – kernel users and VMs
215
Designing a Security Kernel
• Most used highly secure technique
– Not easy to build a security kernel
• SK is reference validation mechanism (RVM)
– Defined as H/W and S/W that implements RM
• Most RMs implement multilevel security (MLS)
• Non-security-relevant functions managed by O/S
• Subject must invoke RM to reference any object
– Basis for completeness
• Must have domain control, e.g., protection rings
– Basis for isolation to block subversion
• SK software engineered for RM verifiability
216
Security Analysis of Trusted Systems
• Need independent 3rd party evaluation/analysis
• TCSEC/TNI security kernel evaluation factors
– System architecture
– Design specification & verification
– Sensitivity label management
– External interfaces
– Human interfaces
– Trusted system design properties
– Security analysis
– System use and management
– Trusted system development and delivery
217
INF523
System Security Architecture
Considerations
Professor Clifford Neuman
Lecture XX
Secure System Design & Development
• Architectural considerations (Gasser chapter 11)
– Applies to development of security kernels
– As well as to their applications
• Operating-system layering
– Promote structured design for kernel assurance
– Operating systems services on kernel
• Asynchronous attacks and argument validation
• Protected subsystems
– Static process, on-demand process, multiple domains
• Secure file systems
– Alternate naming structures; unique identifiers
219
Secure System Design & Development
• Architectural considerations (Gasser chapter 11)
– Applies to development of security kernels
– As well as to their applications
• Operating-system layering
– Promote structured design for kernel assurance
220
Designing a Security Kernel
• Most used highly secure technique
– Not easy to build a security kernel
• SK is reference validation mechanism (RVM)
– Defined as H/W and S/W that implements RM
• Most RMs implement multilevel security (MLS)
• Non-security-relevant functions managed by O/S
• Subject must invoke RM to reference any object
– Basis for completeness
• Must have domain control, e.g., protection rings
– Basis for isolation to block subversion
• SK software engineered for RM verifiability
221
GEMSOS Security Kernel Layering
• Segment make known is illustrative example
– All modules in call chain are strictly layered
• Top gate layer is kernel API – implements FSPM
– Receives call and completes entry to Ring 0
– Parameters copied from outer ring to Ring 0
– Entry point call to next layer
• Process-local modules at the top
– Their “information hiding” data bases are per process
– Code is shared with same PLSN by all processes
• Kernel-global modules
– Kernel API “effects” reflect data all processes share
222
GEMSOS Make Known Example
(Applications, Kernel Gate Library)
Gate Layer
Process Manager (PM)
Upper Device Manager (UDM)
Segment Manager (SM)
Upper Traffic Controller (UTC)
Memory Manager (MM)
Inner Device Manager (IDM)
Secondary Storage Manager
Non-Discretionary Security Manager (NDSM)
Kernel Device Layer (KD)
Inner Traffic Controller (ITC)
Core Manager (CM)
Intersegment Linkage Layer (SG)
System Library (SL)
(Hardware)
223
Secure System Design & Development
• Architectural considerations (Gasser chapter 11)
– Applies to development of security kernels
– As well as to their applications
• Operating-system layering
– Promote structured design for kernel assurance
– Operating systems services on kernel
224
Operating System Layering Strategies
• New operating system: layer on security kernel
– Simple mapping of O/S features to SK features
– Distinctive is lack of backward compatibility
• Compatible operating system (emulation)
– Emulate insecure operating system (ISOS)
– Typically emulator runs in each process
• Renders O/S calls into kernel calls
• Identical operating system (virtual machine)
– Provides isolation, but not sharing, of memory
– Kernel is virtual machine monitor (VMM)
• Principal “objects” are virtual disks, not files
• Subjects – kernel users and VMs
225
Secure System Design & Development
• Architectural considerations (Gasser chapter 11)
– Applies to development of security kernels
– As well as to their applications
• Operating-system layering
– Promote structured design for kernel assurance
– Operating systems services on kernel
• Asynchronous attacks and argument validation
226
Cross-Domain Asynchronous Attacks
• Hardware support for cross-domain validation
– Pointer validation is particularly challenging
• Multiprocessor and multiprogramming issues
– Multiple processes can access pointer argument
– Time of check/time of use (TOC/TOU) problem
– Safest to copy parameters to new domain before use
• OS must prevent changes to validation data
– There are no generic solutions
– May require appropriate locks inside OS
– May require total atomic copy of data
• Kernel support for this is valuable aid
• Examine I/O operations: they are also asynchronous
227
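A minimal sketch of the copy-before-validate discipline (Python, illustrative only; a real kernel does this at the domain-crossing boundary with hardware support): snapshot the caller’s argument into the trusted domain first, so a concurrent writer cannot change it between the check and the use.

  def kernel_call_unsafe(user_buf):
      # TOC/TOU flaw: user_buf remains writable by other user threads
      # between this validation (time of check) and the use below.
      if len(user_buf) <= 16 and all(0 <= b < 256 for b in user_buf):
          return bytes(user_buf)        # time of use: may see altered data
      raise ValueError('invalid argument')

  def kernel_call_safe(user_buf):
      snapshot = tuple(user_buf)        # copy into the new domain FIRST
      if len(snapshot) <= 16 and all(0 <= b < 256 for b in snapshot):
          return bytes(snapshot)        # validated snapshot is what is used
      raise ValueError('invalid argument')

  assert kernel_call_safe([1, 2, 3]) == b'\x01\x02\x03'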
Synchronization for a Trusted System
• Useful operating system needs synchronization
– Is usual and customary service applications expect
• Need synchronization across access classes
– RVM must ensure information flow meets MAC policy
• Mutexes and semaphores imply shared object
– Reading and writing it is not secure across access levels
• Use alternative NOT based on mutual exclusion
• Two kinds of objects are defined for computation
– Eventcount to signal and observe progress
• Primitives advance(E), read(E), await(E,v)
– Sequencer to assign an order to events occurring
228
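A sketch of the eventcount and sequencer interface in Python (the primitives advance, read, and await, and the sequencer’s ordering role, come from the slide; the condition variable inside this toy version exists only to make it runnable on ordinary threads, whereas the point of the design is that the interface exposes no read-write shared object across access classes).

  import threading

  class Eventcount:
      """Signal and observe progress: advance(E), read(E), await(E, v)."""
      def __init__(self):
          self._value = 0
          self._cond = threading.Condition()
      def advance(self):                # signal that one more event occurred
          with self._cond:
              self._value += 1
              self._cond.notify_all()
      def read(self):                   # observe how many events so far
          with self._cond:
              return self._value
      def await_(self, v):              # block until at least v events
          with self._cond:
              self._cond.wait_for(lambda: self._value >= v)

  class Sequencer:
      """Assign a total order to events: tickets are unique and increasing."""
      def __init__(self):
          self._next = 0
          self._lock = threading.Lock()
      def ticket(self):
          with self._lock:
              t, self._next = self._next, self._next + 1
              return t

  E = Eventcount()
  threading.Thread(target=E.advance).start()
  E.await_(1)            # returns once the other thread has advanced E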
Secure System Design & Development
• Architectural considerations (Gasser chapter 11)
– Applies to development of security kernels
– As well as to their applications
• Operating-system layering
– Promote structured design for kernel assurance
– Operating systems services on kernel
• Asynchronous attacks and argument validation
• Protected subsystems
– Static process, on-demand process, multiple domains
229
Protected Subsystem in Active Process
• DBMS runs as a process
– Inherently no different from a user process
– Normal OS access controls limit access to DBMS
• DBMS must control individual users’ access
– To different files and different portions of files
230
Protected Subsystem on Request
• Subsystem activated as a separate process
– Each time it is needed
• While retaining its own identity
– Separate from that of the invoking process
231
Mutually Suspicious Subsystems
232
Management of SK Rings and Labels
• For a system, privileged services bound to ring
– Static binding for given system architecture
– Kernel is permanently bound to PL0
• Ring bracket (RB) associates object with domain
– RB encoded in 3 ring numbers (RB1,RB2, RB3)
• General trusted system has at least 3 domains
– Kernel, operating system, applications
• Non-discretionary is mandatory policy, i.e., MAC
• Each can be represented by access class label
– Labels can be compared by dominance relation
– Combine confidentiality, integrity, dominance domain
233
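A minimal sketch of label dominance (Python, illustrative; GEMSOS access classes also carry integrity components, which this omits): one label dominates another iff its level is at least as high and its category set is a superset. Non-comparable labels, where neither dominates, are what make the label space a lattice rather than a total order.

  # Access class label: (sensitivity level, frozenset of categories).
  def dominates(a, b):
      """a dominates b iff level(a) >= level(b) and cats(a) ⊇ cats(b)."""
      level_a, cats_a = a
      level_b, cats_b = b
      return level_a >= level_b and cats_a >= cats_b

  TS_NATO  = (3, frozenset({'NATO'}))
  S_NATO   = (2, frozenset({'NATO'}))
  S_CRYPTO = (2, frozenset({'CRYPTO'}))

  assert dominates(TS_NATO, S_NATO)        # comparable: TS {NATO} >= S {NATO}
  assert not dominates(S_NATO, S_CRYPTO)   # non-comparable pair:
  assert not dominates(S_CRYPTO, S_NATO)   #   neither label dominates the other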
Secure System Design & Development
• Architectural considerations (Gasser chapter 11)
– Applies to development of security kernels
– As well as to their applications
• Operating-system layering
– Promote structured design for kernel assurance
– Operating systems services on kernel
• Asynchronous attacks and argument validation
• Protected subsystems
– Static process, on-demand process, multiple domains
• Secure file systems
– Alternate naming structures; unique identifiers
234
MLS Hierarchical File System
235
Security Kernel Objects
• Minimization of kernel
– “Economy of mechanism”
– “Significantly more complicated OS outside kernel”
• Implies kernel cannot be compatible with insecure O/S
• All subjects need system-wide name for objects
– Each subject must be able to identify shared object
– “Flat” naming is classic covert channel example
• Object hierarchy naming
– BLP hierarchy with “compatibility” meets need
– Biba “inverse compatibility” for integrity needed
• Least common mechanism drives reuse
– OS creates “directories” out of objects from kernel
236
Security Kernel Support for Segments
• Segmented instruction execution
– Enforced for code executing outside the kernel
• All memory addresses a pair of integers [s, i]
– "s" is called the segment number;
– "i" the index within the segment.
• Segment number is process local name (PLSN)
• Systems have similar kernel API to add segment
– Kernel is invoked to “make known” a new segment
• Descriptor table defines process address space
– Is a list of all the segments CPU can address
– Must include code segment and stack segment
237
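A toy sketch of segmented addressing (Python, illustrative only; the name make_known follows the slide, everything else is invented): every reference is a pair [s, i], and it resolves only if the descriptor table maps s to a segment and i is within bounds.

  # Descriptor table: process-local segment number (PLSN) -> segment contents.
  descriptor_table = {
      0: bytearray(b'code segment ...'),   # code segment
      1: bytearray(64),                    # stack segment
  }

  def load(s, i):
      """Resolve address [s, i]; fault if s is unknown or i is out of bounds."""
      seg = descriptor_table.get(s)
      if seg is None or not 0 <= i < len(seg):
          raise MemoryError(f'fault on address [{s}, {i}]')
      return seg[i]

  def make_known(plsn, segment):
      """Kernel call: add a segment to the process address space."""
      descriptor_table[plsn] = segment

  make_known(2, bytearray(b'shared data'))
  assert load(2, 0) == ord('s')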
Secure System Architecture Summary
• Architectural considerations (Gasser chapter 11)
– Applies to development of security kernels
– As well as to their applications
• Operating-system layering
– Promote structured design for kernel assurance
– Operating systems services on kernel
• Asynchronous attacks and argument validation
• Protected subsystems
– Static process, on-demand process, multiple domains
• Secure file systems
– Alternate naming structures; unique identifiers
238
INF523
Introduction to MLS File System:
GARNETS Case Study
Professor Clifford Neuman
Lecture 12
April 13, 2016
GARNETS Example on Security Kernel
• Case study of broad application on top of TCB
• Security kernel TCB minimization severely limits
– Can present only a primitive interface
• Lacks typical OS rich variety of functions
– Argument that high assurance is “unusable”
• MAC enforcement constrains untrusted subjects
– Argued to render application development “impossible”
• Analysis of GARNETS operating system
– Gemini Application Resource and Network Support
– Uses only TCB mechanism to provide interface
– Interface is “friendly” and flexible
240
Standard GARNETS Architecture
241
Design Objectives
1. General purpose file system interfaces
– Permit application libraries to be ported to interface
2. Both MAC and DAC exclusively from TCB
– DAC subject on top of kernel provides “strong DAC”
3. File system is multilevel
– Managed by single-level subjects
4. All file system operations are atomic
5. No read locks are used
– Applications can “read down” subject to DAC
6. Applications access only the GARNETS file system
7. GARNETS itself designed to meet Class B2
242
Overview of GARNETS Architecture
[Diagram: Applications over GARNETS over GEMSOS discretionary policy enforcement over the kernel (mandatory policy enforcement) over hardware]
243
Kernel Exports Segments
• Segmented instruction execution
– Enforced for code executing outside the kernel
• CPU memory addresses are pair of integers [s, i]
– "s" is called the segment number;
– "i" the index within the segment.
• Segment number is process local name (PLSN)
• TCB has API similar to kernel to add segment
– Kernel is invoked to “make known” a new segment
• Descriptor table defines process address space
– Is a list of all the segments CPU can address
– DAC subject needs code segment and stack segment
244
Domains in GARNETS Architecture
[Diagram: Ring 6 – Applications; Ring 5 – GARNETS; Ring 3 – GEMSOS discretionary policy enforcement; Ring 0 – kernel, mandatory policy enforcement; the TCB perimeter encloses Rings 0 and 3, above the hardware]
245
Kernel Creates (Domains) Rings
• For a system, privileged services bound to ring
– Static binding for given system architecture
– Kernel is permanently bound to PL0
• Ring bracket (RB) associates object with domain
– RB encoded in 3 ring numbers (RB1,RB2, RB3)
• General trusted system has at least 3 domains
– Kernel, operating system, applications
• Non-discretionary is mandatory policy, i.e., MAC
• Each subject represented by access class labels
– Labels can be compared by dominance relation
– Combine confidentiality, integrity, dominance domain
246
DAC TCB in GARNETS Architecture
[Diagram: same ring structure, highlighting Ring 3, the GEMSOS discretionary policy enforcement layer, within the TCB perimeter]
247
DAC TCB Exports DACLS & Msegs
• Segments
– Fundamental storage object
– Loci of mandatory control
– Processes simultaneously and independently share
• DAC Access Control Lists (DACLs)
– A segment containing limited number of ACLs
– Interpretively accessed object exported by TCB
– Building block for GARNETS directories
• Multisegments (“msegs”) exported by DAC TCB
– Collection of zero or more segments
– Segments are accessible only as elements of msegs
– All its segments hierarchically related to single base
248
GARNETS in the Architecture
[Diagram: same ring structure, highlighting Ring 5, the GARNETS layer, which sits outside the TCB perimeter]
249
GARNETS Creates Files & Directories
• Directories
– Rely only on TCB (not GARNETS) for access control
– DAC access to an object based only on its ACL
• Named multisegments, distinct from files
– Are namable directory entries
– Its segments directly accessed via hardware
• Files interpretively accessed by applications
• File system built from three parallel trees
– Directory tree of DACLs with mseg for dynamic data
– File tree of DACLs which host file mseg for each file
– Huge mseg with segments that mirror directory tree
250
Gasser: MLS Hierarchical File System
251
Directory Properties
• Single-level directories
– Contains information all at one access class
– Subdirectory creation information is in parent
• Names and creation dates
• Visible to parent-level subjects
• Upgraded directories
– Kernel forces compatibility property to be met
– Dynamic information in upgraded directory itself
• Time of last modification and directory contents
• Visible only at the upgraded access class
252
TCB Subsets for GARNETS
[Diagram: TCB subsets for GARNETS, spanning partitions from Unclassified to Top Secret.
Ring 7: network services, web server, shell, utilities, libraries – user interface & application processing (Mission Applications)
Ring 5: file systems, networks, etc. – GARNETS OS / middleware services & APIs (GARNETS Operating System)
Ring 3: DAC services & storage management – DAC policy enforcement (DAC TCB (DTCB) Subset)
Ring 0: MAC policy enforcement – verifiable security kernel (System Reference Monitor)
Segmented hardware: shared storage, separation, etc.
GEMSOS delivers MLS sharing “out of the box” among strongly separated partitions]
253
Components of GARNETS Directory
• GM – Huge mseg gives directory tree roadmap
254
GARNETS Directory Structure
[Diagram: directory components – the GM mseg at the root of the directory tree]
255
Components of GARNETS Directory
• GM – Huge mseg gives directory tree roadmap
• DTD – Directory Tree DACL
– Control access to tree from which directories are built
– ACLs for directory entries
256
Directory Tree DACL Component
[Diagram: directory components – GM plus the DTD, with a subdirectory]
257
Components of GARNETS Directory
• GM – Huge mseg gives directory tree roadmap
• DTD – Directory Tree DACL
– Control access to tree from which directories are built
– ACLs for directory entries
• DM – Directory Multisegment
– Dynamic data for directory entries
258
Directory Multisegment Component
[Diagram: directory components – GM, DTD, and DM, with a subdirectory]
259
Components of GARNETS Directory
• GM – Huge mseg gives directory tree roadmap
• DTD – Directory Tree DACL
– Control access to tree from which directories are built
– ACLs for directory entries
• DM – Directory Multisegment
– Dynamic data for directory entries
• FTD – File Tree DACL
– Used to extend the tree
260
File Tree DACL Component
[Diagram: directory components – GM, DM, DTD, and FTD, with subdirectories]
261
Components of GARNETS Directory
• GM – Huge mseg gives directory tree roadmap
• DTD – Directory Tree DACL
– Control access to tree from which directories are built
– ACLs for directory entries
• DM – Directory Multisegment
– Dynamic data for directory entries
• FTD – File Tree DACL
– Used to extend the tree
• FD – File DACL
– ACLs for file entries in the directory
262
File DACL Component
[Diagram: directory components – GM, FTD, DTD, DM, and FD, with files and a subdirectory]
263
Components of GARNETS Directory
• GM – Huge mseg gives directory tree roadmap
• DTD – Directory Tree DACL
– Control access to tree from which directories are built
– ACLs for directory entries
• DM – Directory Multisegment
– Dynamic data for directory entries
• FTD – File Tree DACL
– Used to extend the tree
• FD – File DACL
– ACLs for file entries in the directory
264
GARNETS Directory Structure
[Diagram: full directory structure – GM, FTD, DTD, DM, and FD components, with files and subdirectories]
265
Management of Upgraded Directories
• Initialization is exported to GARNETS interface
– Must be done by subject at upgraded access class
– Cannot be done at access class of parent directory
– For uniformity, is the same for both normal and upgraded
• Implication for deletion of upgraded directories
– To meet kernel restriction will require trusted subject
• Need not have entire range of system’s access classes
• Range encompasses parent and upgraded
• To limit deletion, GARNETS limits creation
– Users are required to have a “special authorization”
– Some environments might prohibit user creation
266
INF523
(first some discussion of)
MLS Implications in Garnets
(then new topic)
Subversion
Professor Clifford Neuman
Lecture 13
April 20, 2016
OHE 100B
Review of GARNETS Architecture
[Diagram: Ring 6 – Applications; Ring 5 – GARNETS; Ring 3 – GEMSOS discretionary policy enforcement; Ring 0 – kernel, mandatory policy enforcement; the TCB perimeter encloses Rings 0 and 3, above the hardware]
268
Named Multisegments (Msegs)
• Msegs are namable directory entries
– A hierarchically structured collection of segments
– Single base segment ACL, for all segments in mseg
– Each segment has an explicit access class
• In contrast to files, not interpretively accessed
– Segments accessed directly by available hardware
– Segments are included in process address space
• Msegs used to contain GARNETS internal data
– Must be protected from less-privileged subjects
– Uses GEMSOS ring mechanisms to insure integrity
– Uses DACLs from DTCB to protect internal data
269
Benefits of Named Msegs
• Avoid unnecessary buffering
– No per-process file buffer for data
– Code is not copied from file into executable segments
– Code is stored in right-size segments, executed directly
– Code is not modifiable, so many processes can execute
• Promotes sharing of executables
– Reduced use of real random-access memory
– Reduced swapping increases performance
• Direct hardware access reduces context switch
• Application can use for databases and libraries
• Highly efficient IPC and synchronization
270
Single Level Files
• Interpretively accessed at GARNETS interface
– Applications make calls for file operations
– Inside GARNETS are created from segments
• GARNETS manages each individual file
– One ACL is associated with each file
– Maintains attributes, e.g., time of last modification
– Time of last read only updated for same level subjects
• Design rejected multilevel files
– Careful application design eliminates most needs
– Would create incoherent interface
– Not clear how to avoid multilevel (trusted) subjects
271
The Gizillion Problem
• Problem of very large number of access classes
– Must be addressed for flexible untrusted applications
– Potentially many access classes in underlying TCB
– GEMSOS: two sets of 16 levels and 96 categories
• TCB minimization limits complex data structures
– Objective is avoiding elaborate constructs
– GEMSOS provides one base object per access class
– Each access class must construct its own data
• Handle previously unencountered access class
– GARNETS subject must create data for applications
– At minimum OS creates application stack at level
272
Alternatives for New Access Classes
• GARNETS administrator creates data structures
– User requests directory at the new access class
– New upgraded directory below system low root
– File system data structures at each new access class
– Administrator requires access to full range of classes
– Depends on timely response by administrators
• Create all possible access classes a priori
– A base directory is always available when needed
– BUT it is untenable to create a gizillion bases
• Trusted process to automate creation process
– Designers would fail to meet “untrusted” objective
– Far too complex to meet high assurance
273
GARNETS Gizillion Problem Strategy
• DTCB creates DACL at first occurrence of class
– DTCB has function to locate that DACL
– Has “access class to path” algorithm to base segment
• At first occurrence GARNETS builds a directory
– Per Access Class (PAC) directory at new class
– Location for application “home” directory
• GARNETS can then support non-TCB subjects
• TCB tools install GARNETS bootstrapping code
– Code not located in GARNETS file system.
– In separate data structures at predefined location
274
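A sketch of the flavor of an “access class to path” mapping (Python, purely illustrative; the actual DTCB algorithm is not given in these slides): derive a deterministic location from the access class itself, so the first subject to encounter a new class can find, or build, the per-access-class (PAC) base without an administrator.

  # Illustrative only: a deterministic path per access class, so every
  # subject computes the same PAC directory location independently.
  def access_class_to_path(level, categories):
      cat_part = '.'.join(sorted(categories)) or 'none'
      return f'/pac/level{level}/{cat_part}'

  def pac_directory(fs, level, categories):
      """Find the PAC base for a class, building it on first occurrence."""
      path = access_class_to_path(level, categories)
      if path not in fs:               # first occurrence of this class
          fs[path] = {'home': {}}      # build the per-class directory here
      return fs[path]

  fs = {}
  d1 = pac_directory(fs, 2, {'NATO'})
  d2 = pac_directory(fs, 2, {'NATO'})
  assert d1 is d2                      # same class always maps to same base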
File System Object Naming
• Alias names for objects supported by GARNETS
– All names must be deleted before object deleted
• Symbolic links are path to target object
– TCB prevents creation of hard links
– Can have links to files and named msegs
– Can have links to directories and other links
• Existence of intervening links invisible on access
– Cycles controlled by number of links traversed in path
• Per Access Class (PAC) links
– Link has a field for the access class
– GARNETS finds PAC directory for access class
275
Leveraging Gizillion Solutions
• Supports use of single-level volumes
– Single level file systems distributed on volumes
– Symbolic links permit binding to multilevel structures
• Volumes transparent to application data access
– Volume access class range implies covert channels
– Volume range simplifies physical control of media
• GARNETS supports working directories
– Simplifies naming of subordinate objects
– Multiple working directories employed for volumes
276
GARNETS Self-protection
• GARNETS uses rings properly to be effective
– Applications operate in a less privileged domain
– Interpretively access objects protected, e.g., files
– Internal data structures protected from applications
• GARNETS ring brackets
– Some directories are dedicated to use by GARNETS
– Range of rings of subjects that will be granted access
– Apply to all objects in a directory
– Permanently set when directory is created
277
Consistency and Concurrency
• File consistency
– Used to address discontinuities in operation
– Permit fine-grained robustness selection
• File System Concurrency Control
– Doesn’t ensure total ordering of file system operations
– Each file system object has a version number
• Leverages TCB primitive for atomic updates
– Avoids conflict with real-time properties
– Strict two-phase commit for directory components
– Kernel API can atomically update doubly threaded list
278
Summary of MLS Implications
• GARNETS file system on high assurance TCB
– Represents complex general-purpose application
– Untrusted implementation is MLS context
– Sufficiently flexible for broad spectrum of uses
• File system managed by single-level subjects
– Leverage symbolic links
– Solution to gizillion problem
– Employable in single-level volume configurations
– Permits upgraded directories and multilevel msegs
• GARNETS protects itself and its data structures
– Exploits rings and DACLs from high assurance TCB
279
Network File System (NFS) Security
• Case study of NFS subversion demonstration
– Running example by US Navy masters student
– Emory A. Anderson, III, for Prof Cynthia Irvine (NPS)
– Shown to Richard Clarke, “first cybersecurity czar”
• First, consider security implications for system
– How deeply rooted are adverse consequences
• Second, explore applicability to other systems
– Address whether attack approach is limited to NFS
– Briefly examine Anderson SSL subversion design
• Follow on – Later NFS case study of mitigation
– Compare to Anderson recommended solution
280
Likely Tool of Professional Attacker
• Subversion is technique of choice [And 1.D]
– Professional distinguished from amateur
• A primary objective is avoiding detection
– Amateur often motivated by desire for notoriety
• Professional often well-funded
– Resources to research and test in closed environment
– Amateur tends to numerous attempts on live target
– Flawless execution reduces risk of detection
• Coordinated attacks are mark of a professional
• Professional will invest and be patient to use
– Subverter is likely different from the attacker
281
Demonstration of Subversion
• Obfuscation of artifice not given serious attention
– Would be of utmost importance to professional attack
• Subversion can occur multiple points in lifecycle
• Selected distribution phase for demonstration
– Driven by limited resources and access of student
– Facilitated by NFS on open source Linux system
– Representative of attacker mirror site opportunities
• Closed source not daunting for professional
– May involve reverse engineering application
– Might create a binary patch to insert in binaries
– Entirely within anticipated professional resources
282
Choice of NFS as Suitable Application
• For impact, need readily apparent significance
– NFS is application familiar to typical IT user
– Users understand notion of need to protect data
• Activation needs to be straightforward
– Network interface chosen for ease of explanation
– Internet technology is widely used
• Choose to have remote activation
– Representative of low risk for attacker
– Also supports local activation, e.g., via loopback
– Trigger is a malformed Internet packet
• Study of subversion method benefits student
283
Case System and Activate the Artifice
284
Attacker Uses Artifice for NFS Access
285
End Session by Deactivating Artifice
286
Design Properties of NFS Artifice
• Purpose of artifice to bypass file permissions
– Bypass check for a specified user at will
– Then re-enable the normal system operation
• Exhibits all the characteristics of subversion
– Exception was no attempt to hide or obfuscate
• Artifice is small – eleven C statements
– Small in relation to millions LOC in Linux kernel
– Unlikely to be noticed by those in Linux development
• Can be activated and deactivated
– Further complicates attempts to discover existence
• Does not depend on activities of a system user
287
Artifice Functions
• Composed of two parts in two unrelated areas
• Subvert a portion of kernel that receives packets
– Recognize a packet with distinguishing characteristics
– Activation based on trigger known only to subverter
– Extends normal check for packet checksum
• Activation recorded in global variable in kernel
• Subverts Linux file system permission checks
– Check global kernel variable to see if activated
– Grants attacker access to any file in the system
– Bypass behavior limited to specified user ID
– System functions normally for all other users
288
Artifice Activation
289
Subverted File Permission Checks
290
Separate Design of SSL Subversion
• Secure Sockets Layer (SSL) widespread use
– Secure communications between client and server
– Client and server negotiate session keys
– Encrypt traffic using symmetric encryption algorithm
• Options available to attacker for subversion
– Duplicate all communications and send to attacker
– Weaken key generation mechanism – limit entropy
– Simply send the session keys out to the attacker
• Advantages of exfiltrating session keys
– Attacker is passive and maintains anonymity.
– Subverting either client or server gives total access
291
NFS Subversion Technical Conclusions
• Practice for showing security inadequate at best
– Penetration tests and add-on third party products
– Layered defenses and security patches irrational
• Bad defense more dangerous than poor security
– Leads to flawed belief system has adequate security
– Can increase risk by more dependence on defense
• Have technology to provide appropriate security
– Evaluation criteria tried and tested
– These approaches have fallen into disfavor
• The need to address subversion is increasing
– Threat sources multiplying and reliance increasing
292
NFS Subversion System Decisions
• Must address subversion for justification of trust
– Irresponsible not to consider when deploying systems
– Otherwise flawed belief system security is adequate
• Nurture a vast industry with add-on applications
– Huge drain on resources for little or no assurance
• Objective of demonstration to raise awareness
– Enable decision maker to understand the problem
• Need to understand motive, means and opportunity
• Consider subversion practicality and consequences
– Make decision makers aware of proven technology
• Verifiable protection technology applied successfully
• Security professionals have a responsibility
293
Recall Study Goals for NFS Subversion
• First, consider security implications for system
– How deeply rooted are adverse consequences
• Second, explore applicability to other systems
– Address whether attack approach is limited to NFS
– Briefly examine Anderson SSL subversion design
• Next – NFS case study of mitigation
– Compare to Anderson recommended solution
• What else can be learned from the demo?
294
Notional Cloud Storage Security
[Diagram: clients at GM and clients at Ford connect over VPNs to multi-level secure cloud storage backed by persistent storage]
295
MLS File Sharing Server for Cloud
• Cloud storage service
– Specific type of cloud computing
– Managed resource is storage
• Needs security as good as enterprise
– Typically replaces services of enterprise environment
– Many of the same vulnerabilities as self-managed
– Additional vulnerabilities specific to the cloud
• Current solutions are completely ineffective
– Essential problem is construct of shared infrastructure
– Built on low-assurance commodity technology
• Highly vulnerable to software subversion
296
Present-day Vulnerability Examples
297
Security Requirements of the Cloud
• Three primary cloud security requirements
– Controlled sharing of information
– Cloud isolation
– High Assurance
298
Trap Door Subversion Vulnerability
• Malicious code in platform
– Software, e.g., operating system, drivers, tools
– Hardware/firmware, e.g., BIOS in PROM
– Artifice can be embedded any time during lifecycle
– Adversary chooses time of activation
• Can be remotely activated/deactivated
– Unique “key” or trigger known only to attacker
– Needs no (even unwitting) victim use or cooperation
• Efficacy and Effectiveness Demonstrated
– Exploitable by malicious applications, e.g., Trojans
– Long-term, high potential future benefit to adversary
– Testing not at all a practical way to detect
299
Alternatives for Controlled Sharing
• Three ways controlled sharing can be facilitated:
• Massive copies of data from all lower levels
– High assurance one-way flow of information
– Light diode interface uses physics for high assurance
• File Caching (Local Active Copy)
– Retain at high level only actually used lower data
– No way to securely make requests for lower data
– Security requires manual intervention
• High assurance segmented virtual memory
300
Massive Copies Approach
[Diagram: each level stores copies of all dominated data – Top Secret holds TS, S, and U copies; Secret holds S and U copies; Unclassified holds U]
• Highly inefficient!
• Does not scale!
301
Computer Utility: Predecessor to Cloud
• Computer Utility made security a priority
• Anticipated present day cloud environment
– Controlled sharing was an integral requirement
– Incorporated MAC policy
• Evident from commercial Multics product.
– Consistent with high assurance
• Evident from the BLP Multics interpretation.
• Didn’t gain widespread acceptance
– Before Internet enabled today’s cloud computing
302
Basis to Consolidate Networks
• Foundation: Trusted Computing Base (TCB)
– The totality of protection mechanisms within a
computer system, including hardware, firmware,
and software, responsible for enforcing a security
policy
• The security perimeter is TCB boundary
– Software within the TCB is trustworthy
– Software outside the TCB is untrusted
• Mature technology -- 30 years experience
– Derived from and used for commercial products
• Key choice is assurance: low to very high
303
TCB Subsets –“DAC on MAC”
[Diagram: TCB subsets, “DAC on MAC”. Various typical applications enforce discretionary and application policies; OS services, DAC, audit, and identification run on a high assurance OS; MAC and supporting policies are enforced by a Class A1 “M-component” on the hardware]
304
Use Platform for Controlled Sharing
[Diagram: a GEMSOS shared storage platform. A Top Secret Linux shared storage server and an Unclassified Linux shared storage server each run above GEMSOS and the hardware; Linux enforces DAC over its files, while GEMSOS enforces the MAC policy at the security perimeter over the Unclassified, Top Secret, and shared files; a Top Secret IPC mechanism links the servers, which face the Top Secret network and the Unclassified network respectively]
305
Motivation to Address Cloud Security
• Cloud storage flexible, cost-effective
• How to implement in multi-level environment?
– Duplicate for each level? Loses advantages.
• Tempting target for attackers
– Can increase privilege
• Want high-assurance, MLS solution
• Want cross-domain sharing (CDS)
– Shared resources are the core cloud value proposition
– Controlled sharing of up to date information
• Solution: MLS cloud network file service
– Based on high-assurance, evaluated Class A1 TCB
306
Cloud Storage Security Challenges
• Migrate storage from local security domain
– Lose protection of “air gap” between domains
– Dramatically expands opportunity for adversaries
• Common security approaches can’t work
– Power of insidious software subversion attack tools
– Exploding “cloud computing” amplifies impact & risk
• Proven reference validation mechanism (RVM)
– Systematically codified as TCSEC “Class A1”
• Need architecture that leverages trusted RVM
– Use verifiable (i.e., Class A1) trusted computing base
– Want high cloud compatibility, e.g., standard NFS
307
Current Cloud Technology is Vulnerable
• Typically on commodity low-assurance platforms
• NetApp StorageGRID Systems
– On Linux servers or VMware hypervisors
• OnApp Storage cloud
– On various hypervisors, including Xen
• Amazon Elastic Compute Cloud
– On Xen hypervisor to isolate guest virtual machines
• Xen is representative example of low-assurance
– So-called TCB includes untrustworthy Linux, drivers
– Largely unconstrained opportunities for subversion
• NSA said system on partition kernel too complex
308
Review of GARNETS Architecture
[Diagram: Ring 6 – Applications; Ring 5 – GARNETS; Ring 3 – GEMSOS discretionary policy enforcement; Ring 0 – kernel, mandatory policy enforcement; the TCB perimeter encloses Rings 0 and 3, above the hardware]
309
Build on MLS File System Foundation
• GARNETS MLS file system is untrusted application
– In a layer, in a separate ring, above the TCB
– Creates file management that uses TCB objects
• MLS file system acts like standard file system
– Spans multiple domains protected by the TCB
– Creates one logical name space for all domains
• Run GARNETS instances at multiple domains
– Means no massive copies needed for sharing
– Can read both directories and files of lower domains
• For cloud storage need to add network interface
– Multiple network interfaces for multiple domains
310
Target Secure Cloud Storage
• Operates like standard network file storage
• BUT, verifiable security for
– MAC separation of security domains
[Diagram: high assurance secure network file storage, with storage media behind it, serving highly sensitive data to highly sensitive domains and low sensitivity data to low sensitivity domains, e.g., the open Internet]
311
Run NFS on Top of MLS File System
• System challenge is a mostly compatible NFS
– Accept the few intrinsic limitations, e.g., time last read
• Ease of porting NFS wants familiar OS interface
– Requires creating “compatible OS” [GAS 10.7.2]
– Demonstration chose POSIX style interface
– Intended to support Linux source code compatibility
• In contrast, GARNETS is not compatible
– Have to create a Linux system call interface
• Clients access files on server through NFS calls
– Clients are untrusted
– Any client using standard NFS protocol can access
312
NFS Low Domain S/W Instantiation
[Diagram: the low domain software stack. Non-TCB operating system: various applications, net support (e.g., portmap), file services (e.g., cp, rm, ls), and the NFS daemon above a Linux-based system call interface with sockets and a TCP/UDP + IP stack + data link, entered through a Linux gate. Beneath it, the distributed DAC TCB: the GARNETS MLS file system behind the DAC TCB gate library interface (file, memory, clock, sockets). Beneath that, the MAC (MLS) TCB: MLS storage management on GTNP (GEMSOS) behind the kernel gate library interface, managing Ethernet NICs, memory, clock, CPUs, DACLs, msegs, named msegs, volumes, and disks. Network clients at label L reach segments on a volume with label L]
313
For MLS Run Multiple NFS Instances
• Need a separate instance for each level
– Is the only way NFS can be untrusted software
• Clouds often use virtual machine monitor (VMM)
– Have noted low-assurance virtual machine problem
• Secure NFS demo leverages GEMSOS VMM
– FER from NSA describes trusted MLS virtualization
– Is NOT Type I virtualization, i.e., cannot run unmodified binary OSes
– Each NFS instance runs in its own virtual machine
• MLS from TCB cannot be bypassed by VMM
– Can be configured for typical isolation
– In contrast to most VMMs, has controlled sharing
314
FER References GTNP VMM
• Sec 2.3, para 2:
– “The GTNP is intended to support "internal subjects"
– Virtual machines per Section 1.3.2.2.2 of the TNI.
• Sec 2.3.1, page 12:
– Implementing A-components as virtual machines
– Layer between M-NTCB and untrusted VM subjects
• Sec 2.3.1, page 12:
– VM on top of VMM provided by M-component
• Sect 2.3.1.2 Virtual Machine Interface,
– Way to compose other components with GTNP
• Sec 4.2.1.8.2.1, para 2: VM supports users
315
More FER References GTNP VMM
• Sec 4.2.2, para 1:
– Gate with hardware provide a VMM base
• Sec 9.2, page 181:
– Rings support VMs in the NTCB MAC partition
• Sect. 10.3, subpara 2:
– Multilevel device built as a VM on GTNP VMM
• Sect 10.6, para 1:
– Implement other network partitions as VM
• Appendix C, EPL Entry, page C-2:
– GTNP supports a virtual machine monitor interface
316
NFS High Domain S/W Instantiation
[Diagram: high-domain software stack, structurally identical to the low-domain instantiation: untrusted non-TCB OS with NFS daemon, net support, and file services over the GARNETS MLS file system, the distributed DAC TCB, and the GTNP (GEMSOS) MAC (MLS) TCB. Here the network clients carry label H, and the GARNETS instance reaches both the volume with label L (readable lower domain) and the volume with label H.]
317
Can Support Extended Cloud Services
• Usually have metadata server for cloud services
– Maps object names to actual locations in the cloud
• Single level servers in the network can access
– May use FTP to transfer files to/from MLS file server
– Applications using NAS protocols can access its files
• BEWARE of sloppy system security engineering
– Can’t initiate information transfers between domains
– TNI provides the network engineering framework
• Separate NIC per domain does not scale well
– Next look at how to use a single hardware interface
– Need equivalent of a TCSEC/TNI multilevel device
318
Security for Untrusted Servers
[Diagram: commercial untrusted servers in a highly sensitive data domain and in a low-sensitivity domain (e.g., the open Internet) connect through a verifiably secure platform (e.g., GEMSOS Class A1 TCB on Intel ia32 H/W) to removable storage, persistent storage, and æSec storage segments. Standard protocols (e.g., NAS, FTP) are supported, with: strong separation, high integrity, n-to-n controlled sharing, trusted distribution, and multiprocessor scalability.]
319
Cloud Controlled Sharing Summary
• Key is Mandatory Access Control (MAC)
– Control isolation & sharing between security domains
• Defining properties: global and persistent
– Control information flow (confidentiality)
• Prevents malicious information exfiltration
– Control contaminated modification (integrity)
– Sound mathematical foundation
• Implement with distinct access class labels
– Label domain of information with access class
– User has authorized access classes, i.e., domains
• Supports MAC, viz., multilevel security (MLS)
320
INF527:
Secure System Engineering
MLS Cloud Storage
Professor Clifford Neuman
Lecture 13 continued
Recall Study Goals for NFS Subversion
• First, consider security implications for system
– How deeply rooted are adverse consequences
• Second, explore applicability to other systems
– Address whether attack approach is limited to NFS
– Briefly examine Anderson SSL subversion design
• Next – NFS case study of mitigation
– Compare to Anderson recommended solution
• What else can be learned from the demo?
322
Notional Cloud Storage Security
[Diagram: multi-level secure cloud storage with persistent storage; separate VPNs connect two client communities, GM and Ford.]
323
MLS File Sharing Server for Cloud
• Cloud storage service
– Specific type of cloud computing
– Managed resource is storage
• Needs security as good as enterprise
– Typically replaces services of enterprise environment
– Many of the same vulnerabilities as self-managed
– Additional vulnerabilities specific to the cloud
• Current solutions are completely ineffective
– Essential problem is the construct of shared infrastructure
– Built on low-assurance commodity technology
• Highly vulnerable to software subversion
324
Present-day Vulnerability Examples
325
Security Requirements of Cloud
• Three primary cloud security requirements
– Controlled sharing of information
– Cloud isolation
– High Assurance
326
Trap Door Subversion Vulnerability
• Malicious code in platform
– Software, e.g., operating system, drivers, tools
– Hardware/firmware, e.g., BIOS in PROM
– Artifice can be embedded any time during lifecycle
– Adversary chooses time of activation
• Can be remotely activated/deactivated
– Unique “key” or trigger known only to attacker
– Needs no (even unwitting) victim use or cooperation
• Efficacy and Effectiveness Demonstrated
– Exploitable by malicious applications, e.g., Trojans
– Long-term, high potential future benefit to adversary
– Testing not at all a practical way to detect
327
Alternatives for Controlled Sharing
• Three ways controlled sharing can be facilitated:
• Massive copies of data from all lower levels
– High assurance one-way flow of information
– Light diode interface uses physics for high assurance
• File Caching (Local Active Copy)
– Retain at high level only actually used lower data
– No way to securely make requests for lower data
– Security requires manual intervention
• High assurance segmented virtual memory
328
Massive Copies Approach
[Diagram: the Top Secret system stores TS, S, and U copies; the Secret system stores S and U copies; the Unclassified system stores only U.]
• Highly inefficient!
• Does not scale!
329
Computer Utility: Predecessor to Cloud
• Computer Utility made security a priority
• Anticipated present day cloud environment
– Controlled sharing was an integral requirement
– Incorporated MAC policy
• Evident from commercial Multics product.
– Consistent with high assurance
• Evident from the BLP Multics interpretation.
• Didn’t gain widespread acceptance
– Before Internet enabled today’s cloud computing
330
Basis to Consolidate Networks
• Foundation: Trusted Computing Base (TCB)
– The totality of protection mechanisms within a computer system
– Including hardware, firmware, and software
– Responsible for enforcing a security policy
• The security perimeter is TCB boundary
– Software within the TCB is trustworthy
– Software outside the TCB is untrusted
• Mature technology -- 30 years experience
– Derived from and used for commercial products
• Key choice is assurance: low to very high
331
TCB Subsets – “DAC on MAC”
[Diagram: various typical applications run on a high-assurance OS. OS services, DAC, audit, and identification enforce the discretionary and application policies; beneath them, the Class A1 “M-component” enforces the MAC policy and supporting policies directly above the hardware.]
332
INF523 SUPPLEMENTAL MATERIAL
(if time in semester)
Introduction to Crypto Seal Guards
Case Study
Professor Clifford Neuman
Supplemental
Crypto Seal Guard Technology History
• Concept: label cryptographically sealed to data
• Conceived ~1980 for AF Korean Air Intelligence
• GEMSOS uses to meet TCSEC “Label Integrity”
– Gemini Trusted Network Processor (GTNP) (1995)
– Stored data (disk, tape) in Class A1 Evaluation
• GEMSOS uses for “Trusted Distribution”
– Authoritative distribution media crypto sealed
– Only sealed TCB software can be installed and run
• POC applied to packets exchanged by guards
– Each guard is MLS – both a high and low interface
348
GEMSOS Support for Crypto Seals
• GEMSOS used crypto seals to meet Class A1
– To meet Class A1 Label Integrity requirements
– Integral to Trusted Recovery & Trusted Distribution
• GEMSOS publishes security services via APIs:
– Data Sealing Device (and Cryptographic Services)
– Key Management
– Trusted Recovery & Distribution
• GemSeal uses GEMSOS APIs for crypto seals
– Previously evaluated, stable, public interfaces
– Minimal new trusted code
• Generate seal
• Validate integrity/authenticity of sealed packet & label
• Release packet to equivalently labeled destination
349
Overview of Seals for Shared Networks
• Proof of Concept (POC) demonstration done
– Crypto seal release guards
– Preproduction Class A1 MLS platform
• Access low network across system high network
– Controlled interface protects system high data
– Vertical data fusion with reduced footprint
• Benefits of crypto seal release guards
– Swift implementation for MLS systems
– Available core enabling technology for MLS
– Rapid path to certification and accreditation (C&A)
– Supports entire range of security domains
– Mature deployed NSA Class A1 TCB and RAMP plan
350
Constraints to Access Lower Networks
[Diagram: a multi-level secure connection joins a high network and a low network.]
• Any low connection = Multi-Level
– Must be Multi-Level Secure (MLS)
– Low/Medium assurance ineffective
• Doesn't protect against subversion
• Vulnerabilities unknown (unknowable)
• Isolation obstructs missions
– Vertical data fusion
– Tactical situational awareness
– Timely access to open source data
– Efficient utilization of resources
351
GemSeal POC Uses MLS Technology
• Class A1 TCB - GEMSOS™ security kernel
• Class A1 Ratings Maintenance Plan (RAMP)
• MLS aware crypto seal release guard
– Gemini calls it the GemSeal™ concept
• Technology Benefits
– Minimize new trusted code development
– Extensible to gamut of MLS capable systems
• High assurance resists subversion
– Verifies absence of malicious code
– Effective application of science
– Key enabler for demanding accreditation, e.g., PL-5
352
How Guard Seals a Packet
• Packet switched network design, e.g. Internet
• Concept involves multiple guards
– POC has one or more “workstation” guards
– POC has one or more “sensor” guards
– Connected via a common system-high network
• Each guard has both high and low interfaces
• Sealing packets – forwarding from low to high
– Bind source interface (low) label to each packet
– Generate cryptographic seal of packet data + label
– “Low-sealed” packets include packet data + seal
– “Low-sealed” packets sent via high network interface
353
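To make the seal concrete, here is a minimal sketch in C of the seal-and-validate logic, assuming an HMAC-SHA-256 seal via OpenSSL; the real GemSeal sealing algorithm, key handling, and packet layout are not given in these slides, so the structure and function names below are illustrative assumptions.

#include <stddef.h>
#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Hypothetical sealed-packet layout: payload plus source interface label. */
struct labeled_packet {
    unsigned char label;        /* e.g., 0 = low, 1 = high */
    unsigned char data[1024];
    size_t data_len;
    unsigned char seal[32];     /* HMAC-SHA-256 over label + data */
};

/* Seal: bind the source interface label to the packet (low-to-high path). */
static void seal_packet(struct labeled_packet *p,
                        const unsigned char *key, int keylen) {
    unsigned int seal_len = sizeof(p->seal);
    HMAC_CTX *ctx = HMAC_CTX_new();
    HMAC_Init_ex(ctx, key, keylen, EVP_sha256(), NULL);
    HMAC_Update(ctx, &p->label, 1);          /* the label is under the seal */
    HMAC_Update(ctx, p->data, p->data_len);  /* and so is the payload */
    HMAC_Final(ctx, p->seal, &seal_len);
    HMAC_CTX_free(ctx);
}

/* Release: recompute the seal; release only if it matches AND the
 * destination interface label equals the packet's validated label. */
static int may_release(const struct labeled_packet *p, unsigned char dest_label,
                       const unsigned char *key, int keylen) {
    struct labeled_packet tmp = *p;
    seal_packet(&tmp, key, keylen);
    return dest_label == p->label &&
           CRYPTO_memcmp(tmp.seal, p->seal, sizeof(p->seal)) == 0;
}

Note that an unsealed packet, or one sealed without the key, fails may_release; that is exactly the “unsealed high data cannot exit” property of the next slides.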
How Guard Releases a Packet
• Releasing packets – delivering from high to low
– Release ONLY packets with seal-validated labels
– Seal and label are removed before being released
• Only released to interfaces matching labels
– Allows low data to traverse & exit high network
– Concept supports multiple release guards
• Assures integrity of BOTH data AND label
– Packet data is not altered
– Source sensitivity label is authentic for this packet
354
AF Crypto Seal POC Demonstration
Crypto seal release guards connect low resources across the high network but protect high data.
[Diagram: a network-ready device (e.g., camera) sits on a low network behind one GemSeal guard; a low computer sits behind another. Both guards also attach to the system high network, which serves a system high computer. Guards seal packets with the source network label and forward them over the high network; guards validate low-sealed packet seals and labels before release to the destination. Unsealed high data cannot exit.]
355
Summary of AF POC Demonstration
• Sensor (video) stream + command and control
– Low sensor to low workstation connectivity
• Uses existing high network infrastructure
– Delivers access to low devices
• For users lacking low network infrastructure
• From controlled interface
• High network data is protected and unchanged
– Guard validates low-sealed packets before release
– Unsealed high packets cannot exit via guard
356
Summary of POC Configuration
• Two untrusted workstations with browsers
– One (“Low”) connected to “workstation guard”
– One (“High”) connected to high network
• One web server
– Connected to low-side of the “sensor guard”
• A “high” Ethernet LAN
– Connected to high-side of both guards
– Also connected to second system high workstation
• The demonstration shows that
– “Low” workstation can browse the “Low” web server
– “High” workstation has no access to “Low” web server
357
Prior Evaluation Aids Accreditors
• Simplify job with reusable accreditation results
– Certify or assess the platform once
– Focus on system-specific additions & configurations
• GTNP provides evaluable TCB platform
– Previously evaluated Class A1 for TNI M-Component
– Class A1 RAMP in place and already proved useful
• Outside of the GTNP trusted computing base
– Most of the application software will be untrusted
– Only cryptographic seal operations need be trusted
• Generate seals & release packets with validated seals
• Customer's certification and accreditation needs
– The verifiably secure MLS TCB and
– The trusted portions of the guard security services
358
POC to Deployable System Summary
• Don’t have to evaluate platform first
– RAMP is already proven
– No formal specification changes anticipated
• First: Evaluate and accredit the parts separately
– Platform (very stable, accredit new hardware ports)
– Crypto Seal implementation (as a trusted application)
– Guard applications themselves evaluated separately
• Supporting policies - audit, DAC, etc.
• Untrusted application pieces, including network stack
• Each protected by security kernel
• Last: refresh platform evaluation + accreditation
– Because already successfully evaluated & accredited
– And it’s already been RAMPed
– And the refresh won’t alter the TCB specification
359
Introduction to RECON Guard Security
• Review a classic and seminal paper
– Cite: J. P. Anderson, "On the Feasibility of
Connecting RECON to an External Network,"
Technical Report, J. P. Anderson Co., March 1981
– Often cited for both databases and communications
• RECON is an on-line interactive database
– Citations for both raw and finished intelligence reports
– Also overnight batch and canned query capability
– User may specify which file(s) to search
• Sponsor's security concerns are twofold
– Subject to penetration from external network
– Spillage of sensitive information from internal failure
360
Data Security Protection
• The data base contains two kinds of records
– Those which can be widely distributed
– Those whose distribution is restricted
• Compartmented
• Proprietary
• Originator-controlled
• Operative aspects of the security problem
– Commodity mainframe operating system
– Must be prudently assumed that trapdoors exist
• In some or much of application or operating systems
• May be activated from externally connected users
361
Previously Considered Approaches
• Put two kinds of records in separate systems
– Make entries not deemed "special" accessible
– Protected the sponsor's assets from penetration
– Rejected because of the cost of duplicate facilities
• Multilevel secure operating system
– In principle, would go far to defeat direct attacks
– Could defeat placing trapdoors and Trojan Horses
– Produce totally incompatible (with anything!) systems
– Very expensive
• Filters added to RECON software to limit access
– Nothing to control internal or external penetration
362
Guard Authenticates Releasability
• Is akin to the problem of "sanitizing" SCI
– For release to activities without proper clearances
• Permit arbitrary queries by all users
– Route query result of uncleared users to sanitizer
– Sanitization officer would manually examine output
• Sanitization officer approach works in principle
– Not practical solution because of excessive delays
– Delays cascade to produce large response times
• Adapted as proposal to solve RECON problem
– Adopt the idea of “sanitization” in a GUARD station
– Automate the identification of releasable citations
363
RECON Guard Technical Approach
364
RECON Guard Concept of Operation
• Consider all citations in one of two cases
– Releasable even if not approved for "special" citations
– Releasable only to approved individuals
• Each RECON entry designated by originator
– Whether (or not) it is releasable to external users
• Create cryptographic checksum for releasable
– Computed as the data enters the system
– A function of the entire record
– Computed by a special authentication device
– Checksum is appended to the record and stays with it
365
Representation of Basic Capability
366
Cryptographic Checksums Properties
• Principal need is assuring checksum not forged
– Good modern crypto algorithm
– Perform checksum functions outside RECON hosts
• Separate entities to create and do Guard functions
• Secret key is known only to checksum devices
– Key is never available within RECON system
– Hardwired on board with crypto processor
– Only method to forge is random guess (brute force)
• Key used for block-chained encipherment
– Excellent error or tampering detection
– Initial variable (IV) is used as half of the “secret”
– A security “kernel” in the devices controls their operation
367
Process to Create Crypto Checksum
368
Security Properties of Guard
• No spill from RECON failure or compromise
• No manipulation of RECON will cause release
• Will “fail safe” if checksum detached from data
• Not protecting against manipulation of data
• Not preventing denial of service
• Guard system itself defends against its failure
– Advanced design techniques, e.g., formal specs
– Programs placed in read-only memory
– Permits RECON to test guard message w/ loop back
369
INF523: Assurance in Cyberspace as
Applied to Information Security
Final Exam Review
Clifford Neuman
Lecture 14
27 Apr 2016
Course Evaluation
• Please do it today, if you haven’t already
371
Final Exam Details
• ROOM :
• DATE :
• TIME :
• Final will cover everything we’ve covered in the
class
• Including stuff before the midterm
• But emphasis (>50%) on things not on the
midterm
372
Three “Legs” of Security
• Policy – Definition of security for the system
• Mechanisms – Technical, administrative, and physical controls
• Assurance – Evidence that mechanisms enforce policy
[Diagram: Policy: statement of requirements that explicitly defines the security expectations of the mechanisms. Mechanisms: executable entities that are designed and implemented to meet the requirements of the policy. Assurance: provides justification that the mechanisms meet the policy, through assurance evidence and approvals based on that evidence.]
373
Some Lessons Learned
• It is impossible to build “secure” products
without a policy & reference monitor
• And it is really difficult even with them
• Perfect assurance is also impossible
374
Overview of Review for Final Exam
• Review major areas covered in course since
midterm:
– Testing
– Secure Operation
– Covert Channels
– Formal methods
375
Black Box and White Box Testing
• Black box testing
– Tester has no information about the implementation
– Good for testing independence
– Not good for test coverage
– Hard to test individual modules
• White box testing
– Tester has information about the implementation
– Knowledge guides tests
– Simplifies diagnosis of problem
– Can zero in on specific modules
– Possible to have good coverage
– May compromise tester independence
376
Layers of Testing
• Module testing
– Test individual modules or subset of system
• Systems integration
– Test collection of modules
• Acceptance testing
– Test to show that system meets requirements
– Typically focused on functional specifications
377
One Definition of Security Testing
• Functional testing: Does system do what it is
supposed to do?
– In the presence of good inputs
– E.g., Can I get access to my data after I log in?
• Security testing: Does the system do what it is
supposed to do, and nothing more?
– For good and bad inputs
– E.g., Can I get access to only my data after I log in?
• No matter what inputs I provide to the system
378
Comprehensive Def’n of Security Testing
• A process to find system flaws that would lead to
violation of the security policy
– Find flaws in security mechanisms
• I.e., security mechanisms don’t correctly enforce policy
– Find flaws that could allow tampering with or
bypassing security mechanisms
• I.e., flaw in reference monitor
• Focus is on security policy, not system function
• Security testing assumes intelligent adversary
– Test functional and non-functional security
requirements
– Test as if you were an attacker
379
Testing Security Mechanisms
• Must test security mechanisms as if they were
the subject of functional testing
– E.g., test identification and authentication
mechanisms
– Do they correctly enforce the policy?
– What if malicious inputs?
– Do they “fail safe”?
380
What to Test in Security Testing
• Violation of assumptions
– About inputs
• Behavior of system with “bad” inputs
• Inputs that violate type, size, range, …
– About environment
– About operational procedures
– About configuration and maintenance
• Often due to
– Ambiguous specifications
– Sloppy procedures
• Special focus on Trust Boundaries
381
Types of Flaws – Implementation Bugs
• Coding errors
– E.g., use of gets() function and other unchecked
buffers
• Logical errors
– E.g., time of check to time of use (“TOCTTOU”)
– Race condition where, e.g., authorization changes but access is still allowed
Victim:
if (access("file", W_OK) != 0) { exit(1); }
fd = open("file", O_WRONLY);
write(fd, buffer, sizeof(buffer));
Attacker (after the access check, before the open):
symlink("/etc/passwd", "file");
/* now "file" points to the password database */
382
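One common mitigation, sketched below (an illustrative fix, not from the slides): skip the separate access() check and validate the file descriptor that was actually opened, so there is no window between check and use. The specific flags and the ownership check are assumptions about what the victim wants to enforce.

#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int open_for_write_checked(const char *path) {
    /* Open first; O_NOFOLLOW makes the open fail if path is a symlink. */
    int fd = open(path, O_WRONLY | O_NOFOLLOW);
    if (fd < 0) exit(1);

    /* Check the object we actually opened (the fd), not the name. */
    struct stat st;
    if (fstat(fd, &st) != 0 || !S_ISREG(st.st_mode) || st.st_uid != getuid()) {
        close(fd);
        exit(1);
    }
    return fd;   /* safe to write(fd, buffer, ...) now */
}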
Types of Flaws – Design Flaws
• Error handling – e.g., failure in insecure states
• Transitive trust issues (typical of DAC)
• Unprotected data channels
• Broken or missing access control mechanisms
• Lack of audit logging
• Concurrency issues (timing and ordering)
383
What if No Reference Monitor?
• Entire system (“millions of lines of code”)
vulnerable
– Buffer overflow in GUI is as serious as bug in access
control mechanism
• Potentially lots of ways to tamper with or bypass
security mechanisms
• No way to find the numerous flaws in all of that code
• Reference monitor is “small enough to be
verifiable”
– Helps bound testing
384
Limits of Testing
• “Testing can prove the presence of errors, but
not their absence” – Edsger W Dijkstra
• How much testing is enough?
– Undecidable
– Never “enough” because never know if found all bugs
– But resources, including time, are finite
• Testing is usually not precise enough to catch
subtle bugs
• Subversion? Find a trap-door? Forget about it.
• Must prioritize
385
Prioritizing Risks and Tests
• Create security misuse cases
– I.e., threat assessment
• Identify security requirements
– Use identified threats with policy to derive
requirements
• Perform architectural risk analysis
– Where will I get the biggest bang for my buck?
– Trust boundaries are very interesting here
• Build risk-based security test plans
– Test the “riskiest” things
• Perform the (now limited, but focused) testing
386
Misuse Cases
• “Negative scenarios”
– I.e., threat modeling
• Define what an attacker would want
• Assume level of attacker abilities/skill
– Helps determine what steps are possible and risk
– (Chance of your assumption being correct is ≈0, but
still…)
• Imagine series of steps an attacker could take
– Attack-defense tree or requires/provides model
– Or “Unified Modeling Language” (UML)
• Identify potential weak spots and mitigations
387
Static Testing
• Analyze code (and documentation)
– Usually only source code, but sometimes object
– Program not executed
– Testing abstraction of the system
• Code analysis, inspection, reviews,
walkthroughs
– Human techniques often called “code review”
• Automated static testing tools
– Checks against coding standard (e.g., formatting)
– Coding flaws
– Potentially malicious code
– May also refer to formal proof of code correctness
388
“Lint-like” Tools
• Finds “suspicious” software constructs
– E.g., variables being used before being initialized
– Divide by zero
– Constant conditions
– Calculations outside the range of a type
• Language-dependent
• Can check correspondence to style guidelines
389
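For instance, a lint-style checker would flag each of the suspicious constructs in the toy function below (a hedged illustration; exact diagnostics differ by tool):

#include <stdio.h>

int lint_bait(void) {
    int x;                       /* flagged: used before initialization */
    int y = 100000 * 100000;     /* flagged: outside the range of int */
    if (1)                       /* flagged: constant condition */
        printf("%d\n", x);
    return y;
}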
Limitations of Static Testing
• Lots of false positives and false negatives
• Automated tools seem to make it easy, but it
takes experience and training to use effectively
• Misses many types of flaws
• Won’t find vulnerabilities due to run-time
environment
390
Dynamic Testing
• Test running software in “real” environment
– Contrast with static testing
• Techniques
– Simulation – assess behavior/performance
– Error seeding – bad input, see what happens
• Use extremes of valid/invalid input
• Incorrect and unexpected input sequences
• Altered timing
– Performance monitoring – e.g., real-time memory use
– Stress tests – e.g., abnormally high workloads
391
Fuzzing
• Tool used by both security testers and attackers
• Form of dynamic testing, usually automated
• Provide many invalid, unexpected, often random
inputs to software
– Extreme limits, or beyond limits, of value, size, type, ...
– Can test command line, GUI, config, protocol, format, file
contents, …
• Observe behavior – if unexpected result, a flaw!
– Crashes or other bad exception handling
– Violations of program state (assertions)
– Memory leaks
• Flaws could conceivably be exploited
• Fix, and re-test
392
Fuzzing Methods
• Mutation-based
– Mutate existing test data, e.g., by flipping bits
• Generation-based
– Generate test data based on models of input
– Use a specification
• Black box – no reference to code
– Useful for testing proprietary systems
• White (or gray) box – use code as a guide of
what to test
• Recursive – enumerate all possible inputs
• Replacive – use only specific values
393
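A toy mutation-based fuzzer in C, as a sketch of the idea: flip a few random bits in a seed input, feed each mutant to a target, and watch for abnormal exits. The "./target" program, the seed, and the counts are hypothetical placeholders.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Seed test case (a real fuzzer would read a corpus of files). */
    char seed[] = "GET /index.html HTTP/1.0\r\n\r\n";
    srand(42);

    for (int i = 0; i < 1000; i++) {
        char mutant[sizeof(seed)];
        memcpy(mutant, seed, sizeof(seed));

        /* Mutation: flip a few random bits. */
        for (int f = 0; f < 4; f++) {
            size_t pos = (size_t)rand() % (sizeof(seed) - 1);
            mutant[pos] ^= (char)(1 << (rand() % 8));
        }

        /* Feed the mutant to the target on stdin; an abnormal or
         * nonzero exit status marks an input worth investigating. */
        FILE *p = popen("./target", "w");    /* hypothetical target */
        if (!p) return 1;
        fwrite(mutant, 1, sizeof(seed) - 1, p);
        if (pclose(p) != 0)
            fprintf(stderr, "iteration %d: target exited abnormally\n", i);
    }
    return 0;
}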
Limits of Fuzzing
• Random sample of behavior
• Usually finds only simple flaws
• Best for rough measure of software quality
– If find lots of problems, better re-work the code
• Also good for regression testing, or comparing
versions
• Demonstrates that program handles exceptions
• Not a comprehensive bug-finding tool
• Not a proof that software is correct
394
Limits to Dynamic Testing
• From outside, cannot test all software paths
• Cannot even test all hardware faults
• May not find rare events (e.g., due to timing)
395
Vulnerability Scanning
• Another tool used by attackers and defenders
alike
• Automated
• Look for flaws using database of known flaws
– Contrast with fuzzing
• As comprehensive as database of vulnerabilities
is
• Different types of vulnerability scanners
(example):
– Port scanner (NMAP)
– Network vulnerability scanner (Nessus)
– Web application scanner (Nikto)
– Database (Scuba)
– Host security audit (Lynis)
396
Vulnerability Scanning Methods
• Passive – probe without any changes
– E.g., Check version and configuration, “rattle doors”
– Do nothing that might crash the system
• Active – attempt to see if actually vulnerable
– Run exploits and monitor results
– Might disrupt, crash, or even damage target
– Always get explicit permission (signed agreement)
before running active scans
397
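A minimal sketch in C of the passive “check versions” approach: connect, read the service banner, and match it against known-vulnerable version strings. The target address, port, and version string are made-up examples; a real scanner consults a vulnerability database.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0) return 1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(22);                         /* e.g., SSH */
    inet_pton(AF_INET, "192.0.2.10", &addr.sin_addr);  /* example target */
    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) != 0) return 1;

    char banner[256] = {0};
    ssize_t n = read(s, banner, sizeof(banner) - 1);   /* passive: just read */
    close(s);
    if (n <= 0) return 1;

    /* Hypothetical match against one known-vulnerable version. */
    if (strstr(banner, "OpenSSH_6.6"))
        printf("potentially vulnerable service: %s\n", banner);
    return 0;
}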
Limits of Vulnerability Scanning
• Passive scanning only looks for known
vulnerabilities
– Or potential vulnerabilities (e.g., based on
configuration)
• Passive scanning often simply checks versions
– then reports known vulnerabilities in those versions
– and encourages updating
• Active scanning can crash or damage systems
• Active scanning potentially requires a lot of
“hand-holding”
– Due to unpredictable system behavior
– E.g., system auto-log out
398
Penetration Testing
• Actual attacks on a system carried out with the goal of
finding flaws
– Called a “test”, when used by defenders
– Called an “attack” when used by attackers
• Human, not automated
• Usually goal driven – stop when achieve
• Step-wise (like requires/provides)
– When find one way to achieve a step, go on to next step
• Identifies vulnerabilities that may be impossible for
automated scanning to detect
• Shows how different low-risk vulns can be combined into
successful exploit
• Same precautions as for other forms of active testing
– Explicit permission; don’t interfere with production
399
Flaw-Hypothesis Methodology
• Five steps:
1. Information gathering
– Become familiar with the system’s design, implementation, operating procedures, and use
2. Flaw hypothesis
– Think of flaws the system might have
3. Flaw testing
– Test for exploitable flaws
4. Flaw generalization
– Generalize vulnerabilities that can be exploited
5. Flaw elimination (often skipped)
400
Limits of Penetration Testing
• Informal, non-rigorous, semi-systematic
– Depends on skill of testers
• Not comprehensive
– Proves at least one path, not all
– When find one way to achieve a step, go on to next step
• Does not prove lack of path if unsuccessful
• But, performed by experts
– Who are not the system developers
– Who think like attackers
• Tests developer and operator assumptions
– Helps locate shortcomings in design and implementation
401
Overview of Review for Final Exam
• Review major areas covered in course since
midterm:
– Testing
– Secure Operation
– Covert Channels
– Formal methods
402
Secure Operation
• Secure distribution
• Secure installation and configuration
• Patch management
• System audit and integrity monitoring
• Secure disposal
403
Secure Distribution
• Problem: Integrity of distributed software
– How can you “trust” distributed software?
– Watch out for subversion!
• Is this the actual program from the vendor?
• … or did someone substitute or tamper with it?
404
Checksums
• Compare hashes on downloaded files with
published value (e.g., on developer’s web site)
– If values match, good to go
– If values do not match, don’t install!
• Often download site different from publisher
– So checksum is control on distribution
• Use good hash algorithms
– MD5 – compromised (can reliably make collisions)
– SHA-1 – no demonstration of compromise, but feared
– SHA-256 (aka SHA-2) still OK
405
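A sketch of that comparison in C using OpenSSL’s EVP digest API (assumed available); the file name and the published digest below are placeholders for the real download and the value posted on the developer’s site.

#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Placeholder for the SHA-256 published on the developer's site. */
    const char *published =
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08";

    FILE *f = fopen("download.tar.gz", "rb");   /* hypothetical file */
    if (!f) return 1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        EVP_DigestUpdate(ctx, buf, n);
    fclose(f);

    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdlen = 0;
    EVP_DigestFinal_ex(ctx, md, &mdlen);
    EVP_MD_CTX_free(ctx);

    char hex[2 * EVP_MAX_MD_SIZE + 1];          /* render as lowercase hex */
    for (unsigned int i = 0; i < mdlen; i++)
        sprintf(hex + 2 * i, "%02x", md[i]);

    if (strcmp(hex, published) == 0) { puts("checksum OK: install"); return 0; }
    puts("checksum MISMATCH: do not install!");
    return 1;
}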
Cryptographic Signing
• Solves checksum reliability problems?
• Typically uses PKI cryptography
• Signing algorithm:
– Calculate checksum (hash) on object
– Encrypt checksum using signer’s private key
– Attach seal to object (along with certificate of signer)
• Verification algorithm:
– Calculate checksum on object
– Decrypt encrypted checksum using signers’ public
key
– Compare calculated and decrypted checksums
406
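The verification algorithm, sketched in C with OpenSSL’s EVP interface (a sketch under assumptions: loading the signer’s public key from the certificate and obtaining the signature bytes are elided, and modern APIs hash and verify in one flow rather than decrypting the checksum explicitly):

#include <openssl/evp.h>

/* Returns 1 iff sig is a valid signature over data under pubkey. */
int verify_seal(EVP_PKEY *pubkey,
                const unsigned char *data, size_t datalen,
                const unsigned char *sig, size_t siglen) {
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = 0;
    /* Checksum the object and check it against the attached seal. */
    if (EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pubkey) == 1 &&
        EVP_DigestVerifyUpdate(ctx, data, datalen) == 1 &&
        EVP_DigestVerifyFinal(ctx, sig, siglen) == 1)
        ok = 1;
    EVP_MD_CTX_free(ctx);
    return ok;   /* the caller must still validate the signer's certificate */
}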
Cryptographic Signing
Source: Wikipedia
407
Cryptographic Signing
• Solves checksum reliability problems?
• Typically uses public/private key cryptography
• Signing algorithm:
– Calculate checksum (hash) on object
– Encrypt checksum using signer’s private key
– Attach seal to object (along with certificate of signer)
• Verification algorithm:
– Calculate checksum on object
– Decrypt encrypted checksum using signers’ public
key
– Compare calculated and decrypted checksums
– Must also check signer’s certificate
408
Do You Trust the Certificate?
• You trust a source because the calculated
checksum matches the checksum in the seal
• Certificate contains signer’s public key
• You use public key to decrypt seal
• How do you know that signer is trustworthy?
• Certificates (like for SSL), testify as to signer
identity
• Based on credibility of certificate authority
• But what if fake certificate?
– E.g., Stuxnet
409
Secure Distribution in High-Assurance System
• E.g., GTNP FER (page 142)
– Based on cryptographic seals and data encryption. All
kernel segments are encrypted and sealed.
Formatting information on distributed volumes is
sealed but not encrypted. Keys to check seals and
decrypt are shipped separately [i.e., sent out of band; no
certification authority].
– Hardware distribution through authenticator for each
component, implemented as cryptographic seal of
unique identifier of component, such as serial number
of a chip or checksum on contents of a PROM
[Physical HW seal and checked by SW tool]
410
More on GTNP Secure Distribution
• Physical seal on HW to detect tampering
• Install disk checks HW “root of trust” during
install
– Will only install on specific system
• System Integrity checks at run-time
• Multi-stage boot:
– PROM checks checksum of boot loader
– Boot loader checks checksum of kernel
411
Secure Installation and Configuration
• Evaluated, high-assurance systems come with
documentation and tools for secure
configuration
• Lower-assurance systems have less guidance
• Usually informal checklists
– Benchmarks
– Security Technical Implementation Guides (STIGs)
• Based on “best practices”
– E.g., “change default admin password”
– No formal assessment of effectiveness
• Not based on security policy model
412
STIGS
• Security Technical Implementation Guides
(STIGs)
• E.g., https://web.nvd.nist.gov/view/ncp/repository
– (Need SCAP tool to read them)
• Based on “best practices”
• Not based on security policy model
413
Configuration Management Systems
• Centralized tools and databases to manage
configs
• Ideally:
– Complete list of systems
– Complete list of software
– Complete list of versions
• Logs status and changes
• Can automatically push out patches/changes
• Can detect unauthorized changes
• E.g., Windows group policy management
• For more info: https://www.sei.cmu.edu/productlines/frame_report/config.man.htm
414
Certification and Accreditation
• Evaluated systems are certified
– Under specific environmental criteria
– (e.g., for TCSEC, criteria listed in Trusted Facility
Manual)
• But environmental criteria must be satisfied for
accreditation
– E.g., security only under assumption that network is
physically isolated
– If instead use public Internet, cannot be accredited
415
Operational Environment and Change
• Must “configure” environment
• Not enough to correctly install and configure
a system if the environment is out of spec
• What if the system and environment start out
correctly configured, but then change?
• Just as bad!
416
Maintenance
• System is installed and
configured correctly
• Environment satisfies
requirements
• Will they stay that way?
• Maintenance needs to
1. Preserve known, secure
configuration
2. Permit necessary configuration
changes
• E.g., patching
417
Patch Management
• All organizations use low-assurance systems
• Low-assurance systems have lots of bugs
• A “patch” is a security update to fix vulnerabilities
– Maybe to fix bugs introduced in last patch
• Constant “penetrate-and-patch” cycle
– Must constantly acquire, test, and install patches
• Patch management:
– Strategy and process of determining
• what patches should be applied,
• to which programs and systems, and
• when
418
Risk of Not Applying Patches
• Ideally, install patches ASAP
• Risk goes way up when patches are not installed
– System then has known vulnerabilities
– “Assurance” of system is immediately very low
– Delay is dangerous – live exploits often within hours
419
Patch Management Tradeoffs
• Delay means risk
• But patches may break applications
– Custom applications or old, purchased applications
• Patches may even break the system
– Microsoft, for example, “recalls” patches
– (Microsoft Recalls Another Windows 7 Update Over
Critical Errors http://www.techlicious.com/blog/faulty-windows-7-updatekb3004394/)
• Must balance the two risks
– Sad fact: Security often loses in these battles
– Must find other mitigating controls
420
Patch Testing and Distribution
• Know what patches are available
• Know what systems require patching
• Test patches before installing
– On non-production systems
– Test as completely as possible with operational
environ.
• Distribute using signed checksum
– Watch out for subversion, even inside the
organization
421
Preserve Known, Secure Configuration
• Two steps:
1. Document that installation and initial
configuration are correct
– Don’t forget environment
– Update documentation as necessary after patching
2. Periodically check that nothing has changed in
system (or environment)
–
Compare results of check to documentation
422
System Audit and Integrity Monitoring
• Static audit: scan systems and note
discrepancies
– Missing patches
– Mis-configurations
– Changed, added, or deleted system files
– Changed, added, or deleted applications
– Added or deleted systems!
• Dynamic system integrity checking
– Same as static, but continuous
• Example: Tripwire (http://www.tripwire.com/)
423
Tripwire
• Used to create checksums of
– user data,
– executable programs,
– configuration data,
– authorization data, and
– operating system files
• Saves database
• Periodically calculates new checksums
• Compares to database to detect unauthorized or
unexpected changes
424
Continuous Monitoring
• Static audit is good, but systems may be out of
compliance almost immediately
• Goal: Real-time detection and mediation
– Sad reality: minutes to days to detect, maybe years to
resolve
• Need to automate monitoring
• See, e.g.,
– SANS Whitepaper:
http://www.sans.org/reading-room/whitepapers/analyst/continuous-monitoring-is-needed35030
– NIST 800-137 Information Security Continuous
Monitoring (ISCM) for Federal Information Systems
and Organizations
http://csrc.nist.gov/publications/nistpubs/800-137/SP800-137-Final.pdf
425
Inventory of Systems and Software
• IT operations in constant state of flux
– New services, legacy hardware and software, failure
to follow procedures and document changes
• Make a list of authorized systems, software, and
versions (and patches)
– Create baseline
– Discovery using administrative efforts, active and
passive technical efforts
• Regularly scheduled scans to look for deviations
– Continuously update as new approved items added or
items deleted
426
Other things to Monitor
• System configurations
• Network traffic
• Logs
• Vulnerabilities
• Users
• To manage workload:
– Determine key assets
– Prioritize alerts
427
Secure Disposal Requires Attention
• Delete sensitive data on systems before
disposal
– Not always obvious where media is
• E.g., copy machines have hard drives
http://www.cbsnews.com/news/digital-photocopiers-loaded-with-secrets/
• E.g., mobile phones not properly erased
http://www.theguardian.com/technology/2010/oct/12/mobile-phones-personal-data
– 50% of second-hand mobile phones
contain personal data
428
Secure Disposal
• Use proper disposal techniques
– E.g., shred drives or other storage
media for best results
– Degaussing of magnetic media not enough
– SSDs even harder to erase
429
Overview of Review for Final Exam
• Review major areas covered in course since
midterm:
– Testing
– Secure Operation
– Covert Channels
– Formal methods
430
Covert Channels – Better Definition
• Given a nondiscretionary (mandatory) security
policy model M and its interpretation I(M) in an
operating system, any potential communication
between two subjects I(Sh) and I(Si) of I(M) is
covert if and only if any communication between
the corresponding subjects Sh and Si of the
model M is illegal in M.
– Source: C.-R. Tsai, V.D. Gligor, and C.S. Chandersekaran, “A formal
method for the identification of covert storage channel in source
code”, 1990
431
Observations
• Covert channels are irrelevant for DAC policies
because Trojan Horse can leak information via
valid system calls and system can’t tell what is
illegitimate
– Covert channel analysis only useful for trusted
systems
• A system can correctly implement (interpret) a
mandatory security policy model (like BLP) but
still not be secure due to covert channels (violates
metapolicy)
– E.g., protects access to objects but not to shared
resources
• Covert channels apply to integrity as much as secrecy
432
Two Types of Covert Channels (TCSEC)
• Storage channel “involves the direct or indirect
writing of a storage location by one process [i.e.,
a subject of I(M)] and the direct or indirect
reading of the storage location by another
process.”
• Timing channel involves a process that “signals
information to another by modulating its own use
of system resources (e.g., CPU time) in such a
way that this manipulation affects the real
response time observed by the second process.”
– Source: TCSEC
433
Some Attributes Used in Covert Channels
• Timing: amount of time a computation took
• Implicit: control path a program takes
• Termination: does a computation terminate?
• Probability: distribution of system events
• Resource exhaustion: is some resource depleted?
• Power: how much energy is consumed?
• Any time SL can detect varying results that
depend on actions by SH, that could form a
covert channel
434
Side Channel vs. Covert Channel
• Covert channel
– Intentional use of available channel
– Intention to conceal its existence
• Side channel
– Unintentional information leakage due to
characteristics of the system operation
– E.g., malicious VM gathering information about
another VM on the same HW host
• Share CPU, RAM, cache, etc.
• This really can happen:
Yinqian Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2012. Cross-VM side channels
and their use to extract private keys. In Proc. of the 2012 ACM Conference on Computer and
Communications Security (CCS '12). ACM, New York, NY, USA, 305-316.
DOI=10.1145/2382196.2382230
435
Covert Channels in the Real World
• Cloud IaaS covert channel
– Side channel on the previous slide combined with
encoding technique (anti-noise) and synchronization
– Trojan horse on sending VM can signal another VM
• Trojan horse in your network stealthily leaking
data
– Hidden in fields of packet headers
– Hidden in timing patterns of packets
436
Note the Implicit Mandatory Policy
• May enforce only DAC inside the system
• But still have mandatory policy with two
clearances:
– Inside, “us”
– Outside, “them”
• Covert channel exfiltrates data from “us” to
“them”
• So covert channels of interest for security even
in systems that use DAC policy internally
437
Covert Storage Channels Conditions
Several conditions must hold for
there to be a covert storage channel:
1. Both sender and receiver must have
access to some attribute of a shared
object
2. The sender must be able to modify
the attribute
3. The receiver must be able to observe
(reference) that attribute
4. Must have a mechanism for initiating
both processes and sequencing their
accesses to the shared resource
438
Structure of a Covert Channel
• Sender and receiver must synchronize
• Each must signal the other that it has read or
written the data
• In storage channels, 3 variables, abstractly:
– Data variable used to carry data
– Sender-receiver synchronization variable (ready)
– Receiver-sender synchronization variable (finished)
• Write-up is allowed, so may be legitimate data flow
• In timing channels, synchronization variables
replaced by observations of a time reference
439
Example of Synchronization
• Processes H, L not allowed to communicate
– But they share a file system
• Communications protocol:
– H sends a bit by creating a file called 0 or 1, then a
second file called send
• H waits until send is deleted before repeating to send another
bit
– L waits until file send exists, then looks for file 0 or 1;
whichever exists is the bit
• L then deletes 0, 1, and send and waits until send is
recreated before repeating to read another bit
• Creation and deletion of send are the
synchronization variables
440
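A sketch of this protocol in C (illustrative only: it assumes both processes see the same directory and that the system permits exactly the create/lookup/delete operations the protocol relies on, which is precisely the shared resource the channel exploits):

#include <fcntl.h>
#include <unistd.h>

/* H: signal one bit by creating file "0" or "1", then file "send". */
static void send_bit(int bit) {
    close(open(bit ? "1" : "0", O_CREAT | O_WRONLY, 0666));
    close(open("send", O_CREAT | O_WRONLY, 0666));
    while (access("send", F_OK) == 0)
        usleep(1000);            /* wait until L consumes the bit */
}

/* L: wait for "send", read the bit, then delete all three files. */
static int recv_bit(void) {
    while (access("send", F_OK) != 0)
        usleep(1000);            /* wait until H signals */
    int bit = (access("1", F_OK) == 0);
    unlink("0"); unlink("1"); unlink("send");
    return bit;
}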
HW 3 Synchronization Solution
• READ(S, O): if object O exists and LS ≥ LO, then return its current value; otherwise, return a zero
• WRITE(S, O, V): if object O exists and LS ≤ LO, change its value to V; otherwise, do nothing
• CREATE(S, O): if no object with name O exists anywhere on the system, create a new object O at level LS; otherwise, do nothing
• DESTROY(S, O): if an object with name O exists and LS ≤ LO, destroy it; otherwise, do nothing
• How to synchronize the channel?
• Two other files, FS, FR
441
HW 3: SH and SL Synchronization
Step-by-step (SH actions, SL actions, notes in parentheses):
1. SH: Create(FR); Write(FR, “nd”) (setup). SL: Create(FS); Write(FS, “nosig”); Read(FS) (Read returns 0 if L(FS) = H).
2. SH: [F0 stuff] (SH starts when it has data). SL: If “nosig”, Delete(FS); Wait; Goto 1 (if not 0, SH has not signaled yet; try again).
3. SH: Create(FS); Write(FS, “sig”); Read(FS) (SH creating FS is the signal to SL; the Write fails if L(FS) = L). SL: [F0 stuff] (if 0, then data ready).
4. SH: If not “sig”, Wait; Goto 3 (if the Write failed, try again). SL: Delete(FS); Write(FR, “d”); Goto 1 (reset sender synch, signal SH).
5. SH: Read(FR).
6. SH: If not “d”, Wait; Goto 5 (block until receiver signals).
7. SH: Write(FR, “nd”); Goto 2 (reset receiver synch, continue).
442
Covert Channel Characteristics
• Existence: Is a channel present?
• Bandwidth: Amount of information that can be
transmitted (bits per second)
• Noisiness: How much loss or distortion in the
channel?
443
Noisy vs. Noiseless Channels
• Noiseless: covert channel uses
resource available only to sender
and receiver
• Noisy: covert channel uses resource
available to others as well as to
sender and receiver
– Extraneous information is “noise”
– Receiver must filter noise to be able to
read sender’s “signal”
444
Objectives of Covert Channel Analysis
1. Detect all covert channels
– Not generally possible
– Find as many as possible
2. Eliminate them
– By modifying the system implementation
– Also may be impossible, or impractical
3. Reduce bandwidth of remaining channels
– E.g., by introducing noise or slowing the time reference
4. Monitor any that still exceed the acceptable
bandwidth threshold
– Look for patterns that indicate channel is being used
– I.e., intrusion detection
445
Noise and Filters
• If can’t eliminate channel, try to
reduce bandwidth by
introducing noise
• But filters and encoding can be
surprisingly effective
– Need a lot of carefully designed
noise to degrade channel
bandwidth
– Designers often get this wrong
• And added noise may
significantly reduce system
performance
446
HW 3 Solution: SRMM Example: Unix
• Unix files have these attributes:
– Existence, size, owner, group, access permissions
(others?)
• Unix file operations to create, delete, open, read,
write, chmod operations (others?)
• Homework: Fill in the shared resource matrix
– Differs by Unix version and settings
– Here’s one row, for Linux:
existence: read Ø; write Ø (R if noclobber is set); delete M; create RM; open R; chmod Ø
447
Shared Resource Matrix Methodology Summary
• SRMM comprehensive but incomplete
– How to identify shared resources?
– What operations access them and how?
• Incompleteness a benefit
– Allows use at different stages of software engineering
life cycle
• Incompleteness a problem
– Makes use of methodology sensitive to particular
stage of software development
Source: Matt Bishop, Computer Security: Art and Science, ©2002–2004, slide #17-448
Techniques for Mitigation of Covert
Channels
1. Require processes to state resource needs in
advance
– Resources stay allocated for life of process
– No signal possible
2. Devote uniform resources to each process
– No signal possible
3. Inject randomness into allocation, use of
resources
– Noise overwhelms signal
• All waste resources
• Policy question: Is the inefficiency preferable to
the covert channel?
449
Overview of Review for Final Exam
• Review major areas covered in course since
midterm:
– Testing
– Secure Operation
– Covert Channels
– Formal methods
450
Formal Methods
• Formal means mathematical
• Tools and methods for reasoning about
correctness
– Correctness means system design satisfies some
properties
– Security, but also safety and other types of properties
• Useful way to think completely, precisely, and
unambiguously about the system
– Help delimit boundary between system and
environment
– Characterize system behavior under different
conditions
– Identify assumptions
– Identify necessary invariant properties
451
Informal vs. Formal Specifications
• Informal
– Human language, descriptive
– E.g., “The value of variable x will always be less than 5”
• (Always? What about before the system is (re)initialized?)
– Often vague, ambiguous, self-contradictory, incomplete, imprecise, and doesn’t handle abstractions well
• All of which can easily lead to unknown flaws
– But, relatively easy to write
• Formal
– Mathematical
– E.g., ∀t. ∀x. (t ≥ x ∧ sys_init(x)) ⇒ x(t) < 5
– Easily handles abstractions, concise, non-ambiguous,
precise, complete, etc.
– But, requires lots of training and experience to do right
452
Formal vs. “Informal” Verification
• “Informal” verification:
– Testing of various sorts
• Finite, can never can be complete, only demonstrates
cases
• Formal verification:
– Application of formal methods to “prove” a design
satisfies some requirements (properties)
• A.k.a. “demonstrating correctness”
– Can “prove” a system is secure
• I.e., that the system design satisfies some properties
that are the definition of “security” for the system
• I.e., that a system satisfies the security policy
453
Steps in Security Formal Verification
1. Develop FSPM (e.g., BLP)
2. Develop Formal Top-Level Spec (FTLS)
– Contrast with Descriptive Top-Level Specification
(DTLS)
• Natural language, not mathematical, specification
3. Proof (formal or informal) that FTLS satisfies FSPM
4. (Possibly intermediate specs and proofs)
– At different levels of abstraction
5. Show implementation “corresponds” to FTLS
– Code proof beyond state of the art (but see
https://sel4.systems/)
– Generally informal arguments
– Must show how every part of code fits
454
Attributes of Formal Specifications
• States what system does, but not how
– I.e., like module interfaces from earlier this semester
– Module interfaces are (probably informal)
specifications
• Precise and complete definition of effects
– Effects on system state
– Results returned to callers
– All side-effects, if any
• Not the details of how
– Not how the data is stored, etc.
– I.e., abstraction
• Formal specification language is not code
455
Formal Top-Level Specification
• Represents interface of the system
– In terms of exceptions, error messages, and effects
– Must be shown to accurately reflect TCB interface
– Include HW/FW operations, if they affect state at interface
– TCB “instruction set” consists of HW instructions accessible at interface and TCB calls
• Describe external behavior of the system
– precisely,
– unambiguously, and
– in a way amenable to computer processing for
analysis
– Without describing or constraining implementation
456
Observation
• Many ways to specify the same system
• Not every way is equally good
• If pick less good way, may create lots of
complexity
• E.g., consider how to specify a FIFO queue
1. Infinite array with index of current head and tail
• Not very abstract – specifies “how”
2. Simple, recursive add and remove functions and axioms
• E.g., ∀x. remove(add(x, EMPTY)) = x
• The first is tedious to reason with
– Lots of “overhead” to keep track of indexes
• The second is easy and highly automatable (see the sketch below)
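A rough executable reading of the second style in Python — a sketch assuming EMPTY is the empty tuple, add enqueues at the tail, and remove returns (head, rest); the names are mine, not from any particular spec language:

EMPTY = ()

def add(x, q):
    return q + (x,)        # enqueue x at the tail

def remove(q):
    return q[0], q[1:]     # dequeue from the head

# The axiom  ∀x. remove(add(x, EMPTY)) = x  as a spot check:
assert remove(add(42, EMPTY))[0] == 42
# And adding at the tail never changes which element is at the head:
q = add(2, add(1, EMPTY))
assert remove(q) == (1, (2,))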
457
HW 4 Solution
• Write a formal spec for seating in an airplane:
• An airplane has 100 seats (1..100)
• Every passenger gets one seat
• Any seat with a passenger holds only one
passenger
• The state of a plane P is a function [S -> N]
– Maps a seat number to a passenger name
• Two functions: assign_seat and deassign_seat
• Define the functions
• Show some lemmas that demonstrate correctness
458
Observations
• Specification tells what happens, not how
• State is the seats on an airplane
• Requirements can be lemmas or invariants
– Every passenger gets one seat
– Any seat with a passenger holds only one passenger
• Define axioms to add and delete passengers
– Add and delete change the state of the system
• Similar to BLP model
– Had to prove every transition preserved SSC and the *-property for the state of the system
459
One Possible HW 4 Solution
• Types and Constants:
– N : type (of unique passenger names)
– n0 : N (represents an empty seat)
– S : type = {s : 1 ≤ s ∧ s ≤ 100} (of seat numbers)
– A : type = [S -> N] (of airplane seating functions)
– UA : type = {a : A | ∀(x, y : S): x ≠ y ∧ a(x) ≠ n0 ⇒ a(x) ≠ a(y)}
• A passenger can only have one seat, stated as an invariant (which I still have to prove)
• Variables:
– nm : var N (a passenger)
– ap : var A (an airplane function)
– st : var S (a seat number)
460
Support Axioms
• Does a seat already have a person?
– Define predicate seatOccupied?:[A x S -> bool]
– Axiom: seatOccupied?(ap,st) iff ap(st) ≠ n0
• Does a person already have a seat?
– Define predicate hasSeat?:[A x N -> bool]
– Axiom: hasSeat?(ap,nm) iff ∃ st: ap(st) = nm
461
Main Axioms
• assignSeat : [UA x S x N -> UA]
• Axiom: assignSeat(ap,st,nm) =
if ¬seatOccupied?(ap,st) and ¬hasSeat?(ap,nm)
then ap with [ap(st) = nm]
else ap
• deassignSeat : [UA x S -> UA]
• Axiom: deassignSeat(ap,st) = ap with [ap(st) = n0]
• (An executable sketch of these axioms follows below.)
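A minimal executable reading of the HW 4 axioms in Python — a sketch assuming the seating function is a dict from seat to name and None plays the role of n0; this models the spec, it is not the spec language:

N0 = None                                    # n0: the "empty seat" name
SEATS = range(1, 101)                        # S: seat numbers 1..100

def new_plane():
    return {s: N0 for s in SEATS}            # every seat starts empty

def seat_occupied(ap, st):
    return ap[st] != N0                      # seatOccupied?(ap, st)

def has_seat(ap, nm):
    return any(v == nm for v in ap.values()) # hasSeat?: ∃st. ap(st) = nm

def assign_seat(ap, st, nm):
    # Guarded exactly as in the axiom: seat free and passenger unseated
    if not seat_occupied(ap, st) and not has_seat(ap, nm):
        return {**ap, st: nm}                # ap with [ap(st) = nm]
    return ap

def deassign_seat(ap, st):
    return {**ap, st: N0}                    # ap with [ap(st) = n0]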
462
Proof Obligations
• Need to prove the invariant for all state transitions:
∀(x, y : S): x ≠ y ∧ a(x) ≠ n0 ⇒ a(x) ≠ a(y)
– (each passenger gets only one seat)
• deassignSeat obviously preserves the invariant
• Why does assignSeat preserve the invariant?
– The ¬hasSeat? guard ensures the passenger is not already seated
• Now to prove the second requirement:
– Any seat with a passenger holds only one passenger
• But this is “built into” the airplane function that maps seats to names
– A function is a set of ordered pairs in which each x-element has only one y-element associated with it
• (A brute-force check of both appears below.)
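And a brute-force check of the one-seat-per-passenger invariant over a small plane, assuming the Python model sketched above — an executable stand-in for the manual proof obligation, not a proof:

def invariant(ap):
    names = [n for n in ap.values() if n is not None]
    return len(names) == len(set(names))     # no name in two seats

plane = {1: "alice", 2: None, 3: None}
assert invariant(plane)
for st in plane:                             # try every transition
    for nm in ("alice", "bob"):
        assert invariant(assign_seat(plane, st, nm))
    assert invariant(deassign_seat(plane, st))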
463
Back to the Review
464
Formal Verification is Not Enough
• Formal verification complements, but does not
replace testing (informal verification)
• Requires abstraction which
– May leave out important details (stuff missing)
– May make assumptions that code does not support
(extra stuff)
• Even if “proven correct”, may still not be correct
• “Beware of bugs in the above code; I have only
proved it correct, not tried it.” -Knuth
465
Millen: PDP 11/45 Proof of Correctness
• Proof of correctness for PDP 11/45 security
kernel
• Correctness defined as proper implementation
of security policy model (BLP)
• Security policy model defined as set of axioms
– Axioms are propositions from which properties are
derived
– E.g., in BLP, SSC and *-property
• Proof is that all operations available at the
interface of the system preserve the axioms
• Also considered covert storage channels
– Method did not address timing channels
466
Millen: PDP 11/45 Proof of Correctness
• Security policy model defined as set of axioms
– Simple security condition
• If a subject has “read” access to an object, level of
subject dominates level of object
– *-property
• If a subject has “read” access to one object and “write”
access to a second object, level of second object
dominates level of first object
– Tranquility principle for object levels
• Level of active object will not be changed
– Exclusion of read access to inactive objects
– Rewriting (scrubbing) of objects that become active
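A toy executable reading of the first two axioms in Python, assuming integer levels where larger dominates smaller and an access set of (subject, object, mode) triples; purely illustrative — Millen's proof was over the real kernel interface, not a model like this:

# Simple security condition: "read" implies level(subject) >= level(object)
def ssc_ok(accesses, level):
    return all(level[s] >= level[o]
               for s, o, m in accesses if m == "read")

# *-property: if s reads o1 and writes o2, then level(o2) >= level(o1)
def star_ok(accesses, level):
    reads  = {(s, o) for s, o, m in accesses if m == "read"}
    writes = {(s, o) for s, o, m in accesses if m == "write"}
    return all(level[o2] >= level[o1]
               for s1, o1 in reads
               for s2, o2 in writes if s1 == s2)

level = {"s": 1, "lo": 0, "hi": 2}
assert ssc_ok({("s", "lo", "read")}, level)                             # read-down is fine
assert not star_ok({("s", "hi", "read"), ("s", "lo", "write")}, level)  # leak caught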
467
Layers of Specification and Proof
• Four stages
• Each stage more detailed and closer to machine
implementation than the one before
1. FSPM (BLP)
2. FTLS – The interface of the system
– Includes OS calls and PDP 11/45 instructions available outside the kernel
– Semantics of the language must be well-understood
3. Algorithmic specification – High-level code that
represents machine language
4. Machine itself: Running code and HW
468
Why Four Proof Stages?
• Simplify proof work
• Big jump from machine to FSPM
– FSPM has subjects, objects, *-property, …
– Machine has code and hardware
• Intermediate layers are closer to each other
• First prove FTLS is valid interpretation of FSPM
• Then further proofs only need to show that lower
stages implement FTLS
– Lower-level proofs don’t need abstractions of subjects
and objects and *-property
469
GEMSOS A1 Formal Verification
Process
• FSPM, FTLS written in InaJo specification
language
• BLP Basic Security Theorem (BST) proven using the FDM theorem prover
– FSPM was not “pure” BLP, but the GEMSOS
interpretation of BLP
• Conformance of FTLS to model also proven
• FTLS also used for code correspondence and
covert storage channel analysis
470
Value of Formal Verification Process
• “Provided formulative and corrective guidance to
the TCB design and implementation”
• I.e., just going through the process helped
prevent and fix errors in the design and
implementation
• Required designers/developers to use clean
designs
– So could be more easily represented in FTLS
– Prevents designs that are difficult to evaluate and understand
471
GEMSOS TCB Subsets
• Ring 0: Mandatory security kernel
• Ring 1: DAC layer
• Policy enforced at TCB boundary is union of
subset policies
[Figure: the TCB boundary encloses the DAC layer on top of the security kernel (MAC), which serves as the reference monitor]
472
Each Subset has its own FTLS and
Model
• Each subset was verified through a separate
Model and FTLS
• Separate proofs, too
• TCB specification must reflect union of subset
policies
473
Where in SDLC?
• Model and FTLS written when interface spec
written
• Preliminary model proofs, FTLS proofs, and
covert channel analysis performed when
implementation spec and code written
• Code correspondence, covert channel
measurements, and final proofs performed when
code is finished
• Formal verification went on simultaneously with
development
474
Goal of GEMSOS TCB Verification
• To provide assurance that TCB implements the
stated security policy
• Through chain of formal and informal evidence
– Statements about TCB functionality
– Each at different levels of abstraction
• Policy
• Model
• Specification
• Source
• TCB itself (hardware and software)
– Plus assertions that each statement is valid wrt next
more abstract level
475
Code Correspondence
• Three parts:
1. Description of correspondence methodology
2. Account of non-correlated source code
3. Map between elements of FTLS and TCB code
• FTLS must accurately describe the TCB
• TCB must be valid interpretation of FTLS
• All security-relevant functions of TCB must be
represented in FTLS
– Prevent deliberate or accidental “trap door”
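A minimal sketch in Python of part 3 as data — a map from FTLS elements to TCB entry points, with a check that every entry point is mapped (unmapped code must be explained as non-correlated); every name here is hypothetical:

# Hypothetical FTLS operations mapped to hypothetical TCB functions
ftls_to_code = {
    "create_object":  ["obj_alloc", "obj_label"],
    "get_access":     ["acl_check", "map_segment"],
    "release_access": ["unmap_segment"],
}
tcb_entry_points = {"obj_alloc", "obj_label", "acl_check",
                    "map_segment", "unmap_segment", "debug_poke"}

mapped = {fn for fns in ftls_to_code.values() for fn in fns}
unaccounted = tcb_entry_points - mapped
if unaccounted:
    # Each of these must be justified as non-security-relevant,
    # or it is a potential trap door.
    print("Account for:", sorted(unaccounted))   # -> ['debug_poke']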
476