Data and Applications Security
Developments and Directions
Dr. Bhavani Thuraisingham
The University of Texas at Dallas
Building Trusted Applications on
Untrusted Platforms for
Mission Assurance
Outline
 Mission Assurance
 Our Approach
 Acknowledgement
 References
Mission Assurance
 Mission Assurance includes the disciplined application of systems
engineering, risk management, quality, and management principles to
achieve success of a design, development, testing, deployment, and
operations process. Mission Assurance's ideal is achieving 100% customer
success every time. Mission Assurance reaches across the enterprise,
supply base, business partners, and customer base to enable customer
success.
 The ultimate goal of Mission Assurance is to create a state of resilience that
supports the continuation of an agency's critical business processes and
protects its employees, assets, services, and functions. Mission Assurance
addresses risks in a uniform and systematic manner across the entire
enterprise.
 Mission Assurance is an emerging cross-functional discipline that demands
that its contributors (project management, governance, system architecture,
design, development, integration, testing, and operations) provide and
guarantee their combined performance in use.
Mission Assurance
 The United States Department of Defense 8500-series of policies has three
defined mission assurance categories that form the basis for availability and
integrity requirements. A Mission Assurance Category (MAC) is assigned to
all DoD systems. It reflects the importance of an information system for the
successful completion of a DoD mission. It also determines the requirements
for availability and integrity.
 MAC I systems handle information vital to the operational readiness or
effectiveness of deployed or contingency forces. Because the loss of MAC I
data would cause severe damage to the successful completion of a DoD
mission, MAC I systems must maintain the highest levels of both integrity
and availability and use the most rigorous measures of protection.
Mission Assurance
 MAC II systems handle information important to the support of deployed and
contingency forces. The loss of MAC II systems could have a significant
negative impact on the success of the mission or operational readiness. The
loss of integrity of MAC II data is unacceptable; therefore MAC II systems
must maintain the highest level of integrity. The loss of availability of MAC II
data can be tolerated only for a short period of time, so MAC II systems must
maintain a medium level of availability. MAC II systems require protective
measures above industry best practices to ensure adequate integrity and
availability of data.
 MAC III systems handle information that is necessary for day-to-day
operations, but not directly related to the support of deployed or contingency
forces. Since the loss of MAC III data would not have an immediate impact on
mission effectiveness or operational readiness, MAC III systems are required
to maintain only basic levels of integrity and availability, and must be
protected by measures considered industry best practices.
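The three categories define, in effect, a lookup from MAC level to required integrity, availability, and protection measures. As a compact illustration only (our own Python sketch, with level names paraphrased from the text above, not drawn from DoD policy documents):

# Illustrative sketch: the MAC-to-requirements mapping described above,
# expressed as a lookup table. Level names are paraphrased, not official.
MAC_REQUIREMENTS = {
    "MAC I":   {"integrity": "highest", "availability": "highest",
                "protection": "most rigorous measures"},
    "MAC II":  {"integrity": "highest", "availability": "medium",
                "protection": "above industry best practices"},
    "MAC III": {"integrity": "basic",   "availability": "basic",
                "protection": "industry best practices"},
}

def requirements_for(mac: str) -> dict:
    # Look up the availability/integrity requirements assigned with a MAC.
    return MAC_REQUIREMENTS[mac]

print(requirements_for("MAC II")["availability"])   # -> medium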
Problem
 The Office of the Deputy Assistant Secretary of Defense (Information
and Identity Assurance) has stated that “the Department of
Defense's (DoD) policy, planning, and war fighting capabilities are
heavily dependent on the information technology foundation
provided by the Global Information Grid (GIG).
 However, the GIG was built for business efficiency instead of
mission assurance against sophisticated adversaries who have
demonstrated intent and proven their ability to use cyberspace as a
tool for espionage and criminal theft of data. GIG mission assurance
works to ensure the DoD is able to accomplish its critical missions
when networks, services, or information are unavailable, degraded,
or distrusted.”
Problem
 To meet the needs of mission assurance challenges, the President’s
Comprehensive National Cybersecurity Initiative (CNCI) has listed the
development of multi-pronged approaches to supply chain risk management
as one of its priorities. The CNCI states that the reality of global supply
chains presents significant challenges in thwarting counterfeit or
maliciously designed hardware and software products.
 To overcome such challenges and support successful mission
assurance we need to design flexible and secure systems whose
components may be untrusted or faulty. Our objective is to achieve
the secure operation of mission critical systems constructed from
untrusted, semi-trusted and fully trusted components for successful
mission assurance.
Problem
 In order to fight the global war on terror, the DoD, federal agencies, coalition
partners, and first responders, among others, have to use systems whose
components may be attacked and untrustworthy. These components may
include operating systems, database systems, applications, networks, and
middleware. Yet in doing so, one must ensure mission assurance: the
system has to detect such attacks and be flexible enough to provide
overall security for the mission.
 Furthermore, it is stated that “the Defense Science Board task force assessed the
Department of Defense’s (DoD) dependence on software of foreign origin and
the risks involved. The task force considered issues with supply chain
management; techniques and tools to mitigate adversarial threats; software
assurance within current DoD programs; and assurance standards within
industry, academia, and government.”
 One major challenge is to provide mission assurance by developing flexible
designs of secure systems whose components may be untrustworthy or
faulty.
DoD Mission Assurance Policy
 To ensure mission assurance in the presence of untrusted components and
systems, DoD’s policy consists of the following:
- “(i) To provide uncompromised and secure military systems to the
warfighter by performing comprehensive protection of Critical Program
Information (CPI) through the integrated and synchronized application
of counterintelligence, intelligence, security, systems engineering, and
other defensive countermeasures to mitigate risk, and
- (ii) To minimize the chance that the Department’s warfighting capability
will be impaired due to the compromise of elements or components
being integrated into DoD systems by foreign intelligence, foreign
terrorists, or hostile elements through the supply chain or system
design.”
 Our goal is to provide solutions that would implement the above policy of the
DoD.
Motivation
 Various Air Force mission-critical systems, including DCGS (Distributed Common
Ground System), AWACS (Airborne Warning and Control System), and Layered
Sensing (a major research initiative at AFRL investigating techniques for total situation
awareness), will benefit from the solutions we will provide. Consider the AWACS system,
which may operate in a hostile environment.
 In our experimental research on a next-generation AWACS for AFRL, we designed an
infrastructure, operating system, data manager, and multi-sensor integration (MSI)
application using commercial off-the-shelf technologies. Track data captured from
sensors are integrated and fused by the MSI module and stored and managed by the
data manager. The middleware and operating system provide services such as
interprocess communication and memory management to support the data manager and
MSI application.
 The sensors may be attacked in a hostile environment. The COTS products for the
middleware, operating system, and data manager could be corrupted. The application
subsystem may be attacked. Failure of any of these components could crash the entire
system, with disastrous and possibly fatal consequences. It is therefore important for
the system to operate even in the midst of component failures and attacks.
Principles
 Addressing the challenges of protecting applications and
data when the underlying platforms cannot be fully trusted
dictates a comprehensive defense strategy. Such a strategy
requires the ability to address new threats that are smaller
and more agile and may arise from the components of the
computing platforms.
 Consequently, the Observe, Orient, Decide and Act (OODA)
loop on which conventional defense approaches are based
needs to be expanded to anticipate threats. Our strategy,
based on the three tenets shown in Figure 1, recognizes and
addresses such a need. We emphasize that the adversaries
can be components of the underlying platforms, in addition to
the usual external adversaries.
Principles
 The Three Tenets of our Approach
 Move out of band and Confuse the Adversary
- Make what is critical and associated security elements less
accessible to the adversary
- Make it more difficult for the adversary to determine the target
 Detect, React, Adapt
- Be ready to move mission critical tasks to uncompromised
components within the same system or to other systems
 Focus on what is critical
- Reduce scope of what to protect
- Minimize number of security elements
Principles
Based on these tenets, we have identified a preliminary set of
technical principles that, applied in combination, guide the
development of secure environments for applications and
data.
The first technical principle, which we refer to as Increase Trust
by Limiting and Isolating Functionality (abbreviated as
Isolation Principle), directly follows from the first two tenets.
This somewhat counter-intuitive claim of “less is more” is also
based on well-established security principles, such as the
Principles of Least Privilege and Economy of Mechanism.
Principles
 The second technical principle, which we refer to as Adaptive Multiple
Independent Layers of Security (abbreviated as Independence Principle),
applies the Isolation Principle both statically and dynamically by dividing the
system software into several layers. These layers as well as the layering
organization should be amenable to dynamic changes of the various system
components. The Independence Principle directly follows from the third
tenet.
 The third technical principle, which we refer to as Artificial and Natural
Diversity (abbreviated as Diversity Principle), directly follows from the
second tenet. An example of an artificial diversity approach is
compiler tools that generate obfuscated code (e.g., SharpToolbox.com lists
26 obfuscator tools). By natural diversity we mean independently
developed software systems that have similar functionality (e.g., Windows
and Linux at OS level). To maximize diversity in system software, we apply
the Diversity Principle to both implementation (mainly artificial diversity) and
design (mainly natural diversity) of systems.
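To make the Diversity Principle concrete, the following is a minimal sketch (our own illustration, not from the referenced work) of how naturally diverse implementations can be combined at run time: the same request is issued to independently developed variants, and an answer is accepted only when a majority agrees, so a single compromised or faulty variant cannot silently corrupt the result.

# Minimal sketch of the Diversity Principle: run one request through
# independently implemented variants of the same functionality and accept
# the majority answer. The three variants stand in for naturally diverse
# components (e.g., the same service built on Windows vs. Linux).
from collections import Counter

def variant_a(xs): return sorted(xs)                    # library sort
def variant_b(xs): return sorted(xs, key=lambda v: v)   # different code path
def variant_c(xs):                                      # hand-written insertion sort
    out = []
    for v in xs:
        i = 0
        while i < len(out) and out[i] < v:
            i += 1
        out.insert(i, v)
    return out

def diverse_call(xs, variants=(variant_a, variant_b, variant_c)):
    results = [tuple(v(xs)) for v in variants]
    answer, votes = Counter(results).most_common(1)[0]
    if votes <= len(variants) // 2:
        raise RuntimeError("no majority: a variant may be compromised")
    return list(answer)

print(diverse_call([3, 1, 2]))   # -> [1, 2, 3]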
Architecture
 Our overall architectural strategy is based on our technical principles. Our strategy addresses
protection at various levels within the same system, from the application layer down to the OS,
Virtual Machine Monitor (VMM), and hardware. It is our contention that single-layer solutions
typically improve one layer by pushing performance and security limitations to other layers, with
questionable overall results. Key aspects of our strategy are the following:
 Layered ATI Architecture. Conceptually, we organize the (untrusted) system software
into layers, and for each layer, we propose the separation of a small, verifiable abstract
trusted interface (ATI) from the functionality-rich remaining untrusted software
components at that layer. This multi-layer separation can be seen as a generalization of
the MILS (multiple independent levels of security) model, where the implementation of
the ATI at each layer is analogous to the single-layer MILS separation kernel. We prefix a
layer’s ATI with a two-letter acronym: HW-ATI for the hardware layer, VM-ATI for the
virtual machine and hypervisor layer, OS-ATI for the operating system layer, MW-ATI for
the middleware layer, DB-ATI for the database management system layer, and AP-ATI if
an application is exporting a trusted application programmer interface. This is an
application of the Isolation and Independence Principles.
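The separation can be illustrated with a small sketch (ours; the interface name mirrors the layer naming above, and the method set is hypothetical): at each layer, a small, auditable trusted interface is kept apart from the functionality-rich untrusted remainder, and only the former needs to be verified.

# Sketch of the per-layer separation: a small abstract trusted interface
# (here, at the OS layer) versus the functionality-rich untrusted rest.
# The sealed_store/sealed_load methods are hypothetical examples.
from abc import ABC, abstractmethod

class OS_ATI(ABC):
    # Abstract trusted interface: a few auditable calls.
    @abstractmethod
    def sealed_store(self, name: str, data: bytes) -> None: ...
    @abstractmethod
    def sealed_load(self, name: str) -> bytes: ...

class OS_Untrusted:
    # Functionality-rich remainder: thousands of calls, beyond verification.
    def open(self, path): ...
    def fork(self): ...
    # ... the rest of a full OS API stays outside the trusted base.

class MinimalStoreLSK(OS_ATI):
    # One concrete OS-LSK: small enough that formal verification is plausible.
    def __init__(self):
        self._vault = {}
    def sealed_store(self, name, data):
        self._vault[name] = bytes(data)
    def sealed_load(self, name):
        return self._vault[name]

lsk = MinimalStoreLSK()
lsk.sealed_store("key-material", b"secret bytes")
print(lsk.sealed_load("key-material"))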
Architecture
 Natural and Artificial Diversity. The ATI architecture does not
mandate a uniquely defined ATI at each layer. There may be
multiple, overlapping ATIs at any layer. Furthermore, concrete
diverse implementations may exist for the same or different ATIs
(e.g., Intel TXT and AMD SVM at the hardware layer). One of the
main design goals of the ATI Architecture is the active incorporation
of diverse designs and implementations at each layer. For diverse
implementations, we will use artificial diversity tools (e.g., compiler
tools for creating different program representations).
 For diverse designs, we will rely on natural diversity of different
system components (e.g., variants of Unix and possibly Windows at
the OS layer). This natural diversity offers protection beyond the
implementation-level protection of artificial diversity tools.
Architecture
 The Abstract Trusted Interface (ATI) Architecture
 HW-ATI (Intel TXT, AMD SVM, secure co-processors)
 HW-Untrusted
 VM-ATI (Xen, VMware)
 VMM-Untrusted
 OS-ATI (seL4 microkernel)
 OS-Untrusted
 MW-ATI (secure comm.)
 Middleware-Untrusted
 DB-ATI (co-DBMS)
 DBMS-Untrusted
 AP-ATI
 Application-Untrusted
Architecture
 Composition of ATIs. The main constraint of the implementation of an ATI is that it may
only use the facilities provided by ATIs of lower layers.
 The advantage of this restriction is that by utilizing only trusted components, the
implementation of ATIs of higher layers remains trusted. The requirement is that the ATIs
at each layer have limited, well-specified functionality, such that the resulting ATI
implementation can be formally verified.
 This follows from the Isolation Principle. The ATI architecture is flexible and dynamically
reconfigurable. As new software or hardware components are verified to be secure at
each layer, that layer’s ATI may be expanded by incorporating the new components.
Conversely, if a new vulnerability is discovered in some implementation of ATIs, the
affected components (and higher level components that use this implementation) should
be excluded from the trusted side. Until the problem is fixed, the system falls back to
reduced ATI functionality that remains trusted.
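The composition rule and the fall-back behavior can be sketched as follows (our illustration; the component names are hypothetical): an implementation is trusted only while every lower-layer ATI it uses remains trusted, so revoking one component automatically excludes everything built on it until the problem is fixed.

# Sketch of the composition constraint: trust in a component holds only if
# all lower-layer components it uses are still trusted. Revoking one
# component untrusts its dependents, forcing a fallback to reduced ATIs.
class TrustRegistry:
    def __init__(self):
        self._uses = {}        # component -> lower-layer components it uses
        self._revoked = set()

    def register(self, component, uses=()):
        self._uses[component] = set(uses)

    def revoke(self, component):
        # A vulnerability was discovered in this implementation.
        self._revoked.add(component)

    def is_trusted(self, component):
        if component in self._revoked:
            return False
        return all(self.is_trusted(dep) for dep in self._uses[component])

reg = TrustRegistry()
reg.register("HW-LSK/TXT")
reg.register("VM-LSK/xen-subset", uses=["HW-LSK/TXT"])
reg.register("OS-LSK/store", uses=["VM-LSK/xen-subset"])
reg.revoke("VM-LSK/xen-subset")               # new vulnerability found
print(reg.is_trusted("OS-LSK/store"))         # False until the fix arrives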
Architecture
 In real-time computing, when a precise computation is at risk
of missing a deadline, alternative modules implemented with
reduced computational precision in an approach called Imprecise
Computation can still complete a simplified task within the time
constraint. Analogously, we will explore alternative designs that use
different underlying ATIs for a given critical functionality (perhaps
more limited than the original one). When a full-functionality
application is affected by attacks or newly discovered vulnerabilities,
these alternative designs and implementations based on different
ATIs provide secure critical functionality.
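A minimal sketch of this fallback idea (ours; the fusion functions are hypothetical stand-ins): attempt the precise computation under a deadline, and when it cannot complete in time, return the result of a simplified alternative instead.

# Sketch of deadline-driven fallback in the spirit of Imprecise Computation:
# try the precise path under a time budget; on timeout, use a cheap
# approximation. (Note: the pool's shutdown still waits for the abandoned
# precise attempt; a real system would run it in a separately killable
# component.)
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def precise_fusion(readings):
    time.sleep(2.0)                    # stand-in for an expensive computation
    return sum(readings) / len(readings)

def imprecise_fusion(readings):
    return readings[-1]                # cheap approximation: latest reading

def fuse_with_deadline(readings, deadline_s=0.5):
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(precise_fusion, readings)
        try:
            return future.result(timeout=deadline_s), "precise"
        except TimeoutError:
            return imprecise_fusion(readings), "imprecise fallback"

print(fuse_with_deadline([4.0, 5.0, 6.0]))   # -> (6.0, 'imprecise fallback')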
Threat Model
 Threats from hardware. Once hardware is compromised, many properties can
be compromised. For example, if an adversary has access to the system bus
or CPU chip connectors, then any application code and critical data (e.g.,
cryptographic keys) can be compromised. Although the introduction of
trusted platform modules (TPM) alleviates the problem to some extent, we
must bear in mind that TPMs were designed to deal with software attacks,
rather than hardware attacks.
 Therefore, trustworthiness of hardware may be deemed as the last line of
defense. Nevertheless, novel hardware architectures could better support
secure computing from the perspectives of security, performance, and
usability. For certain hardware devices such as expensive co-processors, we
may assume they are tamper-resistant or even tamper-proof; for low-end
hardware devices we may only assume they are fail-stop.
Threat Model
 Threats from the hypervisor. If the hypervisor is
compromised, then the guest OS, and even the applications and
data, can be compromised. It would be desirable for only a
tiny portion of the hypervisor or microkernel to be trusted.
 Threats from the OS. There is a spectrum of threats that can
be launched at the OS level to attack applications and data:
 Attacks exploiting OS vulnerabilities: For example, an
attacker could gain access to a memory region from which
critical secrets and passwords may be extracted.
 Attacks exploiting backdoors in the OS to passively tap
system states: such attacks can be launched by the
developer of some components of the OS (e.g., a third-party
device driver).
Threat Model
 Corrupted OS: Threats could originate from attacks on services provided by
the OS, such as:
- Attacks against the file system: The attacker can read or tamper with
application binaries, or replace an application with code that simply
prints out its sensitive data. S/he might also launch a replay attack by
reverting a file to an earlier version, e.g., replacing a patched application
with an earlier version that contains a buffer overflow.
- Attacks against interprocess communication: Although the OS may not
interfere with program execution, it might be able to do so when a new
process is started. For example, when a process forks, it might initialize
the child’s memory with malicious code instead of the parent’s, or set
the starting instruction pointer to a different location. Signal delivery
also presents an opportunity for a malicious OS to interfere with
program control flow, since the standard implementation involves the
OS redirecting a program’s execution to a signal handler.
Threat Model
- Attacks against memory management: The attacker may deliberately
manipulate memory management so as to cause frequent context
switches that may be exploited to launch side-channel attacks against
cryptographic algorithms (for extracting cryptographic keys).
- Attacks against the clock: A malicious OS could speed up or slow down
the system clock, which could allow it to subvert expiration mechanisms
that use clocks.
- Attacks against randomness: A malicious OS can manipulate the
randomness that is often demanded by cryptographic applications.
- Attacks against I/O and trusted path: An application’s input and output
paths to the external environment go through the OS, including display
output and user input. The OS can observe traffic across these
channels, capturing sensitive data as it is displayed on the screen or as
the user types it in (e.g., passwords). It could also send fake user input
to a protected application, or display malicious output, such as a fake
password entry window.
Threat Model
 Threats from the middleware. Trusted communications is a
fundamental assumption in distributed applications. For
example, if messages arrive corrupted or subtly altered, the
application would be unable to provide the correct answer or
take the right action. Also, if the attacker can prevent some
key players from receiving certain data, the decision-making
process can be undermined.
 Threats from the DBMS. Since data are often managed through
a DBMS, a malicious DBMS could undermine data integrity,
confidentiality, and availability. Because a DBMS is often a
large and complex software system, the security assurance of
the DBMS itself is questionable.
Trusted Architecture Components
 Step 1: Evaluation of several design alternatives towards the definition of an
abstract HW-ATI as the interface to the HW-LSK (the hardware-layer Layered
Separation Kernel). The goal of the abstract HW-ATI is to accommodate the
functionality variations in concrete alternatives such as Intel TXT and AMD
SVM, among others. Other candidate HW-LSKs (implementations of HW-ATI) include:
 An on-die secure co-processor (part of a system-on-chip).
 Special functions integrated into the processor’s pipeline for confidentiality
and integrity.
 Special hardware features for tracking critical or sensitive information flow to
dynamically shepherd the legitimacy of an application’s execution path.
 An isolated, dedicated core of a multi-core processor for interpreting
hypervisor calls and managing physical resources under a virtualized
computing environment.
 A trusted platform module (TPM) that securely stores master keys.
 In addition to the evaluation of security strength of the above potential techniques, a
feasible secure component implemented at the hardware level must minimize the impact
of design complexity as well as keep the performance degradation under a reasonable
constraint. We will analyze such implications and demonstrate them using simulation or
emulation.
Trusted Architecture Components
 Step 2: Development of higher level ATIs and LSKs. For example, we plan to
define an instance of VM-ATI based on the HW-ATI, and evaluate several
concrete implementations of VM-LSK (see the next section), built on the HW-LSKs (see the examples above). Using the multi-core alternative as an
illustrative example, one “secure core” can be completely and securely
isolated from the external network (i.e., creating a secure sandbox) while the
other (untrusted) cores run service applications. All system level interactions
such as I/O operations or memory allocation, however, will be monitored and
scrupulously carried out by the isolated core.
 There are several examples of more tightly coupled interactions between
higher level LSKs and HW-LSK. For example, instead of running the VM-LSK
in a shared memory space, it can be isolated to execute in a protected
memory region only accessible by the secure core. This complete, physical
isolation of the VM-LSK memory from all other hypervisor functions and
guest operating systems avoids potential tampering by malicious
hypervisors and untrusted OS’s.
Trusted Microkernels and Hypervisors
 Step 1: Given our assumption that OS’s cannot be trusted, we will
consider a VMM as a natural component of the LSK. At the VMM
level of the platform, step 1 is also the evaluation of design
alternatives towards the definition of an abstract VM-ATI, including
the following:
- Verified secure microkernel interfaces
- Secure subsets of current hypervisors
- Custom execution environments
 Although the assumption that an entire hypervisor can be trusted is
quite common today, we do not consider it a valid alternative for the
future. This is due to a combination of several trends: (1) the lack of
a formal proof for any of the current hypervisors; (2) the expected
growth of hypervisor code in general, which appears to outpace the
capability increase of formal methods; and (3) already published
vulnerabilities in Xen and malicious hypervisors.
Trusted Microkernels and Hypervisors
 Step 2: Choose a small set of winning designs and implementations for the definition of
VM-ATI and the construction of corresponding VM-LSKs that implement the VM-ATI. The
candidate platforms include the L4 microkernel and a refactoring of hypervisors such as
Xen and VMware (by implementing VM-ATI using their facilities). The alternative choices
will be evaluated by studying the different vulnerabilities in design and implementation.
In terms of natural diversity, we are particularly interested in implementations that have
independent failure modes and vulnerabilities, since their combination can improve
overall system security and availability. The VM-ATI and VM-LSKs will be used in our
research at the hardware layer, as mentioned in the previous section. At the
hardware/hypervisor interface, the performance penalty imposed by the mapping
between VM-LSKs (e.g., subset of Xen) and HW-LSKs (e.g., SVM/TXT) will determine the
suitability of each HW-LSK for the implementation of VM-LSKs.
 Step 3: Adopt the best combination of HW-ATI/VM-ATI and HW-LSK/VM-LSK from the
evaluation studies. (By best combination we mean both performance and independence
of failure modes.) These layers will be experimentally evaluated along the dimensions
mentioned in step 1 (e.g., performance and verifiability) through their use by higher
layers (MW-ATI, DB-ATI, and applications).
Trusted Operating Systems and Software
 A major “security bottleneck” in an untrusted system is the many ways a
malicious OS kernel may damage both application software and data. Since
both the typical OS code base (on the order of millions of lines of code) and
APIs (on the order of thousands of kernel calls) have grown beyond the
reach of formal verification methods, we must work on subsets of OS
functionality.
 Step 1: Combine a supply-side study with a demand-driven study to
converge on the design choices of appropriate OS-ATIs and OS-LSKs that
implement them. The supply-side study (a bottom-up approach) will start
from verified microkernel APIs such as seL4, and carefully add small
increments of critical functionality. This work will be done in cooperation
with the development of VM-ATI/VM-LSK combinations outlined in the
previous section. The demand-driven study (a top-down approach) will start
from concrete trusted components at higher layers (applications, DBMS, or
middleware) to determine the most useful facilities.
Trusted Operating Systems and Software
 Step 2: The OS-ATI chosen in step 1 is implemented and incorporated into
OS-LSKs by carefully adding the desired functionality on top of underlying
trusted components (concrete implementations of VM-ATI and HW-ATI). This
work will proceed by careful refactoring of current OS kernels (including
production kernels and custom kernels developed for secure applications)
combined with the development from scratch of appropriate ATI
functionality. Since the total size of OS-ATI/OS-LSK combinations is limited
by the projected capabilities of formal verification methods, we anticipate a
good implementation to be achievable within a reasonable time.
 Step 3: Use the top-down study to define the ATIs and their implementations
(LSKs) at the application, DBMS, and middleware layers. These concrete
implementations (MW-LSK, DB-LSK, AP-LSK) will provide the concrete
evaluation drivers of the OS-ATI design and OS-LSK implementation, along
the dimensions mentioned above. Again, the choice of designs and
implementations that have independent vulnerabilities will enhance the
overall security and resilience of Layered Separation Kernels as a whole.
Secure Middleware/Data
 Due to the relatively advanced state of trusted communication services and the increasing
capabilities of emerging systems, our work at the middleware level will focus on the
evaluation of existing techniques and software tools for incorporation into MW-ATI.
 Data represent a critical asset for systems such as the GIG. Current applications rely on
the underlying DBMS and OS for querying and storing data. For instance, authentication
and access control are often performed solely by the DBMS, whereas buffer
management and persistent storage are jointly supported by the DBMS and the OS (e.g.,
shared memory and I/O are controlled by the OS kernel). However, the DBMS cannot be
trusted, for two main reasons: first, it is a complex software system that cannot be
formally verified, and therefore may contain security vulnerabilities; second, it is
provided by third parties that are not fully trusted. A compromised DBMS may leak
confidential data, or may prevent applications from computing correct results (e.g., the
compromised DBMS introduces errors in the execution plan or drops query results). Our
goal is to protect the confidentiality and integrity of the data used in mission-critical
applications.
Secure Data
 Securing data must address two equally important aspects:
 On-line protection is concerned with providing a correct and complete view of the data to
applications. This aspect is crucial for the decision-making process in critical
applications. An important part of on-line protection is access control enforcement,
which must be performed by a trusted component in order to prevent leakage of critical
data to unauthorized users. In addition, malicious processes (e.g., a compromised virtual
memory manager) must be prevented from eavesdropping on the data contents.
 Off-line protection is concerned with securely storing the data, even when it is not used
by applications. This is a challenging aspect, since malicious entities have many
opportunities to attack the data, either through the DBMS or through raw access at the
OS level. In fact, most DBMSs rely on the underlying untrusted OS to
handle storage of data to disk, potentially disclosing sensitive data. Data leakage must be
prevented even in the worst-case scenario when the attacker gains unrestricted physical
access to the storage media (e.g., by physically removing the disk).
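The off-line requirement, encrypting and authenticating data inside the trusted component before it ever reaches the untrusted storage path, can be sketched as follows (our illustration, assuming the third-party pyca/cryptography package; this is not the Co-DBMS implementation itself):

# Sketch of off-line protection: data is encrypted and authenticated before
# it reaches untrusted storage, so raw access to the media reveals nothing
# and tampering is detected on load. Requires: pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

class SealedStore:
    def __init__(self, backing: dict):
        self._fernet = Fernet(Fernet.generate_key())  # key stays on the trusted side
        self._backing = backing                       # untrusted stand-in for OS/disk

    def put(self, name: str, plaintext: bytes) -> None:
        self._backing[name] = self._fernet.encrypt(plaintext)

    def get(self, name: str) -> bytes:
        try:
            return self._fernet.decrypt(self._backing[name])
        except InvalidToken:
            raise RuntimeError(name + ": ciphertext tampered with or corrupted")

disk = {}                                  # what an attacker could read or modify
store = SealedStore(disk)
store.put("track-042", b"fused sensor track")
disk["track-042"] = disk["track-042"][:-1] + b"X"     # off-line tampering
try:
    store.get("track-042")
except RuntimeError as e:
    print(e)                               # tampering is detected, not silently read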
Secure Data
 We will address these requirements by developing a secure DB-ATI layer, which consists
of trusted computing and storage modules. We also refer to the implementation of
DB-ATI as Co-DBMS (a term used interchangeably with DB-LSK) to emphasize that it is a
component to be coupled with a conventional DBMS to achieve high-assurance data
management and to isolate the critical data from a potentially malicious DBMS. The
DB-ATI (Figure 7) acts as an intermediate tier between critical applications and the
untrusted DBMS. Applications send data access requests to the DB-ATI, together with
runtime context information. Such runtime-specific data allows the Co-DBMS to verify
that the application has the proper access credentials, and to distinguish between
separate application instances/sessions (e.g., distinct encryption keys for each
application instance can be handled transparently within the secure DB-ATI layer). From
the system’s point of view, the Co-DBMS will run in user space managed by the OS-LSK,
using only lower-level ATIs (MW-ATI, OS-ATI, VM-ATI, and HW-ATI).
Secure Data
 Although our main objective is to use the DB-ATI in conjunction with the LSK platform,
we emphasize that the DB-ATI layer can also be deployed in other settings, such as:
 Trusted OS: if the entire OS is trusted (e.g., an in-house developed OS kernel) a custom
version of the Co-DBMS can be deployed directly on top of the custom OS. Thus,
protection against an un-trusted DBMS engine can be achieved with lower performance
overhead.
 Trusted virtual machine environments: in scenarios in which a (presumed secure) VMM
provides secure data management facilities, the DB-ATI can be deployed on top of
untrusted OS and remain protected by the secure VMM.
 The Co-DBMS will interface with applications and DBMS in a transparent manner,
without requiring significant changes to the existing infrastructure. We plan to
encapsulate the DB-ATI functionality within a self-contained module, uniformly
accessible by applications through an ODBC-like driver (part of DB-LSK).
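As a simple sketch of this mediation idea (ours, with sqlite3 standing in for the untrusted DBMS; the table and session names are hypothetical), the trusted tier checks the caller's session credentials before any SQL reaches the engine:

# Sketch of the Co-DBMS as an intermediate tier: the application calls an
# ODBC-like driver; access control happens on the trusted side, not in the
# untrusted DBMS (sqlite3 is only a stand-in engine here).
import sqlite3

class CoDBMS:
    def __init__(self, authorized_sessions):
        self._authorized = set(authorized_sessions)
        self._db = sqlite3.connect(":memory:")        # the untrusted engine
        self._db.execute("CREATE TABLE tracks (id INTEGER, label TEXT)")

    def execute(self, session_token, sql, params=()):
        # Trusted-side access control before forwarding the request.
        if session_token not in self._authorized:
            raise PermissionError("session not authorized for data access")
        return self._db.execute(sql, params).fetchall()

codbms = CoDBMS(authorized_sessions={"session-A"})
codbms.execute("session-A", "INSERT INTO tracks VALUES (?, ?)", (1, "hostile"))
print(codbms.execute("session-A", "SELECT * FROM tracks"))   # [(1, 'hostile')]
try:
    codbms.execute("session-B", "SELECT * FROM tracks")
except PermissionError as e:
    print(e)                               # rejected on the trusted side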
Secure Data
 In a typical client-server application, the Co-DBMS will mediate SQL calls from the
application to a database server. Since the DB-ATI layer resides within the client, the
Co-DBMS can support full-fledged SQL data access. On the other hand, in a large-scale
distributed environment, situations arise where data need to be relocated. For
instance, to reduce access time, data may be cached at remote sites. In other cases,
computational constraints may require data transfer (e.g., certain processing steps may
only be performed by specialized hardware). In such scenarios, it is necessary to design
mechanisms that protect the data regardless of their physical location. This is addressed
by developing light-weight self-contained secure DB-LSK implementations that will be
packaged together with the data. Such sticky DB-LSK components will still provide
standardized interfaces to the data, but may only support a restricted subset of data
access operations.
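A sticky component of this kind might look as follows (a deliberately thin sketch of ours; encryption and signing are omitted to keep the interface idea visible): the data travels with its own accessor, and that accessor exposes only a restricted subset of operations wherever the bundle is cached or relocated.

# Sketch of a "sticky" DB-LSK: data packaged together with a light-weight
# accessor that offers only restricted operations (here, point lookups),
# regardless of where the bundle physically resides. Encryption omitted.
class StickyBundle:
    def __init__(self, records):
        self._records = dict(records)     # sealed payload travels with the code

    def lookup(self, key):
        # Restricted subset: keyed lookup only; no scans, no bulk export API.
        return self._records.get(key)

bundle = StickyBundle({"track-7": "friendly", "track-9": "hostile"})
print(bundle.lookup("track-9"))           # -> 'hostile'
# There is deliberately no method to enumerate or dump all records.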
Secure Data
 The Co-DBMS will ensure correctness and completeness of database query results, and
will secure storage by encrypting and signing the data before it is sent to the storage
media. In addition, the Co-DBMS will verify (through efficient provable data possession
protocols) that the data are not discarded due to a compromised OS or due to hardware
failures. The trustworthiness of the Co-DBMS can be guaranteed due to several factors:
- the DB-ATI will support standard specialized primitives for which it is possible to
perform formal verification (e.g., encryption, digital signatures);
- whenever it is viable, we will push the DB-ATI functionality down to
appropriate lower-level trusted ATIs. This approach has two advantages: (i) code
executed by specialized hardware is robust to ubiquitous attacks on standard
computer architectures (e.g., buffer overflow); (ii) the operations implemented at
lower ATI levels are faster.
Secure Data
 The functionality provided by the secure DB-ATI will include:
- Access control: this fundamental security primitive must be executed on a trusted
component, and will be implemented in software.
- Encrypting and signing data: such standard cryptographic primitives are essential
for secure storage and will be implemented in hardware, where possible. In certain
cases, if the MW-ATI layer supports such primitives, the DB-ATI can invoke these
services directly.
- Processing and indexing on top of encrypted data: these primitives are essential
for query processing (e.g., comparisons on encrypted values), and will be
implemented in software, using techniques such as functional encryption and
oblivious RAM.
- Provable data possession primitives: a compromised OS may attempt to delete the
data stored on disk. We will explore efficient methods (i.e., with low I/O and
computational overhead) to check the integrity of storage on untrusted media; a
simple sketch of the idea follows.
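The following sketch (ours) conveys only the spot-check idea behind provable data possession: before handing blocks to untrusted storage, the trusted side precomputes a few single-use challenge/response pairs, and can later verify that randomly chosen blocks are still held intact without reading everything back. Real PDP protocols are far more sophisticated (unbounded challenges, homomorphic tags).

# Sketch of a PDP-style spot check: precomputed single-use challenges let
# the trusted side verify, with low I/O, that untrusted storage still holds
# the data. A fresh nonce per challenge prevents precomputed answers.
import hashlib, os, random

def make_challenges(blocks, n):
    # Trusted side, before upload: precompute expected answers.
    audits = []
    for _ in range(n):
        idx = random.randrange(len(blocks))
        nonce = os.urandom(16)
        audits.append((idx, nonce, hashlib.sha256(nonce + blocks[idx]).digest()))
    return audits

def prover_respond(stored_blocks, idx, nonce):
    # Untrusted storage side: can only answer if block idx is still intact.
    return hashlib.sha256(nonce + stored_blocks[idx]).digest()

blocks = [os.urandom(64) for _ in range(8)]    # data given to untrusted storage
audits = make_challenges(blocks, n=3)          # kept on the trusted side

idx, nonce, expected = audits[0]
print(prover_respond(blocks, idx, nonce) == expected)   # True: block still held
blocks[idx] = b"\x00" * 64                     # a compromised OS discards it
print(prover_respond(blocks, idx, nonce) == expected)   # False: loss detected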
Secure Applications
 Complex software applications typically implement a mixture of security-critical and
security-irrelevant functionality. For example, a sink for a wireless sensor network might
implement a database for storing confidential data collected from sensors as well as
a graphical user interface. Unfortunately, due to the monolithic design of
many of these applications, vulnerabilities in the security-irrelevant portion can lead to a
compromise of the security-critical component. For instance, a buffer overflow
vulnerability in the graphical user interface could be leveraged by an attacker to take
control of the process and thereby access the secure database. Since it is generally not
feasible to formally verify all components of large, production-level applications, some
form of secure program partitioning is needed.
 Our architecture design addresses this issue by dividing each system layer into a
functionality-rich untrusted component and a smaller, functionality-limited abstract
trusted interface (ATI). Application code in our architecture must therefore be likewise
partitioned to isolate its non-critical components from its security-critical components,
and to obtain trusted services exposed by the secure database system (DB-ATI),
middleware (MW-ATI), and operating system (OS-ATI) layers.
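The partitioning discipline can be sketched as follows (our illustration; in-process Python provides no real isolation, so this shows only the interface boundary that a separate process, enclave, or our layered ATIs would enforce): the security-critical partition holds the key material and exports a narrow interface, while the functionality-rich untrusted partition never touches the secret directly.

# Sketch of secure program partitioning: a small, auditable critical core
# holds the secret behind a narrow interface; the rich, untrusted frontend
# obtains results, never key material. Real deployments would place the
# core in a separate process or enclave; Python alone does not isolate it.
import hmac, hashlib

class CriticalCore:
    def __init__(self, key: bytes):
        self._key = key                           # must never leave this partition

    def authenticate_record(self, record: bytes) -> str:
        # Narrow AP-ATI-style call: returns a MAC, not the key.
        return hmac.new(self._key, record, hashlib.sha256).hexdigest()

class UntrustedFrontend:
    # Functionality-rich partition: parsing, rendering, user input, ...
    def __init__(self, core: CriticalCore):
        self._core = core                         # holds a capability, not the key

    def display(self, record: bytes):
        tag = self._core.authenticate_record(record)
        print("record:", record, "tag:", tag[:16] + "...")

UntrustedFrontend(CriticalCore(b"sensor-sink-key")).display(b"temp=21.5")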
Benefits of the approach
 From the application point of view, our approach provides an end-to-
end solution in protecting mission-critical applications and data
from untrusted components at each layer of system software. This
is achieved by a combination of small trusted interfaces at each
layer (Isolation Principle), independently implemented trusted
components for the trusted interfaces (Independence Principle), and
variety of implementations (Diversity Principle), which is a powerful
innovation at the system level.
Relevance to DoD Mission Assurance
 Our research will be directly applicable to DoD’s Mission Assurance
Objectives, as the systems that will be developed based on our
principles will ensure security even if the components are
compromised.
 For example, consider the AWACS experimental system discussed
in our scenario. Attacks may occur at the sensor level, the application
level, or at any other component level. Our approach will ensure that
the system will be in operation even if the components such as
middleware or the operating systems are attacked. We believe that
the theories, tools and technologies to be developed under this
project can be applied to build the next generation of mission
assurance technologies for the military.
Acknowledgement
 Investigation of Mission Assurance, carried out jointly with:
 Purdue University (Purdue)
 Georgia Institute of Technology (GATech)
 University of Texas at Dallas (UTD)
 University of Texas, San Antonio (UTSA)
 University of Dayton (UDayton)
 University of California, Irvine (UCI)
References
 Mission Assurance, http://en.wikipedia.org/wiki/Mission_assurance
 Thuraisingham et al., Securing the Execution Environment Applications
and Data from Multi-Trusted Components, Technical Report UTDCS-03-10,
The University of Texas at Dallas, 2010.