AFOSR Review 2008 - The University of Texas at Dallas

Information Operation
across Infospheres:
Assured Information Sharing
Prof. Bhavani Thuraisingham
Prof. Latifur Khan
Prof. Murat Kantarcioglu
Prof. Kevin Hamlen
The University of Texas at Dallas
Prof. Ravi Sandhu
UT San Antonio
June 2008
Architecture
[Figure: coalition architecture. Components holding data/policy for Agency A, Agency B, and Agency C each export data/policy into a shared data/policy store for the coalition. Partners are classified as trustworthy, semi-trustworthy, or untrustworthy.]
Our Approach
• Integrate and mine the Medicaid claims data; then enforce
policies and determine how much information has been lost
(trustworthy partners); prototype system
• Trust for peer-to-peer networks
• Apply game theory and probing to extract information
from semi-trustworthy partners
• Conduct information operations (defensive and
offensive) and determine the actions of an untrustworthy
partner
• Examine RBAC and UCON for coalitions (UT San
Antonio)
• Funding: AFOSR $300K; Texas Enterprise Funds $150K
for students; $60K+ for faculty summer support; $45K+ for
postdoc
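The information-loss measurement in the first bullet (the later slides call it a "release factor") can be sketched in a few lines. The definition below, released records over requested records, is an assumption for illustration, not the project's exact metric:

```python
# Illustrative information-loss measure of the kind the project tracks.
# ASSUMPTION: "release factor" here means the fraction of requested
# records that survive policy enforcement; the project's actual
# definition may differ.

def release_factor(requested: int, released: int) -> float:
    """Fraction of requested records that survive policy enforcement."""
    if requested == 0:
        return 1.0  # nothing was asked for, so nothing was lost
    return released / requested

# e.g. policies filtered 1,000 requested claims down to 850
rf = release_factor(1000, 850)
print(f"release factor = {rf:.2f}, information lost = {1 - rf:.2%}")
# release factor = 0.85, information lost = 15.00%
```

The complement (1 - release factor) is then the share of information withheld by policy enforcement.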
Accomplishments to date
• FY06: Presented at 2006 AFOSR Meeting
- Investigated the amount of information lost due to
policy enforcement – Considered release factor
- Preliminary research on RBAC/UCON; Game
theory approach, Defensive operations
• FY07: Presented at 2007 AFOSR Meeting
- Initial prototype
- Penny for P2P Trust, Some results on applying
Game Theory, Data mining for Code blocker (with Penn
State), RBAC/UCON-based model
• FY08: 2008 AFOSR Meeting
- Enhanced prototype - integration into the Intelligence
Community's Blackbook environment, incentive-based
information sharing, defensive and offensive operations
Policy Enforcement Prototype
Dr. Mamoun Awad (postdoc) and students
Coalition
Architectural Elements
of the Prototype
•Policy Enforcement Point (PEP):
•Enforces policies on requests sent by the Web Service.
•Translates this request into an XACML request; sends it to the PDP.
•Policy Decision Point (PDP):
•Makes decisions regarding the request made by the web service.
•Returns its decision (the XACML response) to the PEP.
Policy Files:
 Policy files are written in the XACML policy language. Policy files specify rules for
“targets”. Each target is composed of three components: Subject, Resource, and Action;
each target is identified uniquely by its components taken together. The XACML request
generated by the PEP contains the target. The PDP’s decision making capability lies in
matching the target in the request file with the target in the policy file. These policy files
are supplied by the owner of the databases (Entities in the coalition).
Databases:
The entities participating in the coalition provide access to their databases.
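The PEP/PDP interaction described above can be reduced to a short sketch. The XACML machinery is collapsed into plain Python: a "target" is the (subject, resource, action) triple, and the PDP permits a request only when that triple matches a rule from a policy file. All class and rule names here are illustrative, not the prototype's actual API:

```python
# Minimal sketch of the PEP/PDP flow, assuming targets are matched
# exactly on (subject, resource, action). Real XACML also supports
# attribute matching, obligations, and combining algorithms.

from typing import NamedTuple

class Target(NamedTuple):
    subject: str
    resource: str
    action: str

class PolicyDecisionPoint:
    def __init__(self, rules: dict[Target, str]):
        self.rules = rules  # target -> "Permit" / "Deny"

    def decide(self, target: Target) -> str:
        # XACML returns NotApplicable when no rule's target matches.
        return self.rules.get(target, "NotApplicable")

class PolicyEnforcementPoint:
    def __init__(self, pdp: PolicyDecisionPoint):
        self.pdp = pdp

    def handle(self, subject: str, resource: str, action: str) -> bool:
        # Translate the web-service request into a target, ask the PDP,
        # and enforce its decision.
        decision = self.pdp.decide(Target(subject, resource, action))
        return decision == "Permit"

# Example policy supplied by one coalition member (illustrative values).
rules = {Target("agency_a", "claims_db", "read"): "Permit"}
pep = PolicyEnforcementPoint(PolicyDecisionPoint(rules))
print(pep.handle("agency_a", "claims_db", "read"))   # True
print(pep.handle("agency_b", "claims_db", "write"))  # False
```

Each coalition entity supplies its own rule set, mirroring the slide's point that policy files come from the database owners.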
UTSA Research
• Investigated specifying RBAC policies in OWL
(Web Ontology Language)
• Developed a model called ROWLBAC
• Investigating the enforcement of UCON in OWL
or an OWL-like language
• Prototype in development
• Goal is to specify and reason about security
policies using semantic web-based specification
languages and reasoning engines
• Paper to be presented at ACM SACMAT June
2008
• Collaboration between UTSA-UTD-UMBC-MIT
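The kind of RBAC reasoning that ROWLBAC encodes in OWL, a role hierarchy where senior roles inherit the permissions of junior roles, can be illustrated in plain Python. The role names, hierarchy, and permissions below are invented for the sketch and are not the ROWLBAC ontology itself:

```python
# Toy RBAC hierarchy with permission inheritance. An OWL reasoner
# derives the same closure from subclass/property axioms; here the
# recursion over junior roles plays that part.

# role -> set of directly junior roles (assumed example hierarchy)
HIERARCHY = {"director": {"analyst"}, "analyst": {"employee"}, "employee": set()}
# role -> directly assigned permissions (assumed example assignments)
PERMS = {"employee": {"read_public"},
         "analyst": {"read_claims"},
         "director": {"share_coalition"}}

def effective_permissions(role: str) -> set[str]:
    """Permissions of a role plus everything inherited from junior roles."""
    perms = set(PERMS.get(role, ()))
    for junior in HIERARCHY.get(role, ()):
        perms |= effective_permissions(junior)
    return perms

print(sorted(effective_permissions("director")))
# ['read_claims', 'read_public', 'share_coalition']
```

Expressing the same hierarchy in OWL lets a standard semantic-web reasoner, rather than hand-written code, compute the permission closure, which is the goal stated above.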
Publications and Plans
• Some Recent Publications:
• Assured Information Sharing: Book Chapter on Intelligence and Security
Informatics, Springer, 2007
• Data Mining for Malicious Code Detection, Journal of Information Security
and Privacy, Accepted 2007
• Enforcing Honesty in Assured Information Sharing within a Distributed
System, Proceedings IFIP Data Security Conference, July 2007
• Confidentiality, Privacy and Trust Policy Management for Data Sharing,
IEEE POLICY, Keynote address, June 2007 (Proceedings)
• Data Mining for Security Applications, Keynote talk at Intelligence and
Security Informatics Conference, June 2008
• Centralized Reputation in Decentralized P2P Networks, ACSAC 2007
• ROWLBAC, to be presented at ACM SACMAT June 2008
• Also, units on assured information sharing in courses we teach at AFCEA
(November 2007, April 2008, May 2008)
• Plans:
• This research was instrumental in developing ideas for the Assured
Information Sharing MURI. The first two parts will be transitioned into the
MURI work led by UMBC. Will investigate opportunities for data mining for
botnet research with UIUC. Will also develop a white paper on offensive
operations.
Distributed Information
Exchange
• Multiple, sovereign parties wish to cooperate
– Each carries pieces of a larger information
puzzle
– Can only succeed at their tasks when
cooperating
– Have little reason to trust or be honest with
each other
– Cannot agree on single impartial governing
agent
– No one party has significant clout over the
rest
– No party innately has perfect knowledge of
the full picture
Game Theory
• Studies such interactions through
mathematical representations of gain
– Each party is considered a player
– The information they gain from each
other is considered a payoff
– Scenario considered a finite
repeated game
• Information exchanged in discrete
‘chunks’ each round
• Situation terminates at a finite yet
unforeseeable point in the future
– Actions within the game are to either
lie or tell the truth
Withdrawal
• Much of the work in this area only
considers sticking with available actions
– E.g., Tit-for-Tat: mimic the other player’s
moves
• All players initially play this game with
each other
– Fully connected graph
– Initial level of trust inherent
• As time goes on, players that deviate
are simply cut off
– A player that is cut off no longer receives
information
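The cut-off dynamic above can be sketched as a small state machine: a player starts by trusting, mimics its partner's last move (Tit-for-Tat), and withdraws permanently once the partner lies. Move names and the four-round trace are invented for illustration:

```python
# Tit-for-Tat with permanent withdrawal, as described on the slide.
# ASSUMPTION: moves are "truth", "lie", or "withdraw"; a single lie
# triggers the cut-off.

class TitForTatWithWithdrawal:
    def __init__(self):
        self.partner_last = "truth"  # initial level of trust is inherent
        self.withdrawn = False

    def move(self) -> str:
        if self.withdrawn:
            return "withdraw"     # cut off: no further information exchanged
        return self.partner_last  # otherwise mimic the partner's last move

    def observe(self, partner_move: str) -> None:
        self.partner_last = partner_move
        if partner_move == "lie":
            self.withdrawn = True  # deviation detected: cut the partner off

player = TitForTatWithWithdrawal()
history = []
for partner_move in ["truth", "truth", "lie", "truth"]:
    history.append(player.move())
    player.observe(partner_move)
print(history)  # ['truth', 'truth', 'truth', 'withdraw']
```

Note the one-round lag inherent to Tit-for-Tat: the lie in round 3 is only punished from round 4 onward, and the withdrawal is permanent even though the partner returns to truth-telling.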
The Payoff Matrix
Enforcing Honest Choice
• Repeated games provide opportunity for
enforcement
– Choice of telling the truth must be
beneficial
• The utility (payoff) of decisions made:
[Payoff equations shown as images in the original slides; they define
the condition under which truth-telling is the beneficial choice.]
Experimental Setup
• We created an evolutionary game in
which players had the option of selecting
a more advantageous behavior
• Available behaviors included:
– Our punishment method
– Tit-for-Tat
– ‘Subtle’ lie
• Every 200 rounds, behaviors are
reevaluated (each player may select a more
advantageous behavior)
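The evolutionary setup above can be sketched as a toy simulation: players accumulate payoff each round, and every 200 rounds each player adopts the behavior that has earned the most so far. The per-behavior payoff numbers and population sizes are invented; this is not the paper's actual utility function:

```python
# Toy evolutionary game with periodic behavior reevaluation.
# ASSUMPTIONS: payoffs are drawn per round from fixed Gaussians
# (a stand-in for the real repeated-game payoffs), and scores are
# tracked per behavior rather than per individual player.

import random

random.seed(0)

REEVAL = 200   # reevaluation period from the slides
ROUNDS = 1000
# assumed mean per-round payoffs in a mostly truthful population
MEAN_PAYOFF = {"TruthPunish": 3.0, "TitForTat": 2.5, "SubtleLie": 1.0}

players = ["TruthPunish"] * 3 + ["TitForTat"] * 3 + ["SubtleLie"] * 4
scores = {b: 0.0 for b in MEAN_PAYOFF}

for rnd in range(1, ROUNDS + 1):
    for behavior in players:
        scores[behavior] += random.gauss(MEAN_PAYOFF[behavior], 0.5)
    if rnd % REEVAL == 0:
        # each player adopts the most successful behavior so far
        best = max(scores, key=scores.get)
        players = [best for _ in players]

print(set(players))  # the population converges to one behavior
```

Under these assumed payoffs the population converges to TruthPunish, mirroring the homogeneous equilibrium reported in the conclusions.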
Results
Conclusions
• Experiments confirm our behavior’s
success
– The behavioral equilibrium yielded both a
homogeneous choice of TruthPunish and
truth told by all agents
– Robust despite wide fluctuations in
payoff
• Notable Observations
– Truth-telling cliques (of mixed behaviors)
rapidly converged to TruthPunish
– Cliques, however, only succeeded when
the ratio of like-minded helpful agents
was sufficiently high