Internet2 Middleware Update
Topics
 Shibboleth Next Steps
• Core Shib
• Shib-based Components
• Shib-enriched applications
 InCommon and InQueue
 Signet
 Middleware Diagnostics
 Meetings
 NMI
 Opportunities
Federations and PKI
 The rough differences are payload format (SAML vs. X.509) and the typical
validity period of assertions (real-time vs. long-term)
 Federations use enterprise-oriented PKI heavily and make end-user PKI
both more attractive and more tractable.
 The analytic framework (evaluation methodologies for risk in applications
and strength of credentials) developed for PKI is useful for federations.
 The same entity can offer both federation and PKI services
 PKI-oriented infrastructure (e.g. FBCA) can be leveraged in support of
federations
 Federations work because they don’t have to scale to a global level 
Shib Update
 Project formation - Feb 2000 Stone Soup
 OpenSAML release – July 2002
 Shib v1.0 April 2003
 Shib v1.1 July 2003
 V1.2 April 2004
 V1.3 1Q05 – portal services, e-Auth certified, WS-Fed,
etc
 OpenSAML 2.0 – relatively soon
 Refactored Shib 2.0 – 3Q05?
Shib happens
 Core Shib
• V 1.3 1Q 2005
• SAML 2.0 and Shib 2.0
• E-Auth and MS WS-Fed
 Shib-related Components
• System and end-user attribute release policy GUI
• Federation meta-data management
• Next-generation WAYF
 Shib-enriched applications
• Digital repositories
• Sakai, OKI, Chandler, etc. (inter- vs. intra-institutional)
• Collaboration and communication apps – Sympa, video, etc.
• Globus
Shib management
 Modeled after Apache/Mozilla/Winsock
 Technical area – a meritocracy
• IdP = origin
• Service Provider = target
• GUI, Federation, etc. to follow
 Project management – a small coordinating group, focused on core
Shib components and recommendations to the advisory group
 Advisory group – a broad priority and investment
mechanism
Globus – Shib integration
 Initially, an NSF-funded NMI project that will allow GT4 Grid proxies
to act as Shib targets (i.e. do SAML), so that local campus credentials
can get access to Grids
 The approach (though not the code) apparently is backward
compatible with GT2 and GT3
 What really needs to happen in Grid – federation integration?
• Native Shib in Globus and Signet to manage authority
• Schema registry
• Trust coordination among federations
InCommon federation
 Federation operations – Internet2
 Federating software – Shibboleth 1.1 and above
 Federation data schema - eduPerson200210 or later and
eduOrg200210 or later
 Became fully operational mid-September, with several
early entrants shaping the policy issues.
 Precursor federation, InQueue, has been in operation for
about six months and will feed into InCommon;
approximately 150 members
 http://www.incommonfederation.org
InCommon Principles
 Support the R&E community in inter-institutional
collaborations
 InCommon itself operates at a high level of security and
trustworthiness
 InCommon requires its participants to post their relevant
operational procedures on identity management, privacy, etc
 InCommon will be constructive and help its participants move
to higher levels of assurance as applications warrant
 InCommon will work closely with other national and
international federations
InCommon Uses
 Institutional users acquiring content from popular
providers (Napster) and academic providers (Elsevier,
JSTOR, EBSCO, Pro-Quest, etc.)
 Institutions working with outsourced service providers,
e.g. grading services, scheduling systems
 Inter-institutional collaborations, including shared courses
and students, research computing sharing, etc.
InCommon Management
 Operational services by I2
• Member services
• Backroom (CA, WAYF service, etc.)
 Governance
• Executive Committee – Carrie Regenstein, chair (Wisconsin), Jerry
Campbell (USC), Lev Gonick (CWRU), Clair Goldsmith (Texas System),
Mark Luker (EDUCAUSE), Tracy Mitrano (Cornell), Susan Perry (Mellon),
Mike Teetz (OCLC), David Yakimischak (JSTOR).
• Project manager – Renee Frost (Internet2)
 Membership open to .edu and affiliated business partners
(Elsevier, OCLC, Napster, Diebold, etc…)
 Contractual and policy issues were not easy and will evolve
 Initially an LLC; likely to take 501(c)(3) status in the long term
InCommon participants
 Two types of participants:
• Higher ed institutions - .edu-ish requirements
• Service providers – partners sponsored by higher ed institutions,
e.g. content providers, outsourced service providers (WebAssign,
room schedulers, etc.)
 Participants can function in roles of credential/identity
providers and resource/service providers
• Higher ed institutions are primarily credential providers, with the
potential for multiple service providers on campus
• Service providers are primarily offering a limited number of
services, but can serve as credential providers for some of their
employees as well
InCommon pricing
 Goals
• Cost recovery
• Manage federation “stress points”
 Prices
• Application Fee: $700 (largely enterprise I/A, db)
• Yearly Fee
– Higher Ed participant: $1000 per identity management system
– Sponsored participant: $1000
– All participants: 20 resource provider IDs included; additional
resource provider IDs available at $50 each per year, in bundles of 20
Federal government
 Federal E-Authentication has a number of pilots under
way. One of them is now Shib.
 Phase 1 and Phase 2 efforts funded, with deliverables
due over the next six months
• Policy framework comparison submitted Oct 7
• Technical interop demonstrated October 14
• Policy discussions and applications meetings next month
 Potential phase 3 and 4 would include working on a
federal federation and peering with Higher Ed and other
federations.
WS-Fed and Shib
 Verbal agreements to build WS-Fed interoperability
• Contract work commissioned by Microsoft, executed by the Shib core
development team; contracts executed by mid-September, but work
likely not until spring
• WS-Federation + Passive Requestor Profile + Passive Requestor
Interoperability Profile
 Discussions broached, by Microsoft, on building Shib
interoperability into WS-Fed
 Devil's in the details
• Can WS-Fed-based SPs work in InCommon without having to
muck up federation metadata with WS-Fed-specifics?
• All the stuff besides WS-Fed in the WS-* stack
Diagnostics
 Complex, impossible problem that needs to be broken
down into simple, impossible problems
 A result of dumb users coping with fine-grained access
controls riding on top of a non-diagnostic network layer…
 Lots of parts to the solution
• A measurable, manageable network (e2e perf)
• Desktop diagnostic tools (e.g. Surfnet detective)
• Common event records
• ccBay (eddY) to query event records
• Applications that understand diagnostics
• Interrealm policies and tools
Setting some limits
 Focus is on operational problems, not install/config time
errors
 Focus is on diagnostics, not services; e.g. network
security diagnostics focuses on problems that network
security can insert into an end-to-end transaction, or
information that network security can provide to an
end-to-end diagnostic service, rather than the activities of
identifying and diagnosing network security problems
 Trouble ticket systems, knowledgebases out of scope
 Scoping our work is critical to reduce an utterly
impossible problem into a set of smaller, intractable ones.
Identifying the customers
 End-user
• Surfnet detective as a sample desktop
– Accent is on network layer tools right now
– Can be significantly extended with central logging capabilities
 Diagnostician
• Set of tools, including component specific tools and access to
compound tools
• Role-based controls for interdomain operations
• Several types of diagnosticians – general, domain specific
 Developers
• Design diagnostics into their tools
• Output error logs in CER
Policy Dimensions
 Riddled with privacy issues, especially in interdomain instances
• Federated and role-based access controls, with anonymization mechanisms, may help
 Riddled with archival issues
• Massive amounts of data
• Data stored is data subpoenable
 Riddled with lack of standards
• Massive embedded base of log files in ad hoc formats
Operational needs
 “Manual” messaging systems such as NOC lists need to
be entered into discoverable and processable formats
 Registration of active tests, so that their effects can be
included in diagnostics
 Benchmarks are essential for establishing normal
behavior
Classes of problems
 Simple – single-component problems, best addressed by
component diagnostic services – e.g. the SunOne directory
manager console
 Compound – problems that span multiple systems,
creating the need for threaded analysis
 Interdomain – problems that span multiple domains,
creating the need for broad standards, privacy and
security tools, etc.
Classes of diagnostic tools
 Tool specific – e.g. SunOne directory console
 Domain specific – e.g. tracing a failed authorization
among the systems integrated with Signet (an authority
engine) on a campus
 General diagnostic tool –
• e.g. ccBay tracking an enrolled student's inability to complete
an office-hour videoconference with a remote faculty member
• e.g. ccBay using netlogger and syslogs
Core middleware diagnostic model
 Normalizers
 Collectors
 Filters
 Tools
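How these four stages might fit together is sketched below; the function names, record shape, and composition are illustrative assumptions rather than anything specified by the model.

from typing import Callable, Dict, Iterable, List, Optional

# Illustrative only: a normalized event modeled as simple key/value pairs.
Event = Dict[str, str]

def run_pipeline(raw_lines: Iterable[str],
                 normalizer: Callable[[str], Optional[Event]],
                 filters: List[Callable[[Event], bool]],
                 tool: Callable[[Event], None]) -> None:
    """Collector loop: normalize each raw record, apply filters, hand survivors to a tool."""
    for line in raw_lines:
        event = normalizer(line)              # normalizer: raw text -> common event record
        if event is None:
            continue                          # input the normalizer does not recognize
        if all(check(event) for check in filters):  # filters: keep only events of interest
            tool(event)                       # tool: the diagnostic consumer

In this picture the collector is the loop itself, gathering records from normalizers and handing filtered events to diagnostic tools.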
Common Event Record
XML-based event descriptor
Base Event
Nested Sub-Schemas
<org.Internet2.Middleware.ccBay.Event Version="0.1">
  <Record>
    <Tag STRING/>                                    <!-- Optional -->
    <TimeStart TIMESTAMP/>                           <!-- Required -->
    <TimeEnd TIMESTAMP/>                             <!-- Optional -->
    <ServerHostname FQDN/>                           <!-- Optional -->
    <ServerIP IP_ADDRESS/>                           <!-- Required -->
    <CollectorHostname FQDN/>                        <!-- Optional -->
    <CollectorIP IP_ADDRESS/>                        <!-- Required -->
    <CollectorName STRING/>                          <!-- Required -->
    <CollectorVersion FLOAT/>                        <!-- Required -->
    <WarnLevel STRING/>                              <!-- Required -->
    <EventMessage STRING/>                           <!-- Required -->
    <org.Internet2.Middleware.ccBay.ProcessEvent/>   <!-- Optional -->
    <org.Internet2.Middleware.ccBay.NetworkEvent/>   <!-- Optional -->
  </Record>
</org.Internet2.Middleware.ccBay.Event>
Common Event Record Cont.
A combination of four sub-schemas can describe a single event:
Base Event
Nested Sub-Schema Events
• System – Host-wide information, e.g. /var/log/messages.
• Application – Specific service or program logs, e.g. Shibboleth.
• Security – Security data and events, e.g. Snort.
• Network – Network connection data, e.g. NetFlow.
Common Event Record Cont.
 XML Sub-Schema Nesting to describe the four types of
events
[Diagram: illustration of nested XML elements, one panel per event type – System Event: Base Event > ProcessEvent > SystemEvent; Application Event: Base Event > ProcessEvent; Security Event: Base Event > NetworkEvent > SecurityEvent; Network Event: Base Event > NetworkEvent.]
Common Event Record Cont.
Normalization Example
• Raw entry from /var/log/messages:
Jul 29 15:07:27 cmu1 sshd[11157]: Failed password for illegal user Administrator from
::ffff:192.168.2.6 port 4324 ssh2
• Is represented as: Base Event > Process Event > System Event
Common Event Record Cont.
Raw entry normalized as XML:
<org.Internet2.Middleware.ccBay.Event Version="0.1">
  <Record>
    <TimeStart>Jul 29 15:07:27</TimeStart>
    <ServerHostname>cmu1</ServerHostname>
    <ServerIP>192.168.2.2</ServerIP>
    <CollectorHostname>cmu1</CollectorHostname>
    <CollectorIP>192.168.2.2</CollectorIP>
    <CollectorName>ccbay-slogd</CollectorName>
    <CollectorVersion>0.1</CollectorVersion>
    <WarnLevel>info</WarnLevel>
    <EventMessage>Failed password for illegal user Administrator from ::ffff:192.168.2.6 port 4324 ssh2</EventMessage>
    <org.Internet2.Middleware.ccBay.ProcessEvent>
      <Record>
        <ProcessName>sshd</ProcessName>
        <ProcessID>11157</ProcessID>
        <org.Internet2.Middleware.ccBay.SystemEvent>
          <Record>
            <Facility>User</Facility>
          </Record>
        </org.Internet2.Middleware.ccBay.SystemEvent>
      </Record>
    </org.Internet2.Middleware.ccBay.ProcessEvent>
  </Record>
</org.Internet2.Middleware.ccBay.Event>
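As a rough sketch of the normalization step, the Python fragment below turns a raw sshd syslog line into a ccBay-style event like the one above. The regular expression, the normalize() helper name, and the way the collector IP is obtained are illustrative assumptions, not the pilot's actual normalization agent.

import re
import socket
import xml.etree.ElementTree as ET

# Matches lines like: "Jul 29 15:07:27 cmu1 sshd[11157]: Failed password ..."
SYSLOG_RE = re.compile(
    r"^(?P<time>\w{3} +\d+ \d{2}:\d{2}:\d{2}) (?P<host>\S+) "
    r"(?P<proc>[\w/.-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

def normalize(raw_line, collector_name="ccbay-slogd", collector_version="0.1"):
    """Turn one /var/log/messages line into a ccBay-style XML event string (sketch)."""
    match = SYSLOG_RE.match(raw_line)
    if match is None:
        return None  # not a line this simple normalizer understands

    # Assumption for the sketch: the collector runs on the logging host itself.
    host_ip = socket.gethostbyname(socket.gethostname())

    event = ET.Element("org.Internet2.Middleware.ccBay.Event", Version="0.1")
    record = ET.SubElement(event, "Record")
    for tag, value in [
        ("TimeStart", match.group("time")),
        ("ServerHostname", match.group("host")),
        ("ServerIP", host_ip),
        ("CollectorHostname", match.group("host")),
        ("CollectorIP", host_ip),
        ("CollectorName", collector_name),
        ("CollectorVersion", collector_version),
        ("WarnLevel", "info"),
        ("EventMessage", match.group("msg")),
    ]:
        ET.SubElement(record, tag).text = value

    # Nested ProcessEvent / SystemEvent sub-schemas, mirroring the example above.
    process = ET.SubElement(record, "org.Internet2.Middleware.ccBay.ProcessEvent")
    process_record = ET.SubElement(process, "Record")
    ET.SubElement(process_record, "ProcessName").text = match.group("proc")
    ET.SubElement(process_record, "ProcessID").text = match.group("pid")
    system = ET.SubElement(process_record, "org.Internet2.Middleware.ccBay.SystemEvent")
    # Facility is hard-coded for brevity; a real normalizer would derive it from syslog metadata.
    ET.SubElement(ET.SubElement(system, "Record"), "Facility").text = "User"

    return ET.tostring(event, encoding="unicode")

Running normalize() on the raw entry shown earlier yields XML equivalent to the normalized record above, apart from whitespace.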
Pilot Objectives
• Study the normalization strategies of the diagnostic data
• Provide simple operators to manipulate data
• Collect and distribute the data in a highly flexible manner via piping of
diagnostic data streams
• Enable basic forensic applications
• Leverage resources of other initiatives where possible to achieve a
common goal
• Build an ultra-modular architecture that minimizes the impact of its
own evolution
Pilot Design
• Highly modular architecture utilizing standard building blocks.
• Focus on a simple and extensible lightweight design.
• Utilize existing libraries, utilities, modules and standards instead of existing systems.
– We have looked at many, but there is no existing software that does all we need.
• Use Python as a full-featured development language.
– Great language for development of pilot and beyond.
– Widely included with Linux distributions.
– Works well in the Windows environment.
Pilot Architecture
• Normalization agents to convert raw event and logging data to ccBay XML documents.
– Each event is a file; a more efficient approach is needed beyond the pilot.
– For the pilot, normalization agents for Linux syslogd/klogd data, Windows system events, Snort alerts, NetFlow streams and Windows WMI application events.
• Forwarding agents move XML documents between storage agents (see the sketch after this list).
– Transfer of XML files to storage agent using SCP.
– Simple design for the pilot, not a production approach.
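A minimal sketch of what such a forwarding agent could look like, assuming one XML file per event in a local spool directory; the paths, host name, and use of the system scp binary are placeholders for illustration, not values from the pilot.

import subprocess
from pathlib import Path

# Illustrative placeholders, not pilot configuration.
SPOOL_DIR = Path("/var/spool/ccbay/outgoing")
STORAGE_TARGET = "ccbay@storage.example.edu:/var/spool/ccbay/incoming/"

def forward_events():
    """Push each normalized event file to the storage agent via scp, then delete the local copy."""
    for event_file in sorted(SPOOL_DIR.glob("*.xml")):
        result = subprocess.run(["scp", "-q", str(event_file), STORAGE_TARGET])
        if result.returncode == 0:
            event_file.unlink()  # forwarded successfully
        # on failure the file stays in the spool and is retried on the next run

if __name__ == "__main__":
    forward_events()

One file per scp call keeps the pilot agent simple at the cost of throughput, which matches the "not a production approach" caveat above.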
Meetings
 IdM CAMP in San Diego
 CAMP Med – early Feb in Tempe, Arizona
 Next TF-EMC2 meeting – where and when?
 Spring I2 Member Meeting – May 2-5 in Washington, D.C.
 Advanced CAMP sometime late spring
 Liberty Alliance – end Jan in Palo Alto, March in Dublin
NMI releases
 Release 6 – end of December 2004
 Surprisingly good (compared to anything else) PR
mechanism
 Open to all middleware
• Permis
• Spocp
• A-Select
 Is there a map of the known world emerging?
Opportunities
 Digital content initiative
 Real-time communication (not collaboration)
 Open source Sturm und Drang
 Salsa and its work groups
• Net-auth
• Netc
– IPv6 security tools
– Firewalls and multi-homing
– Etc
• Avoiding the future hell