High Productivity Computing
Large-scale Knowledge Discovery: Co-evolving Algorithms and Mechanisms
Steve Reinhardt, Principal Architect, Microsoft
Prof. John Gilbert, UCSB
Dr. Viral Shah, UCSB
Context for Knowledge Discovery
[Figure from Debbie Gracio and Ian Gorton, PNNL Data Intensive Computing Initiative]
Knowledge Discovery (KD) Definition
• Data-intensive computing: when the acquisition and movement of input data is a primary limitation on feasibility or performance
• Simple data mining: searching for exceptional values on elemental measures (e.g., heat, #transactions)
• Knowledge discovery: searching for exceptional values on associative/social measures (e.g., highest betweenness, belonging to the greatest number of valuable reactions)
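
To make the distinction concrete, here is a minimal sketch that queries the same small graph both ways: once for an exceptional elemental value and once for an exceptional associative value (betweenness). The use of networkx and the "transactions" attribute are illustrative assumptions, not part of the original slides.

# Minimal illustration (not from the slides): the same graph queried two ways.
# "Simple data mining" looks for exceptional elemental values; "knowledge
# discovery" looks for exceptional associative values such as betweenness.
import networkx as nx

G = nx.karate_club_graph()                                # small stand-in social network
nx.set_node_attributes(G, {n: n * 1.5 for n in G}, "transactions")

# Simple data mining: exceptional value on an elemental measure.
busiest = max(G.nodes, key=lambda n: G.nodes[n]["transactions"])

# Knowledge discovery: exceptional value on an associative measure.
bc = nx.betweenness_centrality(G)
most_between = max(bc, key=bc.get)

print(busiest, most_between)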
Today’s Biggest Obstacle in the KD Field
• Lack of fast feedback between domain experts and infrastructure/tool developers about good, usable, scalable KD software platforms
• Need to accelerate the rate of learning about both good KD algorithms and good KD infrastructure
• Need to get good (not perfect) scalable platforms into use, to co-evolve towards the best approaches and algorithms

Domain experts want:
• Good infrastructure that works
• … and scales greatly and runs fast
• Flexibility to develop/tweak algorithms to suit their needs
• Algorithms with a strong math basis
But they don’t know:
• The best approach or algorithms

Infrastructure developers want:
• A clear audience for what they develop
• An architecture that copes with client, cluster, cloud, GPU, and huge data
But they don’t know:
• The best approach
Candidate Approaches
Ad hoc
“Visitor”
Sparse-matrix-based
Description
Build each algorithm from
ground up
Tailor actions at key points in
graph traversal
Cast graphs as sparse
matrices and use sparse
linear algebraic operations
Example
Metis
Boost Graph Library, Pregel
KDT
Pros
•
•
•
•
•
Fast on single node, since
highly tailored
Fast, since tailored
Extensible to out-ofmemory formats (Pregel)
•
•
Cons
•
•
•
•
Notes
•
Unclear math basis
Devpt is time-consuming,
since no common kernels
Scaling is hard
Poor use of local memory
hierarchy
•
•
Not at domain-expert
level
•
•
Proven math basis
Built-in tolerance for
high cluster latency
Good use of local
memory hierarchy
Extensible to out-ofmemory formats
Unclear math basis
Each alg may need to
cope with high cluster
latency
Poor use of local memory
hierarchy
•
Mind-bender without
good graph API on top
Not at domain-expert
level
•
Graph layer at domainexpert level
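
As a sketch of why the sparse-matrix-based column works the way it does, the following fragment expresses breadth-first search as repeated sparse matrix-vector products. The use of scipy.sparse is an assumption for illustration; KDT itself layers on dedicated parallel kernels rather than scipy.

# Sketch (scipy.sparse stand-in, not the KDT/Combinatorial BLAS kernels):
# breadth-first search expressed as repeated sparse matrix-vector products
# over the graph's adjacency matrix.
import numpy as np
import scipy.sparse as sp

def bfs_levels(A, source):
    """A: n x n adjacency matrix (CSR), A[i, j] != 0 for edge i -> j."""
    n = A.shape[0]
    levels = np.full(n, -1)
    frontier = np.zeros(n)
    frontier[source] = 1.0
    level = 0
    while frontier.any():
        levels[frontier > 0] = level
        # One SpMV step: vertices reachable from the frontier, minus visited ones.
        reached = A.T @ frontier
        frontier = np.where(levels == -1, reached, 0.0)
        level += 1
    return levels

edges = np.array([[0, 1], [1, 2], [1, 3], [3, 4]])
A = sp.csr_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])), shape=(5, 5))
print(bfs_levels(A, 0))   # [0 1 2 2 3]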
KDT Layers: Enable overloading with various technologies
[Layered-architecture diagram]
• Graph-algorithm layer (kdt.): Community Detection, Elementary Mode Analysis, Betweenness Centrality, All-Pairs Shortest Path, Barycentric Clustering
• Parallel/distributed operations (constructors, SpGEMM, SpMV, SpAdd, SpGEMM on semirings, SpRef/SpAsgn, I/O), overloadable with technologies such as Star-P, the Cray XMT, and DryadLINQ-based back ends (in-memory or out-of-memory)
• Local operations (e.g., scipy, GPU implementations of local SpGEMM and related kernels)
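
The "SpGEMM on semirings" box is the hook that lets one kernel serve many algorithms. As an illustrative sketch only (a dense numpy stand-in, not the KDT or Combinatorial BLAS kernels), replacing (+, x) with (min, +) in matrix multiplication turns the same GEMM pattern into an all-pairs shortest-path solver.

# Sketch: matrix "multiplication" over the (min, +) semiring; repeated
# squaring of the edge-weight matrix yields all-pairs shortest paths.
import numpy as np

INF = np.inf

def min_plus(A, B):
    """Matrix product over the (min, +) semiring: C[i, j] = min_l (A[i, l] + B[l, j])."""
    n, m = A.shape[0], B.shape[1]
    C = np.full((n, m), INF)
    for i in range(n):
        C[i] = np.min(A[i][:, None] + B, axis=0)
    return C

def apsp(D):
    """All-pairs shortest paths by repeated min-plus squaring of the
    edge-weight matrix D (D[i, j] = weight of edge i -> j, inf if absent)."""
    D = D.copy()
    n = D.shape[0]
    np.fill_diagonal(D, 0.0)
    reach = 1                      # current max path length represented in D
    while reach < n - 1:
        D = min_plus(D, D)
        reach *= 2
    return D

D = np.array([[INF, 3.0, INF],
              [INF, INF, 2.0],
              [7.0, INF, INF]])
print(apsp(D))   # [[0 3 5], [9 0 2], [7 10 0]]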
DryadLINQ: Query + Plan + Parallel Execution
• Dryad
  – Distributed-memory, coarse-grain run-time
  – Generalized MapReduce
  – Uses computational vertices and communication channels to form a dataflow execution graph
[Dryad architecture diagram: job manager and scheduler (control plane), name server (NS), process daemons (PD) running vertices (V) on cluster nodes; data plane over files, TCP, and FIFOs]
• LINQ (Language INtegrated Query)
  – A query-style language interface to Dryad
  – Typical relational operators (e.g., Select, Join, GroupBy)
• Scaling for a histogram example
  – 10.2 TB of input data on 1,800 cluster nodes; 43,171 execution-graph vertices spawning 11,072 processes; 33 GB of output data produced in 11.5 minutes of execution
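
DryadLINQ programs are written in C# using LINQ operators; as a rough single-process analogue in Python (an illustration, not the DryadLINQ API), the histogram query is logically a group-by followed by a count, which Dryad then compiles into a distributed dataflow graph for execution across the cluster.

# Rough single-process analogue (Python; DryadLINQ itself is C#/LINQ) of the
# histogram query: group records by key, count each group. DryadLINQ compiles
# the equivalent Select/GroupBy query into a Dryad dataflow graph.
from collections import Counter

def histogram(records, key):
    """Count occurrences of key(record): the logical query behind the
    10.2 TB example on the slide, minus the distributed execution plan."""
    return Counter(key(r) for r in records)

lines = ["GET /a", "GET /b", "GET /a", "POST /a"]
print(histogram(lines, key=lambda line: line.split()[1]))
# Counter({'/a': 3, '/b': 1})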
Star-P Bridges Scientists to HPCs
Star-P enables domain experts to use parallel, big-memory systems via productivity languages (e.g., the M language of MATLAB).
Knowledge-discovery scaling with Star-P:
• Kernels to 55B edges between 5B vertices, on 128 cores (consuming 4 TB of memory)
• Compact applications to 1B edges on 256 cores
Next Steps
• Get prototypes available for early experience and feedback
  – in-memory and out-of-memory targets of KDT
  – with the graph layer
  – likely exposed via a Python library interface
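
As a purely hypothetical sketch of what a domain-expert-level Python graph layer over the sparse kernels could look like (none of these class or method names are confirmed KDT interfaces), a thin wrapper class might delegate graph operations to sparse-matrix primitives:

# Hypothetical sketch only: illustrates a Python-level graph layer delegating
# to sparse-matrix kernels underneath. Names are illustrative, not the
# eventual KDT interface.
import numpy as np
import scipy.sparse as sp

class DiGraph:
    def __init__(self, edges, n):
        src, dst = zip(*edges)
        self._A = sp.csr_matrix((np.ones(len(edges)), (src, dst)), shape=(n, n))

    def out_degree(self):
        # Delegates to a sparse reduction rather than per-vertex loops.
        return np.asarray(self._A.sum(axis=1)).ravel()

    def bfs_levels(self, source):
        # The same SpMV-style traversal as the earlier sketch, hidden
        # behind a graph-level method.
        n = self._A.shape[0]
        levels = np.full(n, -1)
        frontier = np.zeros(n)
        frontier[source] = 1.0
        level = 0
        while frontier.any():
            levels[frontier > 0] = level
            frontier = np.where(levels == -1, self._A.T @ frontier, 0.0)
            level += 1
        return levels

G = DiGraph([(0, 1), (1, 2), (1, 3), (3, 4)], n=5)
print(G.out_degree(), G.bfs_levels(0))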
© 2010 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista, Windows 7, and other product names are or may be registered trademarks
and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft
Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment
on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation.
MICROSOFT MAKES NO WARRANTIES, EXPRESS,
IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.