Where The Rubber Meets the Sky
Giving Access to Science Data
Jim Gray
Microsoft Research
[email protected]
Http://research.Microsoft.com/~Gray
Alex Szalay
Johns Hopkins University
[email protected]
Talk at
U. Waterloo & U. Penn
Fall 2004
• Promised Abstract:
On-Line Science:
The World-Wide Telescope as a Prototype for the New Computational Science.
Computational science has historically meant simulation; but, there is an increasing role for
analysis and mining of online scientific data. As a case in point, half of the world's astronomy
data is public. The astronomy community is putting all that data on the Internet so that the
Internet becomes the world's best telescope: it has the whole sky, in many bands, and in detail
as good as the best 2-year-old telescopes. It is usable by all astronomers everywhere. This is
the vision of the Virtual Observatory -- also called the World Wide Telescope (WWT). As one
step along that path I have been working with the Sloan Digital Sky Survey and CalTech to
federate their data in web services on the Internet, and to make it easy to ask questions of the
database (see http://skyserver.sdss.org and http://skyquery.net). This talk explains the rationale
for the WWT, discusses how we designed the database, and talks about some data mining
tasks. It also describes computer science challenges of publishing, federating, and mining
scientific data, and argues that XML web services are key to federating diverse data sources.
Actual Abstract:
I have been working with some astronomers
for the last 6 years
trying to apply DB technology to science problems.
These are some lessons I learned.
Outline
• New Science
– X-Info for all fields X
– WWT as an example
– Big Picture
– Puzzle
– Hitting the wall
– Needle in haystack
– Mohamed and the mountain
• Working cross disciplines
• Data Demographics and Data Handling
• Curation
New Science Paradigms
• Thousand years ago:
science was empirical
describing natural phenomena
• Last few hundred years:
theoretical branch
using models, generalizations
$\left(\frac{\dot a}{a}\right)^2 = \frac{4\pi G\rho}{3} - \frac{K c^2}{a^2}$
• Last few decades:
a computational branch
simulating complex phenomena
• Today:
data exploration (eScience)
unify theory, experiment, and simulation
using data management and statistics
– Data captured by instruments
Or generated by simulator
– Processed by software
– Scientist analyzes database / files
The Big Picture
[Diagram: Experiments & Instruments, Other Archives, Literature, and
Simulations all feed facts into a central database; users pose questions
to it and get back answers.]
The Big Problems
• Data ingest
• Managing a petabyte
• Common schema
• How to organize it?
• How to reorganize it?
• How to coexist with others?
• Data query and visualization tools
• Support/training
• Performance
  – Execute queries in a minute
  – Batch (big) query scheduling
Experiment Budgets ¼…½ Software
Software for
• Instrument scheduling
• Instrument control
• Data gathering
• Data reduction
• Database
• Analysis
• Visualization
Millions of lines of code,
repeated for experiment after experiment,
with not much sharing or learning.
Let's work to change this:
identify generic tools
• Workflow schedulers
• Databases and libraries
• Analysis packages
• Visualizers
• …
What X-info Needs from us (CS)
[Diagram, not drawn to scale: scientists and data miners pose questions;
data mining algorithms and question-and-answer visualization tools sit on
top of a database that stores the science data and executes queries;
"plumbers" build that database layer.]
Data Access Hitting a Wall
Current science practice is based on data download (FTP/GREP);
it will not scale to the datasets of tomorrow.
• You can GREP 1 MB in a second
• You can GREP 1 GB in a minute
• You can GREP 1 TB in 2 days
• You can GREP 1 PB in 3 years
• You can FTP 1 MB in 1 second
• You can FTP 1 GB in a minute (~$1)
• … 2 days and $1K
• … 3 years and $1M
• Oh, and 1 PB is ~5,000 disks
• At some point you need
  – indices to limit search
  – parallel data search and analysis
• This is where databases can help
Information Avalanche
• In science, industry, government, …
  – better observational instruments and
  – better simulations
  are producing a data avalanche
  [Image courtesy C. Meneveau & A. Szalay @ JHU]
• Examples
  – BaBar: grows 1 TB/day
    (2/3 simulation information, 1/3 observational information)
  – CERN: LHC will generate 1 GB/s, ~10 PB/y
  – VLBA (NRAO) generates 1 GB/s today
  – Pixar: 100 TB/movie
  [Images: BaBar at Stanford; P&E gene sequencer from
   http://www.genome.uci.edu/; Space Telescope]
• New emphasis on informatics:
  – capturing, organizing, summarizing, analyzing, visualizing
Next-Generation Data Analysis
• Looking for
– Needles in haystacks – the Higgs particle
– Haystacks: dark matter, dark energy,
turbulence, ecosystem dynamics
• Needles are easier than haystacks
• Global statistics have poor scaling
– Correlation functions are N², likelihood techniques N³
• As data and computers grow at Moore's Law,
  we can only keep up with N log N
• A way out?
– Relax optimal notion (data is fuzzy, answers are approximate)
– Don’t assume infinite computational resources or memory
• Requires combination of statistics & computer science
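To see why, a hedged back-of-the-envelope (the yearly doubling of data and of compute is from these slides; the algebra is added here):

\[
\frac{f(2N)}{f(N)} =
\begin{cases}
2 & f(N) = N\\
2\left(1 + \tfrac{1}{\log_2 N}\right) & f(N) = N \log_2 N\\
4 & f(N) = N^2\\
8 & f(N) = N^3
\end{cases}
\]

If the compute budget also doubles each year, only the first two keep pace: an N² analysis falls behind by a factor of 2 every year, and an N³ analysis by a factor of 4.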
Smart Data: Unifying DB and Analysis
• There is too much data to move around;
  do the data manipulations at the database
  – Build custom procedures and functions into the DB
  – Unify data access & analysis
  ("Move Mohamed to the mountain, not the mountain to Mohamed.")
– Examples
• Statistical sampling and analysis
• Temporal and spatial indexing
• Pixel processing
• Automatic parallelism
• Auto (re)organize
• Scalable to Petabyte datasets
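As a flavor of building functions into the DB, a minimal hedged sketch in the SkyServer's T-SQL dialect (the function name and the color threshold are invented for illustration; the real SkyServer library is far richer):

  -- Hypothetical scalar function: flag objects passing a simple color cut
  CREATE FUNCTION dbo.fIsBlue (@u float, @g float)
  RETURNS int
  AS
  BEGIN
      -- u-g color cut; the 0.4 threshold is illustrative, not a survey value
      RETURN CASE WHEN @u - @g < 0.4 THEN 1 ELSE 0 END
  END

Once installed, the cut runs next to the data, e.g. SELECT count(*) FROM stars WHERE dbo.fIsBlue(u, g) = 1, and nothing leaves the server except the answer.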
Outline
• New Science
• Working cross disciplines
– How to help?
– 20 questions
– WWT example
– Alternative: CS Process Models
• Data Demographics and Data Handling
• Curation
How to Help?
• Can’t learn the discipline before you start
(that takes 4 years)
• Can’t go native – you are a CS person
not a bio,… person
• Have to learn how to communicate
Have to learn the language
• Have to form a working relationship with
domain expert(s)
• Have to find problems that leverage your skills
Working Cross-Culture
A Way to Engage With Domain Scientists
• Communicate in terms of scenarios
• Work on a problem that gives 100x benefit
– Weeks/task vs hours/task
• Solve 20% of the problem
– The other 80% will take decades
• Prototype
• Go from working-to-working; always have
– Something to show
– Clear next steps
– Clear goal
• Avoid death-by-collaboration-meetings.
Working Cross-Culture -- 20 Questions:
A Way to Engage With Domain Scientists
• Astronomers proposed 20 questions
• Typical of things they want to do
• Each would require a week or more in old way
(programming in tcl / C++/ FTP)
• Goal: make it easy to answer the questions
• This goal motivates DB and tools design
The Virtual Observatory
• Premise: most data is (or could be) online
• The Internet is the world’s best telescope:
– It has data on every part of the sky
– In every measured spectral band:
optical, x-ray, radio..
– As deep as the best instruments (2 years ago).
– It is up when you are up
– The “seeing” is always great
– It’s a smart telescope:
links objects and data to literature
• Software is the capital expense
– Share, standardize, reuse..
Why Is Astronomy Special?
• Almost all literature online and public
  – ADS: http://adswww.harvard.edu/
  – CDS: http://cdsweb.u-strasbg.fr/
• Data has no commercial value
  – No privacy concerns; freely share results with others
  – Great for experimenting with algorithms
• It is real and well documented
  – High-dimensional
  – Spatial, temporal
  – (with confidence intervals)
• Diverse and distributed
  – Many different instruments from
    many different places and
    many different times
• The community wants to share the data
  World Wide Telescope: federate all data.
• There is a lot of it (soon petabytes)
[Side images: the same sky in many bands: ROSAT ~keV, DSS optical,
2MASS 2μm, IRAS 25μm, IRAS 100μm, WENSS 92cm, NVSS 20cm, GB 6cm]
The 20 Queries
Q1: Find all galaxies without unsaturated pixels within 1' of a given
point (ra=75.327, dec=21.023).
Q2: Find all galaxies with blue surface brightness between 23 and 25 mag
per square arcsecond, -10<supergalactic latitude (sgb)<10, and
declination less than zero.
Q3: Find all galaxies brighter than magnitude 22, where the local
extinction is >0.75.
Q4: Find galaxies with an isophotal surface brightness (SB) larger than
24 in the red band, with an ellipticity>0.5, and with the major axis of
the ellipse having a declination between 30" and 60" arcseconds.
Q5: Find all galaxies with a de Vaucouleurs profile (r¼ falloff of
intensity on disk) and photometric colors consistent with an elliptical
galaxy.
Q6: Find galaxies that are blended with a star; output the deblended
galaxy magnitudes.
Q7: Provide a list of star-like objects that are 1% rare.
Q8: Find all objects with unclassified spectra.
Q9: Find quasars with a line width >2000 km/s and 2.5<redshift<2.7.
Q10: Find galaxies with spectra that have an equivalent width in
Hα >40 Å (Hα is the main hydrogen spectral line).
Q11: Find all elliptical galaxies with spectra that have an anomalous
emission line.
Q12: Create a grided count of galaxies with u-g>1 and r<21.5 over
60<declination<70 and 200<right ascension<210, on a grid of 2', and
create a map of masks over the same grid.
Q13: Create a count of galaxies for each of the HTM triangles which
satisfy a certain color cut, like 0.7u-0.5g-0.2i<1.25 && r<21.75;
output it in a form adequate for visualization.
Q14: Find stars with multiple measurements that have magnitude
variations >0.1. Scan for stars that have a secondary object (observed
at a different time) and compare their magnitudes.
Q15: Provide a list of moving objects consistent with an asteroid.
Q16: Find all objects similar to the colors of a quasar at
5.5<redshift<6.5.
Q17: Find binary stars where at least one of them has the colors of a
white dwarf.
Q18: Find all objects within 30 arcseconds of one another that have very
similar colors: that is, where the colors u-g, g-r, r-i differ by less
than 0.05 mag.
Q19: Find quasars with a broad absorption line in their spectra and at
least one galaxy within 10 arcseconds. Return both the quasars and the
galaxies.
Q20: For each galaxy in the BCG data set (brightest cluster galaxy), in
160<right ascension<170 and -25<declination<35, give a count of galaxies
within 30" of it that have a photoz within 0.05 of that galaxy.
Also some good queries at:
http://www.sdss.jhu.edu/ScienceArchive/sxqt/sxQT/Example_Queries.html
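To give these a concrete shape, here is a hedged sketch of Q1 in SkyServer SQL. fGetNearbyObjEq (ra, dec, radius in arcminutes) and fPhotoFlags are functions in the public SkyServer library, but treat the details (the Galaxy view, the flag name) as illustrative rather than exact:

  -- Galaxies within 1 arcminute of (75.327, 21.023), excluding saturated detections
  SELECT g.objID, g.ra, g.dec
  FROM Galaxy AS g
  JOIN dbo.fGetNearbyObjEq(75.327, 21.023, 1.0) AS n
    ON g.objID = n.objID
  WHERE (g.flags & dbo.fPhotoFlags('SATURATED')) = 0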
Two kinds of SDSS data in an SQL DB
(objects and images all in the DB)
• 300M photo objects with ~400 attributes each: 10B rows overall
• 400K spectra with ~30 lines/spectrum: 100M rows
An easy one: Q7:
Provide a list of star-like objects that are 1% rare.
• Found 14,681 buckets; the first 140 buckets hold 99% of the stars
• Time: 104 seconds
• Disk bound; reads 3 disks at 68 MB/s

  SELECT cast((u-g) as int) AS ug,
         cast((g-r) as int) AS gr,
         cast((r-i) as int) AS ri,
         cast((i-z) as int) AS iz,
         count(*) AS Population
  FROM stars
  GROUP BY cast((u-g) as int), cast((g-r) as int),
           cast((r-i) as int), cast((i-z) as int)
  ORDER BY count(*)
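The cast to int buckets the 4-dimensional color space into unit cells; since the first 140 buckets hold 99% of the stars, any star falling outside them is, by construction, in the rarest 1%.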
Then What?
• 1999: The 20 queries were a way to engage
  – Needed a spatial data library
  – Needed a DB design
• 2000: Built a website to publish the data
• 2001: Data loading (workflow scheduler)
• 2002: Pixel web service, which evolved into…
• 2003: …the SkyQuery federation, which evolved into…
• 2004: …the present focus: the spatial data library,
  and conversion to an object-relational DB (put the analysis in the DB)
Alternate Model
• Many sciences are becoming
information sciences
• Modeling systems
needs new and better languages.
• CS modeling tools can help
– Bio, Eco, Linguistic, …
• This is the process/program centric view
rather than my info-centric view.
Outline
• New Science
• Working cross disciplines
• Data Demographics and Data Handling
  – Exponential growth
  – Data Lifecycle
  – Versions
  – Data inflation
  – Year 5
  – Overprovision by 6x
  – Data Loading
  – Regression Tests
  – Statistical subset
• Curation
Q: Where will the Data Come From?
A: Sensor Applications
• Earth observation
  – 15 PB by 2007
• Medical images & information + health monitoring
  – Potential 1 GB/patient/y → 1 EB/y
• Video monitoring
  – ~1E8 video cameras @ 1E5 Bps → 10 TB/s → 100 EB/y, filtered???
• Airplane engines
  – 1 GB sensor data/flight
  – 100,000 engine hours/day
  – 30 PB/y
• Smart dust: ?? EB/y
  http://robotics.eecs.berkeley.edu/~pister/SmartDust/
  http://www-bsac.eecs.berkeley.edu/~shollar/macro_motes/macromotes.html
Instruments: CERN – LHC
Petabytes per year; looking for the Higgs particle
• Sensors: ~1 GB/s (~20 PB/y)
• Events: 100 MB/s
• Filtered: 10 MB/s
• Reduced: 1 MB/s
Data pyramid (CERN Tier 0):
100 GB : 1 TB : 100 TB : 1 PB : 10 PB
Like all sciences,
Astronomy Faces an Information Avalanche
• Astronomers have a few hundred TB now
  – 1 pixel (byte) / sq arcsecond ~ 4 TB
  – Multi-spectral, temporal, … → 1 PB
• They mine it looking for
  – new (kinds of) objects, or more of interesting ones (quasars)
  – density variations in 400-D space
  – correlations in 400-D space
• Data doubles every year
• Data is public after 1 year
• So, 50% of the data is public
• Same access for everyone
[Chart: collecting area of astronomical detectors, 1970-2000, log scale
0.1-1000; CCDs overtake glass (photographic plates), doubling roughly
yearly.]
Moore's Law in Proteomics
Courtesy of Peter Berndt, Roche Center for Medical Genomics (RCMG):
the number of mass spectra acquired for proteomics has doubled every year
since the first mass spectrometer was deployed.
[Chart: "Proteomics MS Data Generation", count of spectra per year,
1998-2003, log scale from 20,000 to 500,000; exponential fit with R²=0.96.]
Data Lifecycle
• Raw data → primary data → derived data
• Data has bugs:
  – Instrument bugs
  – Pipeline bugs
• Data comes in versions
  – Later versions fix known bugs
  – Just like software (indeed, data is software)
• Can't "un-publish" bad data.
[Diagram: an instrument or simulator produces Level 0 (raw) data;
a pipeline turns it into Level 1 (calibrated) data; a second pipeline,
combining other data, produces Level 2 (derived) data.]
Data Inflation – Data Pyramid
• Level 1A grows X TB/year,
  ~0.4X TB/y compressed ("level 1A" in NASA terms)
• Derived (Level 2) data products are each ~10x smaller,
  but there are many of them: L2 ≈ L1
• Publish a new edition each year
  – Fixes bugs in the data
  – Must preserve old editions
  – Creates a data pyramid
• Store each edition: 1 + 2 + 3 + 4 + … + N ≈ N²/2 bytes
• Net data inflation: L2 ≥ L1
[Diagram: 4 editions (E1..E4) of level 1A source data and 4 editions of
4 level 2 derived data products over time. Each derived product is small,
but they are numerous; this proliferation, combined with the data pyramid,
means level 2 data more than doubles the total storage volume.]
The Year 5 Problem
• Data arrives at R bytes/year
• New storage & processing:
  naive demand is R units in year N
• Data inflation means
  – you need to buy N·R units in year N (~N²R cumulative)
• Depreciate over 3 years
  – After year 3, you also re-buy year N-3's units:
    yearly purchase ≈ N·R + (N-3)·R
• Moore's law: ~60%/year price decline
• Capital expense peaks at year 5
• See the 6x Over-Power slide next
[Charts: yearly demand (naive, inflated, and depreciated inflated) in
units of R over years 0-10, rising toward ~160R; and yearly/marginal
capital cost over years 0-10, peaking near year 5 and then declining.]
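A hedged reconstruction of the arithmetic behind those curves (R and the 60%/year price decline are the slide's numbers; the bookkeeping is one reading of it):

\[
\text{units}(N) \approx \underbrace{N R}_{\text{year-}N\text{ edition}} + \underbrace{(N-3) R}_{\text{replace year-}(N-3)\text{ gear}},
\qquad
\text{cost}(N) \approx \text{units}(N)\cdot c_0\,(1 - 0.6)^{N}.
\]

Units bought grow linearly while unit price falls geometrically, so the dollar cost rises at first and then declines, peaking at year 5 in the slide's chart.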
6x Over-Power Ratio
• If you think you need X raw capacity,
  then you probably need 6X
  – Reprocessing
  – Backup copies
  – Versions
  – …
• Hardware is cheap;
  your time is precious.
Example (SDSS databases): PubDB 3.6 TB; DR2C, DR2M, DR2P at 1.8 TB each;
DR3C, DR3M, DR3P at 2.4 TB each. Together that is about 16 TB standing
behind one ~2.4 TB data release: roughly the 6x.
Data Loading
• Data from outside
  – is full of bugs
  – is not in your format
• Advice
  – Get it in a "universal format" (e.g. Unicode CSV)
  – Create a blood-brain barrier:
    quarantine it in a "load database"
  – Scrub the data
    • Cross-check everything you can
    • Check data statistics for sanity
    • Reject or repair bad data
    • Generate detailed bug reports
      (needed to send rejections upstream)
  – Expect to reload many times:
    automate everything!
[Diagram: the load/publish workflow.
LOAD: Export (EXP) → Check CSV (CHK) → Build Task DBs (BLD) →
Build SQL Schema (SQL) → Validate (VAL) → Backup (BCK) → Detach (DTC);
PUBLISH: Publish (PUB) → Cleanup (CLN); FINISH (FIN).]
The Validate step runs a battery of tests:
• Test uniqueness of primary keys: test the unique key in each table
• Test foreign keys: test the consistency of keys that link tables
• Test cardinalities: test the consistency of counts of various quantities
• Test HTM IDs: test the Hierarchical Triangular Mesh IDs used for
  spatial indexing
• Test parent-child consistency: ensure that all parents and children
  are linked
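As a flavor of that validation battery, a hedged sketch of the primary-key uniqueness test (PhotoObj and objID are SkyServer names; a real loader would generate one such probe per table):

  -- Any rows returned are duplicated primary keys: reject or repair the load
  SELECT objID, count(*) AS dups
  FROM PhotoObj
  GROUP BY objID
  HAVING count(*) > 1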
Performance Prediction & Regression
• Database grows exponentially
• Set up response-time requirements
– For load
– For access
• Define a workload to measure each
• Run it regularly to detect anomalies
• SDSS uses
– one week to reload
– 20 queries with responses of 10 sec to 10 min
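A minimal hedged sketch of one such regression probe in T-SQL (the benchmark query and table are placeholders; the point is to time a fixed workload against the ever-growing database and flag anomalies):

  -- Time one representative workload query
  DECLARE @t datetime
  SET @t = GETDATE()
  SELECT count(*) FROM PhotoObj WHERE r < 21.5   -- fixed benchmark query
  SELECT DATEDIFF(ms, @t, GETDATE()) AS elapsed_ms  -- compare to the 10 sec - 10 min budget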
Data Subsets
For Science and Development
• Offer 1 GB, 10 GB, …, full subsets
• Wonderful tool for you
  – Debug algorithms
• Good tool for scientists
  – Experiment on a subset
  – Not for needle-in-haystack searches, but good for global stats
• Challenge: how to make statistically valid subsets?
  – Seems domain specific
  – Seems problem specific
  – But there must be some general concepts.
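One crude cut at such a subset, as a hedged sketch (the modulo trick assumes objIDs land roughly uniformly, which is precisely the kind of domain-specific assumption the slide warns about; PhotoObjSample is an invented name):

  -- Keep ~1% of objects by hashing on the primary key
  SELECT *
  INTO PhotoObjSample
  FROM PhotoObj
  WHERE objID % 100 = 0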
Outline
• New Science
• Working cross disciplines
• Data Demographics and Data Handling
• Curation
  – Problem statement
  – Economics
  – Astro as a case in point
Problem Statement
• Once published,
  scientific data needs to be available forever,
  so that the science can be reproduced/extended.
• What does that mean? Data can be characterized as:
  – Primary data: could not be reproduced (NASA "level 0")
  – Derived data: could be derived from primary data
  – Meta-data: how the data was collected/derived.
    Meta-data is primary and must be preserved. It includes design docs,
    software, email, pubs, personal notes, teleconferences, …
The Core Problem: No Economic Model
• The archive user is not yet born.
  How can he pay you to curate the data?
• The scientist gathered the data for his own purposes.
  Why should he pay (invest time) for your needs?
• Answer to both: that's the scientific method.
• Curating data
  (documenting the design, the acquisition, and the processing)
  is difficult, and there is little reward for doing it:
  results are rewarded, not the process of getting them.
• Storage/archiving is NOT the problem (it's almost free).
• Curating/publishing is expensive:
  MAKE IT EASIER!!! (lower the cost)
Publishing Data
Roles       Traditional   Emerging
Authors     Scientists    Collaborations
Publishers  Journals      Project web site
Curators    Libraries     Data+Doc Archives
Archives    Archives      Digital Archives
Consumers   Scientists    Scientists
Changing Roles
• Exponential growth:
  – Projects last at least 3-5 years
  – Project data is online during the project lifetime
  – Data is sent to the central archive only at the end of the project
  – So at any instant, only ~1/8 of the data is in central archives
    (with yearly doubling, data more than 3 years old is 1/8 of the total)
• New project responsibilities
  – Becoming publishers and curators
  – Larger fraction of budget spent on software
• Standards are needed
  – Easier data interchange, fewer tools
• Templates are needed
  – Much development is duplicated, wasted
Schema (aka metadata)
• Everyone starts with the same schema: <stuff/>
  Then they start arguing about semantics.
• Virtual Observatory: http://www.ivoa.net/
• Metadata based on Dublin Core:
  http://www.ivoa.net/Documents/latest/RM.html
• Universal Content Descriptors (UCD):
  http://vizier.u-strasbg.fr/doc/UCD.htx
  – Captures quantitative concepts and their units
  – Reduced from ~100,000 tables in the literature to ~1,000 terms
• VOTable: a schema for answers to questions
  http://www.us-vo.org/VOTable/
• Common queries:
  Cone Search and Simple Image Access Protocol, SQL
• Registry: http://www.ivoa.net/Documents/latest/RMExp.html
  – still a work in progress
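For scale: a VO cone search is just an HTTP GET of the form <service-url>?RA=180.0&DEC=-0.5&SR=0.1 (all in decimal degrees), returning a VOTable of the objects within radius SR of that point. RA, DEC, and SR are the spec's parameter names; the URL itself is a placeholder.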
What SDSS is Doing: Capture the Bits
(preserve the primary data)
• Best-effort documenting of data and process;
  documents and data are hyperlinked.
• Publishing data: often by UPS
  (~5 TB today, so ~$5K for a copy)
• Replicating data on 3 continents.
• EVERYTHING online (tape data is dead data)
• Archiving all email, discussions, …
• Keeping all web logs & query logs.
• Now we need to figure out how to
  organize/search all this metadata.
The OAIS model
[Diagram: Producer → Ingest → Archive → Access → Consumer,
with Data Management and Administration supporting all stages.]
Ingest Challenges
• Push vs pull
• What are the gold standards?
• Automatic indexing, annotation, provenance
• Auto-migration (format conversion)
• Version management
• How to capture time-varying sources
• Capture "dark matter" (encapsulated data)
  – Bits don't "rust", but applications do.
Access Challenges
• Archived information "rusts" if it is not accessed.
  Access is essential.
• Access costs money: who pays?
• Access sometimes uses IP: who pays?
• There are also technical problems:
  – Access formats differ from the storage formats.
    • Migration?
    • Emulation?
    • Gold standards?
Archive Challenges
• Cost of administering storage:
  presently 10x to 100x the hardware cost
• Resist attack: geographic diversity
• At 1 GBps it takes 12 days to move a PB
• Store it in two (or more) places online (on disk):
  a geo-plex
• Scrub it continuously (look for errors)
• On failure,
  – use the other copy until the failure is repaired,
  – refresh the lost copy from the safe copy.
• Can organize the copies differently
  (e.g.: one by time, one by space)
The Midrange Paradox
• Large archives are curated
  – Curated by projects
• Small archives are appendices to papers
  – Curated by journals
• Medium-sized archives are in limbo
  – No place to register them
  – No one has a mandate to preserve them
• Examples:
  – Your website with your data files
  – Small-scale science projects
  – GenBank gets the sequence,
    but not the software or analysis that produced it.
Summary
• New Science
– X-Info for all fields X
– WWT as an example
– Big Picture
– Puzzle
– Hitting the wall
– Needle in haystack
– Move queries to data
• Working cross disciplines
– How to help?
– 20 questions
– WWT example
– Alt: CS Process Models
• Data Demographics
– Exponential growth
– Data Lifecycle
– Versions
– Data inflation
– Year 5 is peak cost
– Overprovision by 6x
– Data Loading
– Regression Tests
– Statistical subset
• Curation
– Problem statement
– Economics
– Astro as a case in point
Call to Action
• X-info is emerging.
• Computer Scientists can help in many ways.
– Tools
– Concepts
– Provide technology consulting to the community
• There are great CS research problems here
– Modeling
– Analysis
– Visualization
– Architecture
References
http://SkyServer.SDSS.org/
http://research.microsoft.com/pubs/
http://research.microsoft.com/Gray/SDSS/ (download personal SkyServer)
"Extending the SDSS Batch Query System to the National Virtual Observatory Grid",
M. A. Nieto-Santisteban, W. O'Mullane, J. Gray, N. Li, T. Budavari,
A. S. Szalay, A. R. Thakar, MSR-TR-2004-12, Feb. 2004.
"Scientific Data Federation",
J. Gray, A. S. Szalay, in The Grid 2: Blueprint for a New Computing
Infrastructure, I. Foster, C. Kesselman, eds., Morgan Kaufmann, 2003,
pp 95-108.
"Data Mining the SDSS SkyServer Database",
J. Gray, A. S. Szalay, A. Thakar, P. Kunszt, C. Stoughton, D. Slutz,
J. vandenBerg, in Distributed Data & Structures 4: Records of the 4th
International Meeting, pp 189-210, W. Litwin, G. Levy, eds., Carleton
Scientific 2003, ISBN 1-894145-13-5; also MSR-TR-2002-01, Jan. 2002.
"Petabyte Scale Data Mining: Dream or Reality?",
A. S. Szalay, J. Gray, J. vandenBerg, SPIE Astronomy Telescopes and
Instruments, 22-28 August 2002, Waikoloa, Hawaii; MSR-TR-2002-84.
"Online Scientific Data Curation, Publication, and Archiving",
J. Gray, A. S. Szalay, A. R. Thakar, C. Stoughton, J. vandenBerg,
SPIE Astronomy Telescopes and Instruments, 22-28 August 2002, Waikoloa,
Hawaii; MSR-TR-2002-74.
"The World Wide Telescope: An Archetype for Online Science",
J. Gray, A. Szalay, CACM, Vol. 45, No. 11, pp 50-54, Nov. 2002;
MSR-TR-2002-75.
"The SDSS SkyServer: Public Access to the Sloan Digital Sky Survey Data",
A. S. Szalay, J. Gray, A. Thakar, P. Z. Kunszt, T. Malik, J. Raddick,
C. Stoughton, J. vandenBerg, ACM SIGMOD 2002, pp 570-581; MSR-TR-2001-104.
"The World Wide Telescope",
A. S. Szalay, J. Gray, Science, V. 293, pp 2037-2038, 14 Sept 2001;
MSR-TR-2001-77.
"Designing & Mining Multi-Terabyte Astronomy Archives: The Sloan Digital
Sky Survey",
A. Szalay, P. Kunszt, A. Thakar, J. Gray, D. Slutz, ACM SIGMOD 2000;
MSR-TR-99-30, June 1999.
How to Publish Data: Web Services
• Web SERVER:
  – Given a URL + parameters,
  – returns a web page (often dynamic)
  [Diagram: your program ↔ web server]
• Web SERVICE:
  – Given an XML document (SOAP msg),
  – returns an XML document (with schema)
  – Tools make this look like an RPC:
    F(x,y,z) returns (u,v,w)
  – Distributed objects for the web,
    plus naming, discovery, security, …
  [Diagram: your program ↔ web service, with the data in your address
   space: Internet-scale distributed computing]
Global Federations
• Massive datasets live near their owners:
  – Near the instrument's software pipeline
  – Near the applications
  – Near data knowledge and curation
• Each archive publishes a (web) service
  – Schema: documents the data
  – Methods on objects (queries)
• Scientists get "personalized" extracts
• Uniform access to multiple archives
  – A common global schema
The Sloan Digital Sky Survey
• Goal
  – Create the most detailed map of the Northern sky to date
• 2.5 m telescope
  – 3-degree field of view
• Two surveys in one
  – 5-color images of ¼ of the sky
  – Spectroscopic survey of a million galaxies and quasars
• Very high data volume
  – 40 terabytes of raw data
  – 10 terabytes processed
  – All data public
Participants: The University of Chicago, Princeton University, The Johns
Hopkins University, The University of Washington, New Mexico State
University, University of Pittsburgh, Fermi National Accelerator
Laboratory, US Naval Observatory, The Japanese Participation Group,
The Institute for Advanced Study, Max Planck Inst. Heidelberg.
Funding: Sloan Foundation, NSF, DOE, NASA.
SkyServer
http://skyserver.sdss.org/
• A multi-terabyte database
• An educational website
  – More than 50 hours of educational exercises
  – Background on astronomy
  – Tutorials and documentation
  – Searchable web pages
• Easy astronomer access to SDSS data
• Prototype eScience lab
• Interactive visual tools for data exploration
Demo SkyServer
• Atlas
• Education project
• Mouse in pixel space
• Explore an object (record space)
• Explore literature
• Explore a set
• Pose a new question
SkyQuery (http://skyquery.net/)
• Distributed query tool using a set of web services
• Many astronomy archives from
  Pasadena, Chicago, Baltimore, Cambridge (England)
• Has grown from 4 to 15 archives;
  now becoming an international standard
• Allows queries like:

  SELECT o.objId, o.r, o.type, t.objId
  FROM SDSS:PhotoPrimary o,
       TWOMASS:PhotoPrimary t
  WHERE XMATCH(o,t) < 3.5
    AND AREA(181.3, -0.76, 6.5)
    AND o.type = 3 AND (o.i - t.m_j) > 2
Demo SkyQuery Structure
• The portal
  – plans the query (2 phases)
  – integrates the answers
  – is itself a web service
• Each SkyNode publishes
  – a schema web service
  – a database web service
[Diagram: the SkyQuery portal federating SDSS, INT, FIRST, and 2MASS
SkyNodes, plus an image cutout service.]
MyDB: eScience Workbench
• Prototype of bringing the analysis to the data
• Everybody gets a workspace (database)
  – Execute analysis at the data
  – Store intermediate results there
  – Long queries run in batch
  – Results shared within groups
• Only fetch the final results
• Extremely successful: matches work patterns
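A hedged sketch of the MyDB pattern as the SDSS batch system exposes it (MyGalaxies is an invented table name; the mydb prefix is the personal-workspace convention):

  -- Runs at the data; only the small result lands in my workspace
  SELECT p.objID, p.ra, p.dec, p.u, p.g, p.r
  INTO mydb.MyGalaxies
  FROM Galaxy AS p
  WHERE p.r < 17.5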
National Center for Biotechnology Information (NCBI):
A Good Example
• PubMed:
  – Abstracts and books and …
• GenBank:
  – All deposited gene sequences
  – BLAST and other searches
  – Website to explore data and literature
• Entrez:
  – Unifies many databases with the literature (books, journals, …)
  – Organizes the data
Publishing Data
Roles       Traditional   Emerging
Authors     Scientists    Collaborations
Publishers  Journals      Project www site
Curators    Libraries     Bigger Archives
Consumers   Scientists    Scientists

• Exponential growth:
  – Projects last at least 3-5 years
  – Data sent upwards only at the end of the project
  – Data will never be centralized
• More responsibility on projects
  – Becoming publishers and curators
  – Often no explicit funding to do this (must change)
• Data will reside with projects
  – Analyses must be close to the data (see later)
• Data cross-correlated with literature and metadata
Making Discoveries
• Where are discoveries made?
  – At the edges and boundaries
  – By going deeper, collecting more data, using more colors, …
• Metcalfe's law: quadratic benefit
  – The utility of a computer network grows as the
    number of possible connections: O(N²)
• Data federation: quadratic benefit
  – A federation of N archives has utility O(N²)
  – Possibilities for new discoveries grow as O(N²)
• Current sky surveys have proven this
  – Very early discoveries from SDSS, 2MASS, DPOSS
Jim's Model of Library Science :-)
• Alexandria
• Gutenberg
• Dewey Decimal (Melvil Dewey)
• MARC (Henriette Avram)
• Dublin Core
Yes, I know there have been other things.
Dublin Core
Elements:
– Title, Creator, Subject, Description, Publisher, Contributor, Date,
  Type, Format, Identifier, Source, Language, Coverage, Rights
Elements+ (qualifiers):
– Audience, Alternative, TableOfContents, Abstract, Created, Valid,
  Available, Issued, Modified, Extent, Medium, IsVersionOf, HasVersion,
  IsReplacedBy, Replaces, IsRequiredBy, Requires, IsPartOf, HasPart,
  IsReferencedBy, References, IsFormatOf, HasFormat, ConformsTo, Spatial,
  Temporal, Mediator, DateAccepted, DateCopyrighted, DateSubmitted,
  EducationalLevel, AccessRights, BibliographicCitation
Encoding schemes:
– LCSH (Lib. of Congress Subject Headings), MESH (Medical Subject
  Headings), DDC (Dewey Decimal Classification), LCC (Lib. of Congress
  Classification), UDC (Universal Decimal Classification),
  DCMIType (Dublin Core Metadata Type), IMT (Internet Media Type),
  ISO639-2 (ISO language names), RFC1766 (Internet language tags),
  URI (Uniform Resource Identifier), Point (DCMI spatial point),
  ISO3166 (ISO country codes), Box (DCMI rectangular area),
  TGN (Getty Thesaurus of Geographic Names), Period (DCMI time interval),
  W3CDTF (W3C date/time), RFC3066 (language dialects)
Types:
– Collection, Dataset, Event, Image, InteractiveResource, Service,
  Software, Sound, Text, PhysicalObject, StillImage, MovingImage