Database Systems Research on Large-Scale
Data Mining: SQL vs MapReduce
Carlos Ordonez
University of Houston
USA
Reference:
Ordonez, C., Garcia-Garcia, J. Database Systems Research on Data Mining.
Proc. ACM SIGMOD 2010, pp. 1253-1254 (tutorial).
Global Outline
1. Data mining models and algorithms
1.1 Data set
1.2 Data Mining Models
1.3 Data Mining Algorithms
2. Processing alternatives
2.1 Inside DBMS: SQL
2.2 Outside DBMS: MapReduce
3. Storage and Optimizations
3.1 Layouts: Horizontal and Vertical
3.2 Optimizations: Algorithmic and Systems
2/60
1.1 Data set
• Data set X with n records
• Each record has attributes: numeric, discrete, or both (mixed)
• Focus of the tutorial: d dimensions
• Generally, d << n
• High d makes the problem mathematically more difficult
• Extra column G/Y (class label or target variable)
3/60
Common data mining models
[DLR1977,RSS]
• Unsupervised:
– math: simpler
– task: clustering, dimensionality reduction
– models: KM, EM, PCA/SVD, FA
– statistical tests overlap both
• Supervised:
– math: more tuning and validation than unsupervised
– task: classification, regression
– models: decision trees, Naïve Bayes, Bayes,
linear/logistic regression, SVM, neural nets
4/60
Data mining models characteristics
• Multidimensional
– tens, hundreds of dimensions
– feature selection and dimensionality reduction
• Represented & computed with matrices & vectors
– data set: set of vectors or set of records; all
numeric, mixed attributes
– model: numeric = matrices, discrete = histograms
– intermediate computations: matrices and
histograms
5/60
Why is it hard? Many matrices
6/60
Data Mining Algorithms
[ZRL1996,SIGMOD]
• Model computation & scoring data set
• Behavior with respect to data set X:
– one pass, few passes
– multiple passes, convergence, bigger issue
(most algorithms)
• Time complexity:
• Research issues:
– preserve time complexity in SQL/MapReduce
– incremental learning
7/60
2. Processing alternatives
2.1 Inside DBMS (SQL)
2.2 Outside DBMS (MapReduce)
(brief review of processing in C, external packages)
8/60
2.1 Inside DBMS
• Assumption:
– data records are in the DBMS; exporting slow
– row-based storage (not column-based)
• Programming alternatives:
– SQL and UDFs: SQL code generation (JDBC),
precompiled UDFs. Extra: SP, embedded SQL, cursors
– Internal C Code (direct access to file system and mem)
• DBMS advantages:
– important: storage, queries, security
– maybe: recovery, concurrency control, integrity,
transactions
9/60
Inside DBMS
SQL code: CREATE + SELECT, Consider Layout
[CDDHW2009,VLDB]
• CREATE TABLE
– Row storage: Clustered (to group rows of
pivoted tables), Block size (for large tables)
– Index: primary (gen. for pk, critical for joins),
secondary (may help joins & searches)
• SELECT
– Basic mechanism to write queries; standard across DBMSs; arbitrarily complex queries, including arithmetic expressions
– Vertical layout: A(i,j,v), B(i,j,v) (table-definition sketch below)
– A*B:
SELECT A.i, B.j, sum(A.v * B.v)
FROM A JOIN B ON A.j = B.i
GROUP BY A.i, B.j
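As a hedged sketch of the table definitions behind this vertical layout (standard SQL only; clustering, block size and index options are DBMS-specific):

CREATE TABLE A (
  i INT NOT NULL,     -- row subscript
  j INT NOT NULL,     -- column subscript
  v FLOAT,            -- matrix entry A[i,j]
  PRIMARY KEY (i, j)  -- primary index on the subscripts; supports the join on A.j = B.i
);
CREATE TABLE B (
  i INT NOT NULL,
  j INT NOT NULL,
  v FLOAT,
  PRIMARY KEY (i, j)
);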
10/60
Inside DBMS
Physical Operators
[DG1992,CACM] [SMAHHH2007,VLDB] [WH2009,SIGMOD]
• Serial DBMS (one CPU, maybe RAID):
– table Scan
– join: hash join, sort merge join, nested loop
– external merge sort
• Parallel DBMS (shared-nothing):
– even row distribution, hashing
– parallel table scan
– parallel joins: large/large (sort-merge, hash);
large/short (replicate short)
– distributed sort
11/60
Inside DBMS
User-Defined Function (UDF)
• Classification:
– Scalar UDF
– Aggregate UDF
– Table UDF
• Programming:
– Called in a SELECT statement
– C code or similar language
– API provided by DBMS, in C/C++
– Data type mapping
12/60
Inside DBMS
UDF pros and cons
• Advantages:
– arrays and flow control
– flexibility in code writing and no side effects
– No need to modify DBMS internal code
– In general, simple data types
• Limitations:
– OS and DBMS architecture dependent, not portable
– No I/O capability, no side effects
– Null handling and fixed memory allocation
– Memory leaks with arrays (matrices): fenced/protected mode
13/60
Inside DBMS
Aggregate UDF (skipped scalar UDF)
[JM1998,SIGMOD]
• Table scan
• Memory allocation in the heap
• GROUP BY extends their power
• Also require handling nulls
• Advantage: parallel & multithreaded processing
• Drawback: returns a single value, not a table
• DBMSs: SQL Server, PostgreSQL, Teradata, Oracle, DB2, among others
• Useful for model computations (a sample invocation follows below)
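A minimal sketch of how such an aggregate UDF is invoked; the argument list of udf_nb_train_d3 (the aggregate shown later in the Naïve Bayes example) is an assumption here:

-- One table scan; one summary (N, L, Q) per class g
SELECT g, udf_nb_train_d3(X1, X2, X3)
FROM X
GROUP BY g;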
14/60
Inside DBMS
Table UDF
[BRKPHK2008,SIGMOD]
• Main difference with aggregate UDF: returns a
table (instead of single value)
• Also, it can take several input values
• Called in the FROM clause in a SELECT
• Stream: no parallel processing, external file
• Computation power same as aggregate UDF
• Suitable for complex math operations and
algorithms
• Since the result is a table, it can be joined (see the sketch below)
• DBMS: SQL Server, DB2, Oracle, PostgreSQL
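A minimal sketch of a table UDF call using SQL Server-style syntax; the function dbo.udf_nb_score and its output columns are hypothetical:

-- Table UDF called in the FROM clause; since it returns a table, it can be joined
SELECT s.i, s.predicted_g
FROM dbo.udf_nb_score(0.5) AS s
JOIN X ON X.i = s.i;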
15/60
Inside DBMS
Internal C code
[LTWZ2005,SIGMOD], [MYC2005,VLDB] [SD2001,CIKM]
• Advantages:
– access to file system (table record blocks)
– physical operators (scan, join, sort, search)
– main memory, data structures, libraries
– hardware optimizations: multithreading, multicore, caching RAM, caching L1/L2
• Disadvantages:
– requires careful integration with rest of system
– not available to end users and practitioners
– may require exposing functionality with DM language
or SQL
16/60
Outside DBMS
MapReduce
[DG2008,CACM]
• Parallel processing; simple; shared-nothing
• Functions are programmed in a high-level programming
language (e.g. Java, Python); flexible.
• <key,value> pairs processed in two phases:
– map(): computation is distributed and evaluated in
parallel; independent mappers
– reduce(): partial results are combined/summarized
• Can be categorized as inside/outside DBMS, depending on
level of integration with DBMS
17/60
Outside DBMS
MapReduce Files and Processing
• File Types:
– Text Files: Common storage (e.g. CSV files)
– SequenceFiles: Efficient processing
– Custom InputFormat (rarely used)
• Processing:
– Points are sorted by “key” before sending to reducers
– Small files should be merged
– Partial results are stored in file system
– Intermediate files should be managed in SequenceFiles for efficiency
18/60
Outside DBMS
Packages, libraries, Java/C++
[ZHY2009,CIDR] [ZZY2010,ICDE]
• Statistical and data mining packages:
– exported flat files; proprietary file formats
– Memory-based (processing data records,
models, internal data structures)
• Programming languages:
– Arrays
– flexibility of control statements
• Limitation: large number of records
• Packages: R, SAS, SPSS, KXEN, Matlab, WEKA
19/60
3. Storage and Optimizations
• Storage layouts:
– Horizontal: n rows, d dim columns
– Vertical: dn rows, 1 dim column
• Optimizations:
– algorithmic: general
– systems-oriented: SQL and MapReduce
20/60
Storage layout: Horizontal/Vertical
Horizontal                            | Vertical
Limitation with high d (max columns). | No problems with high d.
Default layout for most algorithms.   | Requires clustered index.
SQL arithmetic expressions and UDFs.  | SQL aggregations, joins, UDFs.
Easy to interpret.                    | Difficult to interpret.
Suitable for dense matrices.          | Suitable for sparse matrices.
Complete record processing.           | UDF: detect point boundaries.
n rows, d columns.                    | dn rows, few (3 or 4) columns.
Fast: n I/Os.                         | Slow: dn I/Os (n I/Os clustered).
21/60
Optimizations: Example of data set
horizontal layout
n=5, d=3 and G/Y
i | X1  | X2   | X3  | G
1 | 1.7 |  8.2 | 4.3 | 1
2 | 3.4 | 10.5 | 1.0 | 0
3 | 9.3 | 12.2 | 2.5 | 0
4 | 5.7 |  7.3 | 8.8 | 0
5 | 2.5 | 13.3 | 3.2 | 1
22/60
Optimization: Naïve Bayes Example
Horizontal layout
• NB
– one pass
– Gaussian, sufficient statistics (NLQ)
• Example in:
– SQL
– UDF
– MapReduce
Data Structures (per class g)
public double N;     // count of points
public double[] L;   // linear sum: L[h] = sum of X_h
public double[] Q;   // quadratic sum: Q[h] = sum of X_h^2
23/60
Naïve Bayes
SQL (optimized)
/* Inserting into NLQ */
INSERT INTO NLQ
SELECT
   g
  ,sum(1.0)         AS Ng    /* N */
  ,sum(X1)          AS L_X1  /* L */
  ,sum(X2)          AS L_X2
  ,sum(X3)          AS L_X3
  ,sum(power(X1,2)) AS Q_X1  /* Q */
  ,sum(power(X2,2)) AS Q_X2
  ,sum(power(X3,2)) AS Q_X3
FROM X
GROUP BY g;

/* Inserting into NB */
INSERT INTO NB
SELECT
   g
  ,Ng/T.Nglobal               /* pi */
  ,L_X1/Ng                    /* C  */
  ,L_X2/Ng
  ,L_X3/Ng
  ,Q_X1/Ng-power(L_X1/Ng,2)   /* R  */
  ,Q_X2/Ng-power(L_X2/Ng,2)
  ,Q_X3/Ng-power(L_X3/Ng,2)
FROM NLQ, (SELECT SUM(Ng) AS Nglobal FROM NLQ) T;
24/60
Naïve Bayes
Aggregate UDF (optimized, 1 pass)
public void Init() {
nbnlq = new NBNLQ();
int h;
nbnlq.N = 0;
for (h = 1; h <= nbnlq.d; h++)
{
nbnlq.L[h] = 0;
nbnlq.Q[h] = 0;
}
}
public void Merge(udf_nb_train_d3 thread) {
int i, h;
nbnlq.d = thread.nbnlq.d;
nbnlq.N += thread.nbnlq.N;
for (h = 1; h <= nbnlq.d; h++)
{
nbnlq.L[h] += thread.nbnlq.L[h];
nbnlq.Q[h] += thread.nbnlq.Q[h];
}
}
public void Accumulate(Xd3 X) {
int h;
if (!X.IsNull)
{
nbnlq.d = X.getD();
nbnlq.N += 1.0;
for (h = 1; h <= nbnlq.d; h++) // L,Q
{
nbnlq.L[h] += X.getColumn(h);
nbnlq.Q[h] += X.getColumn(h) *
X.getColumn(h);
}
}
}
public SqlString Terminate() {
for (h = 1; h <= nbnlq.d; h++)
{
result.Append("C" + h + "=");
result.Append(nbnlq.L[h] / nbnlq.N);
result.Append(",");
}
for (h = 1; h <= nbnlq.d; h++) {
result.Append("R" + h + "=");
result.Append(nbnlq.Q[h] / nbnlq.N - Math.Pow(nbnlq.L[h] / nbnlq.N, 2));
result.Append(",");
}
}
25/60
MapReduce
Optimized
// Mapper: map() emits each input point as a <key, value> pair
public static class NBHMapper {
   context.write(key, val);
}
// Combiner: combine() accumulates partial sufficient statistics n, L, Q within each mapper
public static class NBHCombiner {
   for (DoubleArrayWritable val : values) {
      n++;
      x = (DoubleWritable[]) val.toArray();
      for (int h = 1; h <= d; h++) {
         attr = x[h - 1].get();
         L[h] += attr;
         Q[h] += attr * attr;
      }
   }
   _val_array[1].set(n);
   for (int h = 1; h <= d; h++) {
      _val_array[1 + h].set(L[h]);
   }
   for (int h = 1; h <= d; h++) {
      _val_array[1 + d + h].set(Q[h]);
   }
}
// Reducer: reduce() merges the partial n, L, Q and outputs N, the means C and the variances R
public static class NBHReducer {
   for (DoubleArrayWritable val : values) {
      x = (DoubleWritable[]) val.toArray();
      n += x[1].get();
      for (int h = 1; h <= d; h++) {
         L[h] += x[1 + h].get();
      }
      for (int h = 1; h <= d; h++) {
         Q[h] += x[1 + d + h].get();
      }
   }
   each_row = "N=" + n;
   each_row += ";C=";
   for (int h = 1; h <= d; h++) {
      each_row += L[h] / n + ",";
   }
   each_row += ";R=";
   for (int h = 1; h <= d; h++) {
      each_row += Q[h] / n - Math.pow(L[h] / n, 2) + ",";
   }
}
26/60
3.2 Optimizations
Algorithmic & Systems
• Algorithmic
– 90% research, many efficient algorithms
– accelerate/reduce computations or convergence
– database systems focus: reduce I/O
– approximate solutions
• Systems (SQL, MapReduce)
– Platform: parallel DBMS server vs cluster of
computers
– Programming: SQL/C++ versus Java
27/60
Algorithmic
[ZRL1996,SIGMOD]
• Implementation: data set available as flat file,
binary file required for random access
• May require data structures working in main
memory and disk
• Programming not in SQL: C/C++ are preferred
languages, although Java becoming common
• MapReduce is becoming popular
• Assumption d<<n: n has received more attention
• Issue: d>n produces numerical issues and large
covariance/correlation matrix (larger than X)
28/60
Algorithmic Optimizations
[STA1998,SIGMOD] [ZRL1996,SIGMOD][O2007,SIGMOD]
• Exact model computation:
– summaries: sufficient statistics (Gaussian pdf),
histograms, discretization
– accelerate convergence, reduce iterations
– faster matrix operations: * +
• Approximate model computation:
– Sampling: efficient in time O(s) (see the sketch after this list)
– Incremental:
• math: escape local optima (EM), reseed
• database systems: favor table scan
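A minimal sketch of in-DBMS sampling, assuming SQL Server-style TABLESAMPLE; the target table X_sample is hypothetical:

-- Roughly O(s) work, but page-level sampling is not truly random per row;
-- a truly random sample (e.g. ordering by a random key) costs a full scan.
SELECT *
INTO X_sample
FROM X TABLESAMPLE (1 PERCENT);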
29/60
Systems Optimizations
DBMS
[O2006,TKDE], [ORD2010,TKDE]
• SQL query optimization
– mathematical equations as queries
– Turing-complete: SQL code generation and
programming language
• UDFs as optimization
– substitute key mathematical operations
– push processing into RAM memory
30/60
Systems Optimizations
DBMS SQL query
[O2004,DMKD]
• Denormalization
• Issue: Query rewriting (optimizer falls short)
• Index depends on layout
• Horizontal layout:
– indexed by i
– d may be an issue, thus vertical partition
• Vertical layout:
– storage: clustered by point
– indexing by subscript
– Use specific join algorithm
31/60
Systems Optimizations
DBMS SQL query
[O2006,TKDE] [OP2010,TKDE],[OP2010,DKE] ,[MC2002,ICDM]
• Join:
– denormalized storage: model, intermediate tables
– favor hash joins over sort-merge: both tables with primary index (PI) on i
– secondary indexing for join: sort-merge join
• Aggregation (compression):
– push group-by before join (see the sketch after this list): watch out for nulls and high-cardinality columns like the point id i
• Synchronized table scans: several SELECTs on the same table; examples: unpivoting; 2+ models
• Sampling: O(s), random access, truly random; error
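A hedged sketch of pushing the GROUP BY before the join, using the vertical layout XV(i,j,v) from earlier; the model table M(j,w) is hypothetical:

SELECT M.j, M.w * A.avg_v
FROM (SELECT j, AVG(v) AS avg_v   -- aggregate dn rows down to d rows before the join
      FROM XV
      WHERE v IS NOT NULL         -- watch out for nulls
      GROUP BY j) AS A
JOIN M ON M.j = A.j;              -- the join now touches d rows instead of dn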
32/60
Systems Optimization
DBMS UDF
[HLS2005,TODS] [O2007,TKDE]
• UDFs can substitute SQL code
– UDFs can express complex math computations
– Scalar UDFs: vector operations
• Aggregate UDFs: compute data set summaries in
parallel
• Table UDFs: stream model; external temporary
file
33/60
MapReduce Optimizations
[ABASR2009,VLDB] [CDDHW2009,VLDB] [SADMPPR2010,CACM]
• Data set
– keys as input, partition data set
– text versus sequential file
– loading into file system may be required
• Parallel processing
– high cardinality keys: i
– handle skewed distributions
– reduce row redistribution in Map( )
• Main memory processing
34/60
MapReduce
Processing Optimizations
[DG2008,CACM] [FPC2009,PVLDB] [PHBB2009,PVLDB]
• Modify Block Size
• Disable Block Replication
• Delay reduce()
• Tune M and R (memory allocation and number)
• Several M use the same R
• Avoid full table scans by using subfiles (requires naming convention)
• combine() in map() to shrink intermediate files
• SequenceFiles as input with custom data types.
35/60
MapReduce
Issues
• Loading, converting to binary may be necessary
• Input key generally OK if high cardinality
• Skewed map key distribution
• Key redistribution (lot of message passing)
36/60
SQL vs MapReduce
Processing & I/O Bottleneck (bulk load)
[PPRADMS2009,SIGMOD] [O2010,TKDE]
Import and Model Computation Times for SQL and MR (times in secs).

n x 1M | SQL Import | SQL Build | SQL Total | MR* Import | MR* Build | MR* Total
     1 |         18 |         4 |        22 |         48 |        38 |        86
     2 |         41 |         4 |        45 |         94 |        59 |       153
     4 |         81 |         9 |        90 |        185 |        91 |       276
     8 |        147 |        18 |       165 |        367 |       153 |       520
    16 |        331 |        41 |       372 |        730 |       285 |      1015

*MR times include conversion into a SequenceFile.
37/60
Systems optimizations
SQL vs MR (optimized versions, run on the same hardware)
Task                                     | SQL | UDF | MR
Speed: compute model                     |  1  |  2  |  3
Speed: score data set                    |  1  |  3  |  2
Programming flexibility                  |  3  |  2  |  1
Process non-tabular data                 |  3  |  2  |  1
Loading speed                            |  1  |  1  |  2
Ability to add optimizations             |  2  |  1  |  3
Manipulating data key distribution       |  1  |  2  |  3
Immediate processing (push=SQL, pull=MR) |  2  |  1  |  3
38/60
Research Issues
Both: SQL and MapReduce
[BFR1998,KDD], [CFB1999,ICDE] [SADMPPR2010,CACM]
• Fast data mining algorithms: solved? Yes, but not when data sets are stored in a DBMS
• SQL and MR have many similarities: shared-nothing
• Fast load/unload interfaces between both systems;
tighter integration
• General tradeoffs in speed and programming:
horizontal vs vertical layout
• Incremental algorithms
– one pass (streams) versus parallel processing
– reduce passes/iterations
39/60
Research Issues on Each
[ABASR2009,VLDB], [CDDHW2009,VLDB], [CKLRSS2009,VLDB]
• DBMS:
– C++/Java libraries generating SQL code, pushing processing: Oracle,
Teradata, SAS, KXEN
– Internal C code: commercial DBMSs, open-source?
– Study aggregate UDFs for complex models; extend Table UDF
support: I/O bottleneck, streams
– Extend SQL with more DM primitives and constructs or forget
extending SQL for DM?
– Specialized DBMS, middleware: SciDB, RIOT
• MapReduce:
– SQL+MapReduce: Greenplum, Aster, Teradata
– MapReduce only: Mahout
– MapReduce for query processing and data mining: especially joins,
aggregations OK
40/60
Thank you… Q&A
• DBMS Group at UH:
– Carlos Garcia-Alvarado
– Ahamd Qwasmeh
– Sasi K. Pitchaimalai
– Mario Navas
– Zhibo Chen
– Rengan Xu
References
[ABASR2009,VLDB] A. Abouzeid, K. Bajda-Pawlikowski, D. Abadi, A. Silberschatz, and A. Rasin. HadoopDB: an
architectural hybrid of MapReduce and DBMS technologies for analytical workloads. Proc. VLDB Endow., pages
922-933, 2009.
[BFR1998,KDD] P. Bradley, U. Fayyad, and C. Reina. Scaling clustering algorithms to large databases. In ACM
KDD Conference, pages 9-15, 1998.
[BRKPHK2008,SIGMOD] J.A Blakeley, V. Rao, I. Kunen, A. Prout, M. Henaire, and C. Kleinerman. .NET
database programmability and extensibility in microsoft SQL server. In ACM SIGMOD, pages 1087-1098. 2008.
[CFB1999,ICDE] S. Chaudhuri, U. Fayyad, and J. Bernhardt. Scalable classification over SQL databases. ICDE,
00:470, 1999.
[CDHHL1999,KDD] J. Clear, D. Dunn, B. Harvey, M.L. Heytens, and P. Lohman. Non-stop SQL/MX primitives for
knowledge discovery. In ACM KDD Conference, pages 425-429, 1999.
[CDDHW2009,VLDB] J. Cohen, B. Dolan, M. Dunlap, J. Hellerstein, and C. Welton. MAD skills: New analysis
practices for big data. In VLDB Conference, pages 1481-1492, 2009.
[CKLRSS2009,VLDB] A demonstration of SciDB: a science-oriented DBMS. In VLDB Conference, pages 1534-1537, 2009.
[DG2008,CACM] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. Commun.
ACM, 51(1):107-113,2008.
[DLR1977,RSS] A.P. Dempster, N.M. Laird, and D. Rubin. Maximum Likelihood from Incomplete Data via the EM
Algorithm. Journal of The Royal Statistical Society, 39(1):1-38, 1977.
[DM2006,SIGMOD] A. Deshpande and S. Madden. MauveDB: supporting model-based user views in database
systems. In SIGMOD Conference, pages 73-84, 2006.
References
[DG1992,CACM] D. DeWitt, J. Gray. Parallel database systems: the future of high performance database systems. In
Communications of the ACM, 35(6): 85-98, 1992.
[DNPT2006,SAC] A. Dorneich, R. Natarajan, E.P.D. Pednault, and F. Tipu. Embedded predictive modeling in a
parallel relational database. In SAC, pages 569-574, 2006.
[FPC2009,PVLDB] E. Friedman, P. Pawlowski, and J. Cieslewicz. SQL/MapReduce: A practical approach to self-describing, polymorphic, and parallelizable user-defined functions. PVLDB, 2(2):1402-1413, 2009.
[GO2010,DKE] J. García-García, C. Ordonez: Extended aggregations for databases with referential integrity issues.
Data Knowl. Eng. 69(1): 73-95 (2010).
[GCBLRVPP1997,JDMKD] J. Gray, S. Chaudhuri, A. Bosworth, A. Layman, D. Reichart, M. Venkatrao, F. Pellow, and H. Pirahesh. Data cube: A relational aggregation operator generalizing group-by, cross-tab, and sub-totals. J. Data Mining and Knowledge Discovery, 1(1):29-53, 1997.
[HLS2005,TODS] Z. He, B. S. Lee, and R. Snapp. Self-tuning cost modeling of user-defined functions in an object-relational DBMS. ACM Trans. Database Syst., 30(3):812-853, 2005.
[JM1998,SIGMOD] M. Jaedicke and B. Mitschang. On parallel processing of aggregate and scalar functions in
object-relational DBMS. In ACM SIGMOD Conference, pages 379-389, 1998.
[LTWZ2005,SIGMOD] C. Luo, H. Thakkar, H. Wang, and C. Zaniolo. A native extension of SQL for mining data
streams. In ACM SIGMOD, pages 873-875, New York, NY, USA, 2005.
[MC2002,ICDM] B.L. Milenova and M.M. Campos. O-cluster: Scalable clustering of large high dimensional data
sets. In Proc. IEEE ICDM Conference, page 290, Washington, DC, USA, 2002.
[MYC2005,VLDB] B.L. Milenova, J. Yarmus, and M.M. Campos. SVM in Oracle database 10g: Removing the
barriers to widespread adoption of support vector machines. In VLDB Conference, 2005.
[NCFB2001,ICDE] A. Netz, S. Chaudhuri, U. Fayyad, and J. Berhardt. Integrating data mining with SQL databases:
OLE DB for data mining. In IEEE ICDE Conference, pages 379-387, 2001.
References
[O2004,DMKD] C. Ordonez. Horizontal aggregations for building tabular data sets. In ACM SIGMOD Data Mining
& Knowledge Discovery Workshop (DMKD), pages 35-42, 2004.
[O2006,TKDE] C. Ordonez. Integrating K-means clustering with a relational DBMS using SQL. IEEE Transactions
on Knowledge and Data Engineering (TKDE), 18(2):188-201, 2006.
[O2007,SIGMOD] C. Ordonez. Building Statistical Models and Scoring with UDFs. In SIGMOD Conference, pages
1005-1016, 2007.
[O2010,TKDE] C. Ordonez. Statistical Model Computation with UDFs. IEEE Transactions on Knowledge and Data
Engineering (TKDE), 2010
[OP2010,TKDE] C. Ordonez, S.K. Pitchaimalai. Bayesian Classifiers Programmed in SQL. IEEE Transactions on
Knowledge and Data Engineering (TKDE), 22(1):139-144, 2010.
[OP2010,DKE] C. Ordonez, S.K. Pitchaimalai. Fast UDFs to Compute Sufficient Statistics on Large Data Sets
exploiting Caching and Sampling, Data and Knowledge Engineering Journal (DKE), 2010.
[OG2008,DSS] C. Ordonez, J. García-García: Referential integrity quality metrics. Decision Support Systems 44 (2):
495-508 (2008)
[O2003,JLINUX] M. Owens. Embedding an SQL database with SQLite. Linux J., 2003(110):2, 2003.
[PHBB2009,PVLDB] B. Panda, J. Herbach, S. Basu, and R.J. Bayardo. PLANET: Massively parallel learning of tree
ensembles with MapReduce. PVLDB, 2(2):1426-1437, 2009.
[PPRADMS2009,SIGMOD] A. Pavlo, E. Paulson, A. Rasin, D. Abadi, D.J. DeWitt, S. Madden, and M. Stonebraker. A comparison of approaches to large-scale data analysis. In SIGMOD Conference, pages 165-178, 2009.
[STA1998,SIGMOD] S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational
database systems: alternatives and implications. In ACM SIGMOD, pages 343-354, 1998.
[SD2001,CIKM] K. Sattler and O. Dunemann. SQL database primitives for decision tree classifiers. In ACM CIKM
Conference, pages 379-386, 2001.
References
[SMAHHH2007,VLDB] M. Stonebraker, S. Madden, D. J. Abadi, S. Harizopoulos, N. Hachem, and P. Helland. The
end of an architectural era: (it's time for a complete rewrite). In VLDB, pages 1150-1160, 2007.
[WH2009,SIGMOD] F. M. Waas and J. M. Hellerstein. Parallelizing extensible query optimizers. In SIGMOD
Conference, pages 871-878, 2009.
[ZHY2009,CIDR] Y. Zhang, H. Herodotou, and J. Yang. Riot: I/O-efficient numerical computing without SQL. In
CIDR, 2009.
[ZZY2010,ICDE] Y. Zhang, W. Zhang, J. Yang. I/O-Efficient Statistical Computing with RIOT. In ICDE, 2010.
[ZRL1996,SIGMOD] T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method for
very large databases. In ACM SIGMOD Conference, pages 103-114, 1996.