Course Information
Course name: Data Mining
Source: Data Mining: Concepts and Techniques
Authors: Jiawei Han, Micheline Kamber
Publisher: Morgan Kaufmann Publishers
General Objectives of the Course
Familiarizing the student with:
• An introduction to the multidisciplinary field of data mining
• Techniques for preprocessing the data before mining
• A solid introduction to data warehouses, OLAP (On-Line Analytical Processing), and data generalization
• Methods for mining frequent patterns, associations, and correlations in transactional and relational databases and data warehouses
Preface
This course is an introduction to data mining and knowledge
discovery from data. The emphasis is placed on basic data
mining concepts and techniques for uncovering interesting
data patterns hidden in large data sets.
The implementation methods discussed are particularly
oriented toward the development of scalable and efficient
data mining tools.
Module 1: Introduction
Book chapter 1: Introduction to the concept and tasks of data mining
Table of Contents
General objectives of the module: introduction to the concept and tasks of data mining
Sub-module topics:
1.1 What Motivated Data Mining? Why Is It Important?
1.2 So, What Is Data Mining?
1.3 Data Mining—On What Kind of Data?
1.4 Data Mining Functionalities—What Kinds of Patterns Can Be Mined?
1.5 Are All of the Patterns Interesting?
1.6 Classification of Data Mining Systems
1.7 Data Mining Task Primitives
1.8 Integration of a Data Mining System with a Database or Data Warehouse System
1.9 Major Issues in Data Mining
Key terms of the module
Data mining architecture, data pattern, data mining query languages,
data mining integration, data mining classification
Module 1: Introduction
General objectives of the module
Familiarizing the student with:
1. How data mining has developed as part of the natural evolution of database technology
2. The importance of data mining
3. The definition of data mining
4. The general architecture of data mining systems
5. The kinds of data on which data mining can be performed
6. The kinds of patterns that can be extracted by data mining
7. Identifying the patterns that carry useful knowledge and information
8. The data mining primitives with which data mining query languages are designed
9. ...
Data Mining:
• Chapter 1: Introduction
• Chapter 2: Data Preprocessing
• Chapter 3: Data Warehouse and OLAP Technology: An Introduction
• Chapter 4: Advanced Data Cube Technology and Data Generalization
• Chapter 5: Mining Frequent Patterns, Associations and Correlations

Chapter 1. Introduction
• Motivation: Why data mining?
• What is data mining?
• Data Mining: On what kind of data?
• Data mining functionality
• Classification of data mining systems
• Top-10 most popular data mining algorithms
• Major issues in data mining
8
Why Data Mining?
• The Explosive Growth of Data: from terabytes to petabytes
• Data collection and data availability
• Automated data collection tools, database systems, Web,
computerized society
• Major sources of abundant data
• Business: Web, e-commerce, transactions, stocks, …
• Science: Remote sensing, bioinformatics, scientific simulation, …
• Society and everyone: news, digital cameras, YouTube
• We are drowning in data, but starving for knowledge!
• “Necessity is the mother of invention”—Data mining—Automated analysis of
massive data sets
9
Evolution of Sciences
• Before 1600, empirical science
• 1600-1950s, theoretical science
• Each discipline has grown a theoretical component. Theoretical models often motivate
experiments and generalize our understanding.
• 1950s-1990s, computational science
• Over the last 50 years, most disciplines have grown a third, computational branch (e.g.
empirical, theoretical, and computational ecology, or physics, or linguistics.)
• Computational Science traditionally meant simulation. It grew out of our inability to find
closed-form solutions for complex mathematical models.
• 1990-now, data science
• The flood of data from new scientific instruments and simulations
• The ability to economically store and manage petabytes of data online
• The Internet and computing Grid that makes all these archives universally accessible
• Scientific info. management, acquisition, organization, query, and visualization tasks scale
almost linearly with data volumes. Data mining is a major new challenge!
10
Evolution of Database Technology
• 1960s:
• Data collection, database creation, IMS and network DBMS
• 1970s:
• Relational data model, relational DBMS implementation
• 1980s:
• RDBMS, advanced data models (extended-relational, OO, deductive, etc.)
• Application-oriented DBMS (spatial, scientific, engineering, etc.)
• 1990s:
• Data mining, data warehousing, multimedia databases, and Web databases
• 2000s
• Stream data management and mining
• Data mining and its applications
• Web technology (XML, data integration) and global information systems
11
What Is Data Mining?
• Data mining (knowledge discovery from data)
• Extraction of interesting (non-trivial, implicit, previously unknown, and potentially useful) patterns or knowledge from huge amounts of data
• Data mining: a misnomer?
• Alternative names
• Knowledge discovery (mining) in databases (KDD), knowledge
extraction, data/pattern analysis, data archeology, data dredging,
information harvesting, business intelligence, etc.
• Watch out: Is everything “data mining”?
• Simple search and query processing
• (Deductive) expert systems
12
Knowledge Discovery (KDD) Process
• Data mining—the core of the knowledge discovery process
[Figure: the KDD process — Databases → data cleaning and data integration → Data Warehouse → selection of task-relevant data → Data Mining → Pattern Evaluation]
Data Mining and Business Intelligence
[Figure: the business intelligence stack, with increasing potential to support business decisions toward the top — Data Sources (paper, files, Web documents, scientific experiments, database systems; DBA) → Data Preprocessing/Integration, Data Warehouses (DBA) → Data Exploration: statistical summary, querying, and reporting (Data Analyst) → Data Mining: information discovery (Data Analyst) → Data Presentation: visualization techniques (Business Analyst) → Decision Making (End User)]
Data Mining: Confluence of Multiple Disciplines
[Figure: data mining as a confluence of multiple disciplines — database technology, statistics, machine learning, pattern recognition, algorithms, visualization, and other disciplines]
Why Not Traditional Data Analysis?
• Tremendous amount of data
• Algorithms must be highly scalable to handle, say, terabytes of data
• High-dimensionality of data
• Micro-array may have tens of thousands of dimensions
• High complexity of data
• Data streams and sensor data
• Time-series data, temporal data, sequence data
• Structured data, graphs, social networks and multi-linked data
• Heterogeneous databases and legacy databases
• Spatial, spatiotemporal, multimedia, text and Web data
• Software programs, scientific simulations
• New and sophisticated applications
16
Multi-Dimensional View of Data Mining
• Data to be mined
• Relational, data warehouse, transactional, stream, object-oriented/relational,
active, spatial, time-series, text, multi-media, heterogeneous, legacy, WWW
• Knowledge to be mined
• Characterization, discrimination, association, classification, clustering,
trend/deviation, outlier analysis, etc.
• Multiple/integrated functions and mining at multiple levels
• Techniques utilized
• Database-oriented, data warehouse (OLAP), machine learning, statistics,
visualization, etc.
• Applications adapted
• Retail, telecommunication, banking, fraud analysis, bio-data mining, stock
market analysis, text mining, Web mining, etc.
17
Data Mining: Classification Schemes
• General functionality
• Descriptive data mining
• Predictive data mining
• Different views lead to different classifications
• Data view: Kinds of data to be mined
• Knowledge view: Kinds of knowledge to be discovered
• Method view: Kinds of techniques utilized
• Application view: Kinds of applications adapted
18
Data Mining: On What Kinds of Data?
• Database-oriented data sets and applications
• Relational database, data warehouse, transactional database
• Advanced data sets and advanced applications
• Data streams and sensor data
• Time-series data, temporal data, sequence data (incl. bio-sequences)
• Structured data, graphs, social networks and multi-linked data (incl. bio-sequences)
• Object-relational databases
• Heterogeneous databases and legacy databases
• Spatial data and spatiotemporal data
• Multimedia database
• Text databases
• The World-Wide Web
19
Data Mining Functionalities
• Multidimensional concept description: Characterization and discrimination
• Generalize, summarize, and contrast data characteristics, e.g., dry vs.
wet regions
• Frequent patterns, association, correlation vs. causality
• Diaper → Beer [0.5%, 75%] (Correlation or causality?)
• Classification and prediction
• Construct models (functions) that describe and distinguish classes or
concepts for future prediction
• E.g., classify countries based on (climate), or classify cars based on
(gas mileage)
• Predict some unknown or missing numerical values
20
Data Mining Functionalities (2)
• Cluster analysis
• Class label is unknown: Group data to form new classes, e.g., cluster
houses to find distribution patterns
• Maximizing intra-class similarity & minimizing interclass similarity
• Outlier analysis
• Outlier: Data object that does not comply with the general behavior of the
data
• Noise or exception? Useful in fraud detection, rare events analysis
• Trend and evolution analysis
• Trend and deviation: e.g., regression analysis
• Sequential pattern mining: e.g., digital camera → large SD memory
• Periodicity analysis
• Similarity-based analysis
• Other pattern-directed or statistical analyses
21
Top-10 Most Popular DM Algorithms:
18 Identified Candidates (I)
• Classification
• #1. C4.5: Quinlan, J. R. C4.5: Programs for Machine Learning. Morgan Kaufmann., 1993.
• #2. CART: L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees.
Wadsworth, 1984.
• #3. K Nearest Neighbours (kNN): Hastie, T. and Tibshirani, R. 1996. Discriminant Adaptive Nearest
Neighbor Classification. TPAMI. 18(6)
• #4. Naive Bayes Hand, D.J., Yu, K., 2001. Idiot's Bayes: Not So Stupid After All? Internat. Statist. Rev. 69,
385-398.
• Statistical Learning
• #5. SVM: Vapnik, V. N. 1995. The Nature of Statistical Learning Theory. Springer-Verlag.
• #6. EM: McLachlan, G. and Peel, D. (2000). Finite Mixture Models. J. Wiley, New York.
• Association Analysis
• #7. Apriori: Rakesh Agrawal and Ramakrishnan Srikant. Fast Algorithms for Mining Association Rules. In
VLDB '94.
• #8. FP-Tree: Han, J., Pei, J., and Yin, Y. 2000. Mining frequent patterns without candidate generation. In
SIGMOD '00.
22
The 18 Identified Candidates (II)
• Link Mining
• #9. PageRank: Brin, S. and Page, L. 1998. The anatomy of a large-scale hypertextual Web
search engine. In WWW-7, 1998.
• #10. HITS: Kleinberg, J. M. 1998. Authoritative sources in a hyperlinked environment. SODA,
1998.
• Clustering
• #11. K-Means: MacQueen, J. B., Some methods for classification and analysis of multivariate
observations, in Proc. 5th Berkeley Symp. Mathematical Statistics and Probability, 1967.
• #12. BIRCH: Zhang, T., Ramakrishnan, R., and Livny, M. 1996. BIRCH: an efficient data
clustering method for very large databases. In SIGMOD '96.
• Bagging and Boosting
• #13. AdaBoost: Freund, Y. and Schapire, R. E. 1997. A decision-theoretic generalization of online learning and an application to boosting. J. Comput. Syst. Sci. 55, 1 (Aug. 1997), 119-139.
23
The 18 Identified Candidates (III)
• Sequential Patterns
• #14. GSP: Srikant, R. and Agrawal, R. 1996. Mining Sequential Patterns: Generalizations and
Performance Improvements. In Proceedings of the 5th International Conference on Extending Database
Technology, 1996.
• #15. PrefixSpan: J. Pei, J. Han, B. Mortazavi-Asl, H. Pinto, Q. Chen, U. Dayal and M-C. Hsu. PrefixSpan:
Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth. In ICDE '01.
• Integrated Mining
• #16. CBA: Liu, B., Hsu, W. and Ma, Y. M. Integrating classification and association rule mining. KDD-98.
• Rough Sets
• #17. Finding reduct: Zdzislaw Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer
Academic Publishers, Norwell, MA, 1992
• Graph Mining
• #18. gSpan: Yan, X. and Han, J. 2002. gSpan: Graph-Based Substructure Pattern Mining. In ICDM '02.
24
Most Popular Algorithms
#1: C4.5
#2: K-Means
#3: SVM
#4: Apriori
#5: EM
#6: PageRank
#7: AdaBoost
#8: KNN
#9: Naive Bayes
#10: CART
Major Issues in Data Mining
• Mining methodology
• Mining different kinds of knowledge from diverse data types, e.g., bio, stream, Web
• Performance: efficiency, effectiveness, and scalability
• Pattern evaluation: the interestingness problem
• Incorporation of background knowledge
• Handling noise and incomplete data
• Parallel, distributed and incremental mining methods
• Integration of the discovered knowledge with existing one: knowledge fusion
• User interaction
• Data mining query languages and ad-hoc mining
• Expression and visualization of data mining results
• Interactive mining of knowledge at multiple levels of abstraction
• Applications and social impacts
• Domain-specific data mining & invisible data mining
• Protection of data security, integrity, and privacy
26
Why Data Mining?—Potential Applications
• Data analysis and decision support
• Market analysis and management
• Target marketing, customer relationship management (CRM),
market basket analysis, cross selling, market segmentation
• Risk analysis and management
• Forecasting, customer retention, improved underwriting, quality
control, competitive analysis
• Fraud detection and detection of unusual patterns (outliers)
• Other Applications
• Text mining (news group, email, documents) and Web mining
• Stream data mining
• Bioinformatics and bio-data analysis
27
Ex. 1: Market Analysis and Management
• Where does the data come from?—Credit card transactions, loyalty cards, discount
coupons, customer complaint calls, plus (public) lifestyle studies
• Target marketing
• Find clusters of “model” customers who share the same characteristics: interest, income level,
spending habits, etc.
• Determine customer purchasing patterns over time
• Cross-market analysis—Find associations/co-relations between product sales, & predict
based on such association
• Customer profiling—What types of customers buy what products (clustering or
classification)
• Customer requirement analysis
• Identify the best products for different groups of customers
• Predict what factors will attract new customers
• Provision of summary information
• Multidimensional summary reports
• Statistical summary information (data central tendency and variation)
28
Ex. 2: Corporate Analysis & Risk Management
• Finance planning and asset evaluation
• cash flow analysis and prediction
• contingent claim analysis to evaluate assets
• cross-sectional and time series analysis (financial-ratio, trend analysis,
etc.)
• Resource planning
• summarize and compare the resources and spending
• Competition
• monitor competitors and market directions
• group customers into classes and a class-based pricing procedure
• set pricing strategy in a highly competitive market
29
Ex. 3: Fraud Detection & Mining Unusual Patterns
• Approaches: Clustering & model construction for frauds, outlier analysis
• Applications: Health care, retail, credit card service, telecomm.
• Auto insurance: ring of collisions
• Money laundering: suspicious monetary transactions
• Medical insurance
• Professional patients, ring of doctors, and ring of references
• Unnecessary or correlated screening tests
• Telecommunications: phone-call fraud
• Phone call model: destination of the call, duration, time of day or week.
Analyze patterns that deviate from an expected norm
• Retail industry
• Analysts estimate that 38% of retail shrink is due to dishonest employees
• Anti-terrorism
30
KDD Process: Several Key Steps
• Learning the application domain
• relevant prior knowledge and goals of application
• Creating a target data set: data selection
• Data cleaning and preprocessing: (may take 60% of effort!)
• Data reduction and transformation
• Find useful features, dimensionality/variable reduction, invariant representation
• Choosing functions of data mining
• summarization, classification, regression, association, clustering
• Choosing the mining algorithm(s)
• Data mining: search for patterns of interest
• Pattern evaluation and knowledge presentation
• visualization, transformation, removing redundant patterns, etc.
• Use of discovered knowledge
31
Are All the “Discovered” Patterns Interesting?
• Data mining may generate thousands of patterns: Not all of them are
interesting
• Suggested approach: Human-centered, query-based, focused mining
• Interestingness measures
• A pattern is interesting if it is easily understood by humans, valid on new or test
data with some degree of certainty, potentially useful, novel, or validates some
hypothesis that a user seeks to confirm
• Objective vs. subjective interestingness measures
• Objective: based on statistics and structures of patterns, e.g., support,
confidence, etc.
• Subjective: based on user’s belief in the data, e.g., unexpectedness, novelty,
actionability, etc.
32
Find All and Only Interesting Patterns?
• Find all the interesting patterns: Completeness
• Can a data mining system find all the interesting patterns? Do we need to
find all of the interesting patterns?
• Heuristic vs. exhaustive search
• Association vs. classification vs. clustering
• Search for only interesting patterns: An optimization problem
• Can a data mining system find only the interesting patterns?
• Approaches
• First generate all the patterns and then filter out the uninteresting
ones
• Generate only the interesting patterns—mining query optimization
33
Other Pattern Mining Issues
• Precise patterns vs. approximate patterns
• Association and correlation mining: possible to find sets of precise patterns
• But approximate patterns can be more compact and sufficient
• How to find high quality approximate patterns??
• Gene sequence mining: approximate patterns are inherent
• How to derive efficient approximate pattern mining algorithms??
• Constrained vs. non-constrained patterns
• Why constraint-based mining?
• What are the possible kinds of constraints? How to push constraints into
the mining process?
34
Primitives that Define a Data Mining Task
• Task-relevant data
  • Database or data warehouse name
  • Database tables or data warehouse cubes
  • Conditions for data selection
  • Relevant attributes or dimensions
  • Data grouping criteria
• Type of knowledge to be mined
• Characterization, discrimination, association, classification, prediction,
clustering, outlier analysis, other data mining tasks
• Background knowledge
• Pattern interestingness measurements
• Visualization/presentation of discovered patterns
35
Primitive 3: Background Knowledge
• A typical kind of background knowledge: Concept hierarchies
• Schema hierarchy
• E.g., street < city < province_or_state < country
• Set-grouping hierarchy
• E.g., {20-39} = young, {40-59} = middle_aged
• Operation-derived hierarchy
• email address: login-name < department < university < country
• Rule-based hierarchy
• low_profit_margin (X) <= price(X, P1) and cost (X, P2) and (P1 - P2) < $50
36
Primitive 4: Pattern Interestingness Measure
• Simplicity: e.g., (association) rule length, (decision) tree size
• Certainty: e.g., confidence, P(A|B) = #(A and B) / #(B), classification reliability or accuracy, certainty factor, rule strength, rule quality, discriminating weight, etc.
• Utility: potential usefulness, e.g., support (association), noise threshold (description)
• Novelty: not previously known, surprising (used to remove redundant rules)
37
Primitive 5: Presentation of Discovered Patterns
• Different backgrounds/usages may require different forms of representation
• E.g., rules, tables, crosstabs, pie/bar chart, etc.
• Concept hierarchy is also important
• Discovered knowledge might be more understandable when
represented at high level of abstraction
• Interactive drill up/down, pivoting, slicing and dicing provide different
perspectives to data
• Different kinds of knowledge require different representation: association,
classification, clustering, etc.
38
DMQL—A Data Mining Query Language
• Motivation
• A DMQL can provide the ability to support ad-hoc and interactive
data mining
• By providing a standardized language like SQL
• Hope to achieve an effect similar to what SQL has had on relational databases
• Foundation for system development and evolution
• Facilitate information exchange, technology transfer,
commercialization and wide acceptance
• Design
• DMQL is designed with the primitives described earlier
39
Other Data Mining Languages & Standardization Efforts
• Association rule language specifications
• MSQL (Imielinski & Virmani’99)
• MineRule (Meo Psaila and Ceri’96)
• Query flocks based on Datalog syntax (Tsur et al’98)
• OLEDB for DM (Microsoft’2000) and recently DMX (Microsoft SQLServer 2005)
• Based on OLE, OLE DB, OLE DB for OLAP, C#
• Integrating DBMS, data warehouse and data mining
• DMML (Data Mining Mark-up Language) by DMG (www.dmg.org)
• Providing a platform and process structure for effective data mining
• Emphasizing the deployment of data mining technology to solve business problems
40
Integration of Data Mining and Data Warehousing
• Data mining systems, DBMS, Data warehouse systems coupling
• No coupling, loose-coupling, semi-tight-coupling, tight-coupling
• On-line analytical mining data
• integration of mining and OLAP technologies
• Interactive mining multi-level knowledge
• Necessity of mining knowledge and patterns at different levels of
abstraction by drilling/rolling, pivoting, slicing/dicing, etc.
• Integration of multiple mining functions
• Characterized classification, first clustering and then association
41
Coupling Data Mining with DB/DW Systems
• No coupling—flat file processing, not recommended
• Loose coupling
• Fetching data from DB/DW
• Semi-tight coupling—enhanced DM performance
  • Efficient implementation of a few data mining primitives within a DB/DW system, e.g., sorting, indexing, aggregation, histogram analysis, multiway join, precomputation of some statistical functions
• Tight coupling—a uniform information processing environment
  • DM is smoothly integrated into a DB/DW system; mining queries are optimized based on mining query analysis, indexing, query processing methods, etc.
42
Architecture: Typical Data Mining System
[Figure: architecture of a typical data mining system — a Graphical User Interface on top of Pattern Evaluation and the Data Mining Engine (both consulting a Knowledge Base), sitting on a Database or Data Warehouse Server that performs data cleaning, integration, and selection over databases, a data warehouse, the World-Wide Web, and other information repositories]
Key terms of the module
1. Data mining architecture
2. Data pattern
3. Data mining query languages
4. Data mining integration
5. Data mining classification
Quiz
1. Is data mining a simple transformation of technology
developed from databases, statistics, and machine learning?
Answer:
• No. Data mining is more than a simple transformation of
technology developed from databases, statistics, and machine
learning. Instead, data mining involves an integration, rather
than a simple transformation, of techniques from multiple
disciplines such as database technology, statistics, machine
learning, high-performance computing, pattern recognition,
neural networks, data visualization, information retrieval, image
and signal processing, and spatial data analysis.
Quiz
2. Explain how the evolution of database technology led to data
mining.
Answer:
• Database technology began with the development of data
collection and database creation mechanisms that led to the
development of effective mechanisms for data management
including data storage and retrieval, and query and transaction
processing. The large number of database systems offering query
and transaction processing eventually and naturally led to the
need for data analysis and understanding. Hence, data mining
began its development out of this necessity.
Quiz
3. Describe the steps involved in data mining when viewed as a process of knowledge discovery.
Answer:
• Data cleaning, a process that removes or transforms noise and inconsistent data
• Data integration, where multiple data sources may be combined
• Data selection, where data relevant to the analysis task are retrieved from the
database
• Data transformation, where data are transformed or consolidated into forms
appropriate for mining
• Data mining, an essential process where intelligent and efficient methods are
applied in order to extract patterns
• Pattern evaluation, a process to identify the truly interesting patterns
representing knowledge based on some interestingness measures
• Knowledge presentation, using visualization and knowledge representation
techniques to present the mined knowledge to the user
Quiz
4. How is a data warehouse different from a database? How are
they similar?
Answer:
• Differences: A data warehouse is a repository of information collected from
multiple sources, over a history of time, stored under a unified schema, and
used for data analysis and decision support, whereas a database is a
collection of interrelated data that represents the current status of the stored
data. There could be multiple heterogeneous databases where the schema of
one database may not agree with the schema of another. A database system
supports ad-hoc query and on-line transaction processing.
• Similarities: Both are repositories of information, storing huge amounts of
persistent data.
Quiz
5. Define each of the following data mining functionalities:
characterization, discrimination, association, classification, and
prediction.
Answer:
• Characterization is a summarization of the general characteristics or features of a
target class of data.
• Discrimination is a comparison of the general features of target class data objects
with the general features of objects from one or a set of contrasting classes.
• Association is the discovery of association rules showing attribute-value
conditions that occur frequently together in a given set of data.
• Classification is to construct a set of models (or functions) that describe and
distinguish data class or concepts.
• Prediction is to predict some missing or unavailable, and often numerical, data
values.
Quiz
6. List the five primitives for specifying a data mining
task.
Answer:
• Task-relevant data
• Knowledge type to be mined
• Background knowledge
• Pattern interestingness measure
• Visualization of discovered patterns
Module 2: Data Preprocessing
Chapter 2, part 1: Why preprocess data, and the goals of preprocessing
Table of Contents
General objectives of the module: introduction to the concept and tasks of data preprocessing
Sub-module topics:
2.1 Why Preprocess the Data?
2.2 Descriptive Data Summarization
2.3 Data Cleaning
Key terms of the module
Data preprocessing, Descriptive data summarization, Data cleaning
Module 2: Data Preprocessing
General objectives of the module
Familiarizing the student with:
• Basic concepts of data preprocessing
• Descriptive data summarization
• Data cleaning
Chapter 2: Data Preprocessing
• Why preprocess the data?
• Descriptive data summarization
• Data cleaning
• Data integration and transformation
• Data reduction
• Discretization and concept hierarchy generation
54
Why Data Preprocessing?
• Data in the real world is dirty
  • incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
    • e.g., occupation = “ ”
  • noisy: containing errors or outliers
    • e.g., Salary = “-10”
  • inconsistent: containing discrepancies in codes or names
    • e.g., Age = “42”, Birthday = “03/07/1997”
    • e.g., was rating “1, 2, 3”, now rating “A, B, C”
    • e.g., discrepancy between duplicate records
55
Why Is Data Dirty?
• Incomplete data may come from
  • “Not applicable” data values when collected
  • Different considerations between the time when the data was collected and when it is analyzed
  • Human/hardware/software problems
• Noisy data (incorrect values) may come from
  • Faulty data collection instruments
  • Human or computer error at data entry
  • Errors in data transmission
• Inconsistent data may come from
  • Different data sources
  • Functional dependency violation (e.g., modifying some linked data)
• Duplicate records also need data cleaning
56
Why Is Data Preprocessing Important?
• No quality data, no quality mining results!
  • Quality decisions must be based on quality data
    • e.g., duplicate or missing data may cause incorrect or even misleading statistics
  • A data warehouse needs consistent integration of quality data
• Data extraction, cleaning, and transformation comprise the majority of the work of building a data warehouse
57
Multi-Dimensional Measure of Data Quality
• A well-accepted multidimensional view:
  • Accuracy
  • Completeness
  • Consistency
  • Timeliness
  • Believability
  • Value added
  • Interpretability
  • Accessibility
• Broad categories:
  • Intrinsic, contextual, representational, and accessibility
58
Major Tasks in Data Preprocessing
• Data cleaning
  • Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
• Data integration
  • Integration of multiple databases, data cubes, or files
• Data transformation
  • Normalization and aggregation
• Data reduction
  • Obtains a reduced representation in volume that produces the same or similar analytical results
• Data discretization
  • Part of data reduction, but of particular importance, especially for numerical data
59
Forms of Data Preprocessing
60
Chapter 2: Data Preprocessing
• Why preprocess the data?
• Descriptive data summarization
• Data cleaning
• Data integration and transformation
• Data reduction
• Discretization and concept hierarchy generation
61
Mining Data Descriptive Characteristics
• Motivation
  • To better understand the data: central tendency, variation, and spread
• Data dispersion characteristics
  • median, max, min, quantiles, outliers, variance, etc.
• Numerical dimensions correspond to sorted intervals
  • Data dispersion: analyzed with multiple granularities of precision
  • Boxplot or quantile analysis on sorted intervals
• Dispersion analysis on computed measures
  • Folding measures into numerical dimensions
  • Boxplot or quantile analysis on the transformed cube
62
Measuring the Central Tendency
• Mean (algebraic measure) (sample vs. population): \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \quad \mu = \frac{\sum x}{N}
  • Weighted arithmetic mean: \bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}
  • Trimmed mean: chopping extreme values
• Median: a holistic measure
  • Middle value if there is an odd number of values; otherwise the average of the middle two values
  • Estimated by interpolation (for grouped data): \text{median} = L_1 + \left(\frac{n/2 - (\sum f)_l}{f_{\text{median}}}\right) c
• Mode
  • Value that occurs most frequently in the data
  • Unimodal, bimodal, trimodal
  • Empirical formula: \text{mean} - \text{mode} \approx 3 \times (\text{mean} - \text{median})
63
Symmetric vs. Skewed Data
• Median, mean, and mode of symmetric, positively skewed, and negatively skewed data
64
Measuring the Dispersion of Data
• Quartiles, outliers and boxplots
  • Quartiles: Q1 (25th percentile), Q3 (75th percentile)
  • Inter-quartile range: IQR = Q3 − Q1
  • Five-number summary: min, Q1, M, Q3, max
  • Boxplot: the ends of the box are the quartiles, the median is marked, whiskers extend outward, and outliers are plotted individually
  • Outlier: usually, a value higher/lower than 1.5 × IQR
• Variance and standard deviation (sample: s, population: σ)
  • Variance (algebraic, scalable computation):
    s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \frac{1}{n-1}\left[\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2\right]
    \sigma^2 = \frac{1}{N}\sum_{i=1}^{n}(x_i - \mu)^2 = \frac{1}{N}\sum_{i=1}^{n} x_i^2 - \mu^2
  • Standard deviation s (or σ) is the square root of the variance s² (or σ²)
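As a quick illustration of these dispersion measures, the following Python sketch (using NumPy, with a small made-up value list plus one artificial outlier) computes the five-number summary, the 1.5 × IQR outlier rule, and the sample and population variance:

```python
import numpy as np

# Made-up data: the price list used later in the binning example, plus an artificial outlier (120)
data = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34, 120])

q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1
five_number_summary = (data.min(), q1, median, q3, data.max())

# Outliers: values more than 1.5 * IQR below Q1 or above Q3
outliers = data[(data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)]

sample_variance = data.var(ddof=1)       # divides by n - 1
population_variance = data.var(ddof=0)   # divides by N
sample_std = data.std(ddof=1)            # square root of the sample variance

print(five_number_summary, iqr)
print(outliers)                          # [120]
print(sample_variance, population_variance, sample_std)
```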
Properties of the Normal Distribution Curve
• The normal (distribution) curve
  • From μ−σ to μ+σ: contains about 68% of the measurements (μ: mean, σ: standard deviation)
  • From μ−2σ to μ+2σ: contains about 95% of them
  • From μ−3σ to μ+3σ: contains about 99.7% of them
66
Boxplot Analysis
• Five-number summary of a distribution: Minimum, Q1, M, Q3, Maximum
• Boxplot
  • Data is represented with a box
  • The ends of the box are at the first and third quartiles, i.e., the height of the box is the IQR
  • The median is marked by a line within the box
  • Whiskers: two lines outside the box extend to the Minimum and Maximum
67
Visualization of Data Dispersion: Boxplot Analysis
68
Histogram Analysis
• Graph display of basic statistical class descriptions
• Frequency histograms
  • A univariate graphical method
  • Consists of a set of rectangles that reflect the counts or frequencies of the classes present in the given data
69
Quantile Plot
• Displays all of the data (allowing the user to assess both the overall behavior and unusual occurrences)
• Plots quantile information
  • For data x_i sorted in increasing order, f_i indicates that approximately 100·f_i% of the data are below or equal to the value x_i
70
Quantile-Quantile (Q-Q) Plot
• Graphs the quantiles of one univariate distribution against the corresponding quantiles of another
• Allows the user to view whether there is a shift in going from one distribution to another
71
Scatter plot
• Provides a first look at bivariate data to see clusters of points, outliers, etc.
• Each pair of values is treated as a pair of coordinates and plotted as points in the plane
72
Loess Curve
• Adds a smooth curve to a scatter plot in order to provide better perception of the pattern of dependence
• A loess curve is fitted by setting two parameters: a smoothing parameter, and the degree of the polynomials that are fitted by the regression
73
Positively and Negatively Correlated Data
74
Not Correlated Data
75
Graphic Displays of Basic Statistical Descriptions
• Histogram (shown before)
• Boxplot (covered before)
• Quantile plot: each value x_i is paired with f_i, indicating that approximately 100·f_i% of the data are ≤ x_i
• Quantile-quantile (q-q) plot: graphs the quantiles of one univariate distribution against the corresponding quantiles of another
• Scatter plot: each pair of values is a pair of coordinates plotted as points in the plane
• Loess (local regression) curve: adds a smooth curve to a scatter plot to provide better perception of the pattern of dependence
76
Chapter 2: Data Preprocessing
• Why preprocess the data?
• Descriptive data summarization
• Data cleaning
• Data integration and transformation
• Data reduction
• Discretization and concept hierarchy generation
77
Data Cleaning
• Importance
  • “Data cleaning is one of the three biggest problems in data warehousing”—Ralph Kimball
  • “Data cleaning is the number one problem in data warehousing”—DCI survey
• Data cleaning tasks
  • Fill in missing values
  • Identify outliers and smooth out noisy data
  • Correct inconsistent data
  • Resolve redundancy caused by data integration
78
Missing Data
• Data is not always available
  • E.g., many tuples have no recorded value for several attributes, such as customer income in sales data
• Missing data may be due to
  • equipment malfunction
  • inconsistency with other recorded data, and thus deleted
  • data not entered due to misunderstanding
  • certain data may not have been considered important at the time of entry
  • no history or changes of the data were registered
• Missing data may need to be inferred
79
How to Handle Missing Data?
• Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably
• Fill in the missing value manually: tedious + infeasible?
• Fill it in automatically with
  • a global constant: e.g., “unknown”, a new class?!
  • the attribute mean
  • the attribute mean for all samples belonging to the same class: smarter
  • the most probable value: inference-based, such as a Bayesian formula or a decision tree
80
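A small pandas sketch of several of these strategies; the column names and values are made up for illustration, not taken from the text:

```python
import pandas as pd

df = pd.DataFrame({
    "age":    [23.0, 35.0, None, 45.0, None, 52.0],
    "income": [30000.0, None, 42000.0, 52000.0, 61000.0, None],
    "label":  ["low", "mid", "mid", "high", "high", None],
})

# Ignore the tuple: drop rows whose class label is missing
df = df.dropna(subset=["label"])

# Fill with a global constant (an "unknown"-style placeholder)
df["income_const"] = df["income"].fillna(-1)

# Fill with the attribute mean
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Fill with the attribute mean of samples in the same class (smarter)
df["age_by_class"] = df["age"].fillna(df.groupby("label")["age"].transform("mean"))

print(df)
```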
Noisy Data
• Noise: random error or variance in a measured variable
• Incorrect attribute values may be due to
  • faulty data collection instruments
  • data entry problems
  • data transmission problems
  • technology limitations
  • inconsistency in naming conventions
• Other data problems which require data cleaning
  • duplicate records
  • incomplete data
  • inconsistent data
81
How to Handle Noisy Data?
• Binning
  • first sort the data and partition it into (equal-frequency) bins
  • then smooth by bin means, bin medians, bin boundaries, etc.
• Regression
  • smooth by fitting the data into regression functions
• Clustering
  • detect and remove outliers
• Combined computer and human inspection
  • detect suspicious values and have a human check them (e.g., deal with possible outliers)
82
Simple Discretization Methods: Binning
• Equal-width (distance) partitioning
  • Divides the range into N intervals of equal size: uniform grid
  • If A and B are the lowest and highest values of the attribute, the width of the intervals will be W = (B − A)/N
  • The most straightforward, but outliers may dominate the presentation
  • Skewed data is not handled well
• Equal-depth (frequency) partitioning
  • Divides the range into N intervals, each containing approximately the same number of samples
  • Good data scaling
  • Managing categorical attributes can be tricky
83
Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
84
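A short Python sketch that reproduces the equal-frequency bins and the two smoothing variants shown above (standard library only; the rounding mirrors the slide's integer bin means):

```python
prices = sorted([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])
n_bins = 3
size = len(prices) // n_bins
bins = [prices[i * size:(i + 1) * size] for i in range(n_bins)]

# Smoothing by bin means: every value in a bin is replaced by the bin mean
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: every value is replaced by the closer bin boundary
by_boundaries = [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b] for b in bins]

print(bins)           # [[4, 8, 9, 15], [21, 21, 24, 25], [26, 28, 29, 34]]
print(by_means)       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_boundaries)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```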
Regression
[Figure: regression — data points (x, y) fitted by the line y = x + 1; the fitted value Y1′ replaces the observed value Y1 at X1]
Cluster Analysis
86
Data Cleaning as a Process
• Data discrepancy detection
  • Use metadata (e.g., domain, range, dependency, distribution)
  • Check field overloading
  • Check uniqueness rules, consecutive rules, and null rules
  • Use commercial tools
    • Data scrubbing: use simple domain knowledge (e.g., postal codes, spell-checking) to detect errors and make corrections
    • Data auditing: analyze data to discover rules and relationships and to detect violators (e.g., correlation and clustering to find outliers)
• Data migration and integration
  • Data migration tools: allow transformations to be specified
  • ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface
• Integration of the two processes
  • Iterative and interactive
87
Key terms of the module
1. Data preprocessing
2. Descriptive data summarization
3. Data cleaning
Module 3: Data Preprocessing
Chapter 2, part 2: Data preprocessing techniques
Table of Contents
General objectives of the module: introducing other data preprocessing techniques
Sub-module topics:
2.4 Data Integration and Transformation
2.5 Data Reduction
2.6 Data Discretization and Concept Hierarchy Generation
Key terms of the module
data integration, data transformation, data reduction, data
discretization, concept hierarchy generation
Module 3: Other Data Preprocessing Techniques
General objectives of the module
Familiarizing the student with:
A number of other data preprocessing techniques:
• Data integration
• Data transformation
• Data reduction
Chapter 2: Data Preprocessing
• Why preprocess the data?
• Data cleaning
• Data integration and transformation
• Data reduction
• Discretization and concept hierarchy generation
92
Data Integration
• Data integration:
  • Combines data from multiple sources into a coherent store
• Schema integration: e.g., A.cust-id ≡ B.cust-#
  • Integrate metadata from different sources
• Entity identification problem:
  • Identify real-world entities from multiple data sources, e.g., Bill Clinton = William Clinton
• Detecting and resolving data value conflicts
  • For the same real-world entity, attribute values from different sources are different
  • Possible reasons: different representations, different scales, e.g., metric vs. British units
93
Handling Redundancy in Data Integration
• Redundant data occur often when integrating multiple databases
  • Object identification: the same attribute or object may have different names in different databases
  • Derivable data: one attribute may be a “derived” attribute in another table, e.g., annual revenue
• Redundant attributes may be detected by correlation analysis
• Careful integration of data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality
94
Correlation Analysis (Numerical Data)
• Correlation coefficient (also called Pearson’s product-moment coefficient):
  r_{A,B} = \frac{\sum (A - \bar{A})(B - \bar{B})}{(n-1)\sigma_A \sigma_B} = \frac{\sum (AB) - n\bar{A}\bar{B}}{(n-1)\sigma_A \sigma_B}
  where n is the number of tuples, \bar{A} and \bar{B} are the respective means of A and B, \sigma_A and \sigma_B are the respective standard deviations of A and B, and \sum(AB) is the sum of the AB cross-products.
• If r_{A,B} > 0, A and B are positively correlated (A’s values increase as B’s do); the higher the value, the stronger the correlation
• r_{A,B} = 0: independent; r_{A,B} < 0: negatively correlated
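A minimal NumPy sketch of this coefficient on two made-up attributes; the hand-rolled formula (with sample standard deviations, i.e., the n−1 form) is checked against NumPy's built-in np.corrcoef:

```python
import numpy as np

A = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # made-up attribute values
B = np.array([1.5, 3.1, 6.2, 7.8, 10.1])

n = len(A)
r = ((A - A.mean()) * (B - B.mean())).sum() / ((n - 1) * A.std(ddof=1) * B.std(ddof=1))

print(round(r, 4))               # close to +1: A and B are strongly positively correlated
print(np.corrcoef(A, B)[0, 1])   # NumPy's Pearson coefficient agrees
```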
Correlation Analysis (Categorical Data)
• Χ² (chi-square) test:
  \chi^2 = \sum \frac{(\text{Observed} - \text{Expected})^2}{\text{Expected}}
• The larger the Χ² value, the more likely the variables are related
• The cells that contribute the most to the Χ² value are those whose actual count is very different from the expected count
• Correlation does not imply causality
  • The number of hospitals and the number of car thefts in a city are correlated
  • Both are causally linked to a third variable: population
Chi-Square Calculation: An Example

|                          | Play chess | Not play chess | Sum (row) |
| Like science fiction     | 250 (90)   | 200 (360)      | 450       |
| Not like science fiction | 50 (210)   | 1000 (840)     | 1050      |
| Sum (col.)               | 300        | 1200           | 1500      |

• Χ² (chi-square) calculation (numbers in parentheses are expected counts, calculated based on the data distribution in the two categories):
  \chi^2 = \frac{(250 - 90)^2}{90} + \frac{(50 - 210)^2}{210} + \frac{(200 - 360)^2}{360} + \frac{(1000 - 840)^2}{840} = 507.93
• It shows that like_science_fiction and play_chess are correlated in the group.
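The same calculation can be reproduced with SciPy's chi-square test of independence on the observed contingency table (correction=False disables the Yates continuity correction so the result matches the hand calculation):

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[250, 200],     # like science fiction: play chess / not
                     [50, 1000]])    # not like science fiction: play chess / not

chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)

print(expected)        # [[ 90. 360.] [210. 840.]] — the expected counts in parentheses above
print(round(chi2, 2))  # 507.93
print(p_value)         # essentially 0, so the two attributes are strongly correlated
```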
Data Transformation
• Smoothing: remove noise from data
• Aggregation: summarization, data cube construction
• Generalization: concept hierarchy climbing
• Normalization: scale values to fall within a small, specified range
  • min-max normalization
  • z-score normalization
  • normalization by decimal scaling
• Attribute/feature construction
  • New attributes constructed from the given ones
98
Data Transformation: Normalization
• Min-max normalization, to [new_min_A, new_max_A]:
  v' = \frac{v - min_A}{max_A - min_A}(new\_max_A - new\_min_A) + new\_min_A
  • Ex. Let income range from $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to \frac{73{,}600 - 12{,}000}{98{,}000 - 12{,}000}(1.0 - 0) + 0 = 0.716
• Z-score normalization (μ: mean, σ: standard deviation):
  v' = \frac{v - \mu_A}{\sigma_A}
  • Ex. Let μ = 54,000 and σ = 16,000. Then \frac{73{,}600 - 54{,}000}{16{,}000} = 1.225
• Normalization by decimal scaling:
  v' = \frac{v}{10^j}, where j is the smallest integer such that \max(|v'|) < 1
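A NumPy sketch of the three normalization methods, using the income figures from the example above (the z-score uses the slide's given μ and σ rather than statistics computed from this tiny array):

```python
import numpy as np

income = np.array([12000.0, 54000.0, 73600.0, 98000.0])

# Min-max normalization to [new_min, new_max] = [0.0, 1.0]
minmax = (income - income.min()) / (income.max() - income.min()) * (1.0 - 0.0) + 0.0

# Z-score normalization with the mean and standard deviation from the example
mu, sigma = 54000.0, 16000.0
zscore = (income - mu) / sigma

# Decimal scaling: divide by 10**j for the smallest j such that max(|v'|) < 1
j = int(np.ceil(np.log10(np.abs(income).max() + 1)))
decimal_scaled = income / 10 ** j

print(round(minmax[2], 3))   # 0.716 — $73,600 mapped into [0, 1]
print(round(zscore[2], 3))   # 1.225
print(decimal_scaled)        # [0.12 0.54 0.736 0.98]
```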
Chapter 2: Data Preprocessing
• Why preprocess the data?
• Data cleaning
• Data integration and transformation
• Data reduction
• Discretization and concept hierarchy generation
100
Data Reduction Strategies
• Why data reduction?
  • A database/data warehouse may store terabytes of data
  • Complex data analysis/mining may take a very long time to run on the complete data set
• Data reduction
  • Obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
• Data reduction strategies
  • Data cube aggregation
  • Dimensionality reduction — e.g., remove unimportant attributes
  • Data compression
  • Numerosity reduction — e.g., fit data into models
  • Discretization and concept hierarchy generation
101
Data Cube Aggregation
• The lowest level of a data cube (base cuboid)
  • The aggregated data for an individual entity of interest
  • E.g., a customer in a phone-calling data warehouse
• Multiple levels of aggregation in data cubes
  • Further reduce the size of the data to deal with
• Reference appropriate levels
  • Use the smallest representation which is sufficient to solve the task
• Queries regarding aggregated information should be answered using the data cube, when possible
102
Attribute Subset Selection
• Feature selection (i.e., attribute subset selection):
  • Select a minimum set of features such that the probability distribution of the different classes given the values for those features is as close as possible to the original distribution given the values of all features
  • Reduces the number of patterns in the result, making them easier to understand
• Heuristic methods (due to the exponential number of choices):
  • Step-wise forward selection
  • Step-wise backward elimination
  • Combining forward selection and backward elimination
  • Decision-tree induction
103
Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}
[Figure: a decision tree induced on the data, splitting on A4 and then on A1 and A6, with leaves labeled Class 1 and Class 2]
Reduced attribute set: {A1, A4, A6}
104
Heuristic Feature Selection Methods
• There are 2^d possible sub-features of d features
• Several heuristic feature selection methods:
  • Best single features under the feature independence assumption: choose by significance tests
  • Best step-wise feature selection:
    • The best single feature is picked first
    • Then the next best feature conditioned on the first, ...
  • Step-wise feature elimination:
    • Repeatedly eliminate the worst feature
  • Best combined feature selection and elimination
  • Optimal branch and bound:
    • Use feature elimination and backtracking
105
Data Compression
• String compression
  • There are extensive theories and well-tuned algorithms
  • Typically lossless
  • But only limited manipulation is possible without expansion
• Audio/video compression
  • Typically lossy compression, with progressive refinement
  • Sometimes small fragments of the signal can be reconstructed without reconstructing the whole
• Time sequences are not audio
  • Typically short, and vary slowly with time
106
Data Compression
[Figure: lossless compression maps the original data to compressed data and back exactly, while lossy compression reconstructs only an approximation of the original data]
Dimensionality Reduction: Wavelet Transformation
• Discrete wavelet transform (DWT): linear signal processing, multi-resolution analysis
• Compressed approximation: store only a small fraction of the strongest wavelet coefficients
• Similar to the discrete Fourier transform (DFT), but
  • better lossy compression, localized in space
• Example wavelet families: Haar-2, Daubechies-4
• Method:
  • The length, L, must be an integer power of 2 (pad with 0s when necessary)
  • Each transform has 2 functions: smoothing, difference
  • Applied to pairs of data, resulting in two sets of data of length L/2
  • The two functions are applied recursively, until the desired length is reached
108
Dimensionality Reduction: Principal Component Analysis (PCA)
• Given N data vectors from n dimensions, find k ≤ n orthogonal vectors (principal components) that can best be used to represent the data
• Steps
  • Normalize the input data: each attribute falls within the same range
  • Compute k orthonormal (unit) vectors, i.e., principal components
  • Each input data vector is a linear combination of the k principal component vectors
  • The principal components are sorted in order of decreasing “significance” or strength
  • Since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance (using the strongest principal components, it is possible to reconstruct a good approximation of the original data)
• Works for numeric data only
• Used when the number of dimensions is large
109
Principal Component Analysis
[Figure: PCA on 2-D data — the principal components Y1 and Y2 form new orthogonal axes for data originally expressed in X1 and X2]
110
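A compact NumPy-only sketch of these steps on random, made-up data; the principal components are taken from the SVD of the mean-centered data, which is equivalent to the eigenvectors of its covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # 100 made-up data vectors in n = 5 dimensions

# Step 1: normalize the input data (zero mean per attribute)
mean = X.mean(axis=0)
Xc = X - mean

# Step 2: principal components, sorted by decreasing strength (singular value)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                    # keep only the k strongest components
X_reduced = Xc @ Vt[:k].T                # each vector expressed with k components
X_approx = X_reduced @ Vt[:k] + mean     # approximate reconstruction from them

explained = (S ** 2) / (S ** 2).sum()    # fraction of variance each component carries
print(explained[:k], X_reduced.shape, X_approx.shape)
```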
Numerosity Reduction
• Reduce data volume by choosing alternative, smaller forms of data representation
• Parametric methods
  • Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
  • Example: log-linear models—obtain the value at a point in m-D space as a product over appropriate marginal subspaces
• Non-parametric methods
  • Do not assume models
  • Major families: histograms, clustering, sampling
111
Data Reduction Method (1): Regression and Log-Linear Models
• Linear regression: data are modeled to fit a straight line
  • Often uses the least-squares method to fit the line
• Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
• Log-linear model: approximates discrete multidimensional probability distributions
112
Regression Analysis and Log-Linear Models
• Linear regression: Y = wX + b
  • The two regression coefficients, w and b, specify the line and are to be estimated using the data at hand
  • Using the least-squares criterion on the known values of Y1, Y2, …, X1, X2, …
• Multiple regression: Y = b0 + b1·X1 + b2·X2
  • Many nonlinear functions can be transformed into the above
• Log-linear models:
  • The multi-way table of joint probabilities is approximated by a product of lower-order tables
  • Probability: p(a, b, c, d) ≈ α_ab · β_ac · χ_ad · δ_bcd
113
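A small NumPy sketch of parametric numerosity reduction with linear regression: only the two fitted coefficients w and b are kept, not the data (the X/Y values are made up, roughly following y = 2x + 1):

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([3.1, 4.9, 7.2, 9.1, 10.8])

# Least-squares estimates of the regression coefficients in Y = w * X + b
w = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
b = Y.mean() - w * X.mean()

print(round(w, 3), round(b, 3))   # slope and intercept, close to 2 and 1
print(w * 6.0 + b)                # later queries use the stored model, not the raw data
```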
Data Reduction Method (2): Histograms
• Divide data into buckets and store the average (sum) for each bucket
• Partitioning rules:
  • Equal-width: equal bucket range
  • Equal-frequency (or equal-depth): each bucket holds about the same number of values
  • V-optimal: the histogram with the least variance (weighted sum of the original values that each bucket represents)
  • MaxDiff: set bucket boundaries between the pairs of values having the β−1 largest differences
[Figure: an equal-width histogram over values from 10,000 to 90,000; the y-axis counts range from 0 to 40]
114
Data Reduction Method (3): Clustering
• Partition the data set into clusters based on similarity, and store only the cluster representation (e.g., centroid and diameter)
• Can be very effective if the data is clustered, but not if the data is “smeared”
• Can use hierarchical clustering and be stored in multi-dimensional index tree structures
• There are many choices of clustering definitions and clustering algorithms
115
Data Reduction Method (4): Sampling
• Sampling: obtaining a small sample s to represent the whole data set N
• Allows a mining algorithm to run in complexity that is potentially sub-linear in the size of the data
• Choose a representative subset of the data
  • Simple random sampling may have very poor performance in the presence of skew
• Develop adaptive sampling methods
  • Stratified sampling:
    • Approximate the percentage of each class (or subpopulation of interest) in the overall database
    • Used in conjunction with skewed data
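A pandas sketch contrasting a simple random sample with a stratified sample on skewed, made-up class data (assumes pandas ≥ 1.1 for GroupBy.sample):

```python
import pandas as pd

# Made-up skewed data: the "high" class is rare
df = pd.DataFrame({
    "income_class": ["low"] * 60 + ["mid"] * 30 + ["high"] * 10,
    "value": range(100),
})

# Simple random sample (without replacement): may badly misrepresent the rare class
srs = df.sample(frac=0.1, random_state=0)

# Stratified sample: draw roughly 10% from each class, preserving class proportions
stratified = df.groupby("income_class", group_keys=False).sample(frac=0.1, random_state=0)

print(srs["income_class"].value_counts())
print(stratified["income_class"].value_counts())   # about 6 low, 3 mid, 1 high
```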
116
Sampling: with or without Replacement
[Figure: drawing a simple random sample from the raw data, with or without replacement]
Sampling: Cluster or Stratified Sampling
[Figure: the raw data grouped into clusters/strata, with a sample drawn from each group]
Chapter 2: Data Preprocessing
• Why preprocess the data?
• Data cleaning
• Data integration and transformation
• Data reduction
• Discretization and concept hierarchy generation
119
Discretization
• Three types of attributes:
  • Nominal — values from an unordered set, e.g., color, profession
  • Ordinal — values from an ordered set, e.g., military or academic rank
  • Continuous — real numbers, e.g., integer or real values
• Discretization:
  • Divide the range of a continuous attribute into intervals
  • Some classification algorithms only accept categorical attributes
  • Reduce data size by discretization
  • Prepare for further analysis
120
Discretization and Concept Hierarchy
• Discretization
  • Reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals
  • Interval labels can then be used to replace actual data values
  • Supervised vs. unsupervised
  • Split (top-down) vs. merge (bottom-up)
  • Discretization can be performed recursively on an attribute
• Concept hierarchy formation
  • Recursively reduce the data by collecting and replacing low-level concepts (such as numeric values for age) with higher-level concepts (such as young, middle-aged, or senior)
121
Discretization and Concept Hierarchy Generation for Numeric Data
• Typical methods (all can be applied recursively):
  • Binning (covered above): top-down split, unsupervised
  • Histogram analysis (covered above): top-down split, unsupervised
  • Clustering analysis (covered above): either top-down split or bottom-up merge, unsupervised
  • Entropy-based discretization: supervised, top-down split
  • Interval merging by χ² analysis: unsupervised, bottom-up merge
  • Segmentation by natural partitioning: top-down split, unsupervised
122
Entropy-Based Discretization
• Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the information after partitioning is
  I(S, T) = \frac{|S_1|}{|S|}\,\mathrm{Entropy}(S_1) + \frac{|S_2|}{|S|}\,\mathrm{Entropy}(S_2)
• Entropy is calculated based on the class distribution of the samples in the set. Given m classes, the entropy of S1 is
  \mathrm{Entropy}(S_1) = -\sum_{i=1}^{m} p_i \log_2(p_i)
  where p_i is the probability of class i in S1
• The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization
• The process is recursively applied to the partitions obtained, until some stopping criterion is met
• Such a boundary may reduce data size and improve classification accuracy
123
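A minimal Python sketch of one step of this method: scan all candidate boundaries T and keep the one that minimizes I(S, T). The age/class values are made up for illustration, and a full implementation would recurse on the two resulting intervals:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_boundary(values, labels):
    """Return the boundary T minimizing I(S, T) = |S1|/|S| * H(S1) + |S2|/|S| * H(S2)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_t, best_info = None, float("inf")
    for i in range(1, n):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2        # midpoint between adjacent values
        left = [lbl for v, lbl in pairs if v <= t]
        right = [lbl for v, lbl in pairs if v > t]
        info = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        if info < best_info:
            best_t, best_info = t, info
    return best_t, best_info

ages = [23, 25, 30, 35, 40, 45, 52, 60]
buys = ["yes", "yes", "yes", "yes", "no", "no", "no", "no"]
print(best_boundary(ages, buys))   # (37.5, 0.0): this boundary cleanly separates the classes
```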
Interval Merge by χ² Analysis
• Merging-based (bottom-up) vs. splitting-based methods
• Merge: find the best neighboring intervals and merge them to form larger intervals, recursively
• ChiMerge
  • Initially, each distinct value of a numerical attribute A is considered to be one interval
  • χ² tests are performed for every pair of adjacent intervals
  • Adjacent intervals with the least χ² values are merged together, since low χ² values for a pair indicate similar class distributions
  • This merge process proceeds recursively until a predefined stopping criterion is met (such as a significance level, max-interval, max inconsistency, etc.)
124
Segmentation by Natural Partitioning
• A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, “natural” intervals:
  • If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals
  • If it covers 2, 4, or 8 distinct values at the most significant digit, partition the range into 4 intervals
  • If it covers 1, 5, or 10 distinct values at the most significant digit, partition the range into 5 intervals
125
Example of the 3-4-5 Rule
[Figure: applying the 3-4-5 rule to profit data with Min = −$351, Low (5th percentile) = −$159, High (95th percentile) = $1,838, and Max = $4,700. Step 1 finds these statistics; Step 2, with msd = 1,000, rounds Low and High to −$1,000 and $2,000; Step 3 splits the range (−$1,000 … $2,000) into 3 intervals: (−$1,000 … 0], (0 … $1,000], ($1,000 … $2,000]. Because Min and Max fall outside this range, the first interval is adjusted to (−$400 … 0] and a new interval ($2,000 … $5,000] is added. Step 4 recursively partitions each interval: (−$400 … 0] into 4 sub-intervals of $100 each, (0 … $1,000] and ($1,000 … $2,000] into 5 sub-intervals of $200 each, and ($2,000 … $5,000] into 3 sub-intervals of $1,000 each.]
Concept Hierarchy Generation for Categorical Data
• Specification of a partial/total ordering of attributes explicitly at the schema level by users or experts
  • street < city < state < country
• Specification of a hierarchy for a set of values by explicit data grouping
  • {Urbana, Champaign, Chicago} < Illinois
• Specification of only a partial set of attributes
  • E.g., only street < city, not others
• Automatic generation of hierarchies (or attribute levels) by analysis of the number of distinct values
  • E.g., for a set of attributes: {street, city, state, country}
127
Automatic Concept Hierarchy Generation
• Some hierarchies can be automatically generated based on analysis of the number of distinct values per attribute in the data set
  • The attribute with the most distinct values is placed at the lowest level of the hierarchy
  • Exceptions, e.g., weekday, month, quarter, year
[Figure: generated hierarchy — country (15 distinct values) above province_or_state (365 distinct values), above city (3,567 distinct values), above street (674,339 distinct values)]
128
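A toy Python sketch of this heuristic, ordering attributes by their (here hard-coded, illustrative) distinct-value counts so that the attribute with the most distinct values ends up at the lowest level:

```python
# Illustrative distinct-value counts taken from the figure above
distinct_counts = {
    "country": 15,
    "province_or_state": 365,
    "city": 3567,
    "street": 674339,
}

# Fewest distinct values -> highest level; most distinct values -> lowest level
levels = sorted(distinct_counts, key=distinct_counts.get)   # top of hierarchy first
print(" < ".join(reversed(levels)))
# street < city < province_or_state < country
```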
Key terms of the module
1. Data integration
2. Data transformation
3. Data reduction
4. Data discretization
5. Concept hierarchy generation
‫صفحه قبل‬
‫صفحه بعد‬
‫آزمون مهارت‬
1. Data quality can be assessed in terms of accuracy,
completeness, and consistency. Propose other dimensions of
data quality.
Answer:
Timeliness: Data must be available within an acceptable time frame. •
Believability: Data values must be plausible and trusted by the users. •
Value added: Data must provide additional value in terms of •
information.
Interpretability: Data must not be too complex to understand. •
Accessibility: Data must be easily accessible. •
‫صفحه قبل‬
‫صفحه بعد‬
130
‫آزمون مهارت‬
2. How is a quantile-quantile plot different from a quantile plot?
Answer:
A quantile plot displays quantile information for all the data, •
where the values measured for the independent variable are
plotted against their corresponding quantile.
A quantile-quantile plot, however, graphs the quantiles of one •
univariate distribution against the corresponding quantiles of
another univariate distribution.
‫صفحه قبل‬
‫صفحه بعد‬
131
‫آزمون مهارت‬
3. What are the various methods for handling tuples with missing
values for some attributes?
Answer:
Ignoring the tuple •
Manually filling in the missing value •
Using a global constant to fill in the missing value •
Using the attribute mean for quantitative (numeric) values or •
attribute mode for categorical (nominal) values
Using the attribute mean for quantitative (numeric) values or •
attribute mode for categorical (nominal) values, for all samples
belonging to the same class as the given tuple
Using the most probable value to fill in the missing value •
‫صفحه قبل‬
‫صفحه بعد‬
132
‫آزمون مهارت‬
4. Discuss issues to consider during data integration.
Answer:
Schema integration: The metadata from the different data •
sources must be integrated in order to match up equivalent real-world entities.
Handling redundant data: Derived attributes may be redundant, •
and inconsistent attribute naming may also lead to
redundancies. Also, duplications at the tuple level may occur and
thus need to be detected and resolved.
Detection and resolution of data value conflicts: Differences in •
representation, scaling, or encoding may cause the same real-world entity's attribute values to differ in the data sources being
integrated.
‫صفحه قبل‬
‫صفحه بعد‬
133
‫آزمون مهارت‬
5. Use the two methods below to normalize the following group
of data:
200, 300, 400, 600, 1000
(a) min-max normalization by setting min = 0 and max = 1
(b) z-score normalization
Answer:
(a) [0,1] normalized: 0, 0.125, 0.25, 0.5, 1
(b) z-score: -1.06, -0.71, -0.35, 0.35, 1.77
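For reference, a short Python check of both normalizations; it uses the population standard deviation, which is what reproduces the z-scores above, and is only a verification sketch, not part of the original exercise.

import math

data = [200, 300, 400, 600, 1000]

# (a) min-max normalization onto [0, 1]
lo, hi = min(data), max(data)
minmax = [(v - lo) / (hi - lo) for v in data]

# (b) z-score normalization with the population standard deviation
mean = sum(data) / len(data)                                      # 500
std = math.sqrt(sum((v - mean) ** 2 for v in data) / len(data))   # ~282.8
zscores = [(v - mean) / std for v in data]

print([round(v, 3) for v in minmax])    # [0.0, 0.125, 0.25, 0.5, 1.0]
print([round(v, 2) for v in zscores])   # [-1.06, -0.71, -0.35, 0.35, 1.77]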
‫صفحه قبل‬
‫صفحه بعد‬
134
:‫مهارت چهارم‬
Data Warehousing and OLAP Technology
‫صفحه قبل‬
‫صفحه بعد‬
‫فهرست مطالب‬
OLAP ‫ معرفي انبارداده ها و فناوري‬:‫هدف هاي کلي مهارت‬
:‫عناوين زيرمهارت ها‬
1. What is a data warehouse?
2. A multi-dimensional data model
3. Data warehouse architecture
4. Data warehouse implementation
5. From data warehousing to data mining
‫واژگان کليدي مهارت‬
data warehouse, data warehouse architecture, data warehouse
implementation, on-line analytical processing (OLAP), data cube, roll-up,
drill-down, slicing, dicing, OLAP data indexing, OLAP query
processing, on-line-analytical mining
‫صفحه قبل‬
‫صفحه بعد‬
‫ انبار داده ها و فناوري‬-4 ‫مهارت‬
OLAP
‫هدف هاي کلي مهارت‬
: ‫آشنايي دانشجو با‬
A definition of the data warehouse
Why data warehousing?
A multidimensional data model
OLAP , and OLAP operations
Data warehouse architecture
Data warehouse implementation
On-line-analytical mining
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehousing and OLAP Technology:
An Overview
What is a data warehouse? •
A multi-dimensional data model •
Data warehouse architecture •
Data warehouse implementation •
From data warehousing to data mining •
138
‫صفحه قبل‬
‫صفحه بعد‬
What is “Data Warehouse” ?
Defined in many different ways, but not rigorously. •
A decision support database that is maintained separately from the •
organization’s operational database
Supports information processing by providing a solid platform of •
consolidated, historical data for analysis.
“A data warehouse is a subject-oriented, integrated, time-variant, and •
nonvolatile collection of data in support of management’s decision-making
process.”—W. H. Inmon
Data warehousing: •
The process of constructing and using data warehouses •
139
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse—Subject-Oriented
Organized around major subjects, such as customer, product, •
sales
Focusing on the modeling and analysis of data for decision •
makers, not on daily operations or transaction processing
Provides a simple and concise view around particular subject •
issues by excluding data that are not useful in the decision
support process
140
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse—Integrated
Constructed by integrating multiple, heterogeneous data •
sources
relational databases, flat files, on-line transaction records •
Data cleaning and data integration techniques are applied. •
Ensure consistency in naming conventions, encoding •
structures, attribute measures, etc. among different data
sources
E.g., Hotel price: currency, tax, breakfast covered, etc. •
When data is moved to the warehouse, it is converted. •
141
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse—Time Variant
The time horizon for the data warehouse is significantly longer •
than that of operational systems
Operational database: current value data •
Data warehouse data: provide information from a historical •
perspective (e.g., past 5-10 years)
Every key structure in the data warehouse •
Contains an element of time, explicitly or implicitly •
But the key of operational data may or may not contain •
“time element”
142
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse—Nonvolatile
A physically separated store of data transformed from the •
operational environment
Operational update of data does not occur in the data •
warehouse environment
Does not require transaction processing, recovery, and •
concurrency control mechanisms
Requires only two operations in data accessing: •
initial loading of data and access of data •
143
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse vs. Heterogeneous
DBMS
Traditional heterogeneous DB integration: A query driven approach •
Build wrappers/mediators on top of heterogeneous databases •
When a query is posed to a client site, a meta-dictionary is used to •
translate the query into queries appropriate for individual
heterogeneous sites involved, and the results are integrated into a
global answer set
Complex information filtering, compete for resources •
Data warehouse: update-driven, high performance •
Information from heterogeneous sources is integrated in advance and •
stored in warehouses for direct query and analysis
144
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse vs. Operational
DBMS
OLTP (on-line transaction
processing) •
Major task of traditional relational DBMS •
Day-to-day operations: purchasing, inventory, banking, manufacturing, •
payroll, registration, accounting, etc.
OLAP (on-line analytical processing) •
Major task of data warehouse system •
Data analysis and decision making •
Distinct features (OLTP vs. OLAP): •
User and system orientation: customer vs. market •
Data contents: current, detailed vs. historical, consolidated •
Database design: ER + application vs. star + subject •
View: current, local vs. evolutionary, integrated •
Access patterns: update vs. read-only but complex queries •
145
‫صفحه قبل‬
‫صفحه بعد‬
OLTP vs. OLAP

|                    | OLTP                                                      | OLAP                                                                |
|--------------------|-----------------------------------------------------------|---------------------------------------------------------------------|
| users              | clerk, IT professional                                    | knowledge worker                                                    |
| function           | day to day operations                                     | decision support                                                    |
| DB design          | application-oriented                                      | subject-oriented                                                    |
| data               | current, up-to-date, detailed, flat relational, isolated  | historical, summarized, multidimensional, integrated, consolidated |
| usage              | repetitive                                                | ad-hoc                                                              |
| access             | read/write, index/hash on prim. key                       | lots of scans                                                       |
| unit of work       | short, simple transaction                                 | complex query                                                       |
| # records accessed | tens                                                      | millions                                                            |
| # users            | thousands                                                 | hundreds                                                            |
| DB size            | 100MB-GB                                                  | 100GB-TB                                                            |
| metric             | transaction throughput                                    | query throughput, response time                                     |

146
‫صفحه قبل‬
‫صفحه بعد‬
Why Separate Data Warehouse?
High performance for both systems •
DBMS— tuned for OLTP: access methods, indexing, concurrency control, •
recovery
Warehouse—tuned for OLAP: complex OLAP queries, multidimensional •
view, consolidation
Different functions and different data: •
missing data: Decision Support requires historical data which operational •
DBs do not typically maintain
data consolidation: DS requires consolidation (aggregation, •
summarization) of data from heterogeneous sources
data quality: different sources typically use inconsistent data •
representations, codes and formats which have to be reconciled
Note: There are more and more systems which perform OLAP analysis •
directly on relational databases
147
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehousing and OLAP
Technology: An Overview
What is a data warehouse? •
A multi-dimensional data model •
Data warehouse architecture •
Data warehouse implementation •
From data warehousing to data mining •
148
‫صفحه قبل‬
‫صفحه بعد‬
From Tables and Spreadsheets to Data
Cubes
A data warehouse is based on a multidimensional data model which views •
data in the form of a data cube
A data cube, such as sales, allows data to be modeled and viewed in •
multiple dimensions
Dimension tables, such as item (item_name, brand, type), or time(day, •
week, month, quarter, year)
Fact table contains measures (such as dollars_sold) and keys to each of •
the related dimension tables
In data warehousing literature, an n-D base cube is called a base cuboid. The •
topmost 0-D cuboid, which holds the highest level of summarization, is
called the apex cuboid. The lattice of cuboids forms a data cube.
149
‫صفحه قبل‬
‫صفحه بعد‬
Cube: A Lattice of Cuboids
0-D (apex) cuboid: all
1-D cuboids: time, item, location, supplier
2-D cuboids: (time, item), (time, location), (time, supplier), (item, location), (item, supplier), (location, supplier)
3-D cuboids: (time, item, location), (time, item, supplier), (time, location, supplier), (item, location, supplier)
4-D (base) cuboid: (time, item, location, supplier)
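The lattice can also be enumerated programmatically; a small sketch using the four dimensions named on this slide (every subset of the dimensions is one cuboid):

from itertools import combinations

dimensions = ("time", "item", "location", "supplier")

# The empty subset is the apex cuboid; the full set is the base cuboid.
for k in range(len(dimensions) + 1):
    for cuboid in combinations(dimensions, k):
        label = ", ".join(cuboid) if cuboid else "all (apex)"
        print(f"{k}-D cuboid: {label}")

# 2^4 = 16 cuboids in total for a 4-dimensional cube with no concept hierarchies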
150
‫صفحه قبل‬
‫صفحه بعد‬
Conceptual Modeling of Data
Warehouses
Modeling data warehouses: dimensions & measures •
Star schema: A fact table in the middle connected to a set of •
dimension tables
Snowflake schema: A refinement of star schema where •
some dimensional hierarchy is normalized into a set of
smaller dimension tables, forming a shape similar to
snowflake
Fact constellations: Multiple fact tables share dimension •
tables, viewed as a collection of stars, therefore called galaxy
schema or fact constellation
151
‫صفحه قبل‬
‫صفحه بعد‬
Example of Star Schema
Sales Fact Table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales
Dimension table time: time_key, day, day_of_the_week, month, quarter, year
Dimension table item: item_key, item_name, brand, type, supplier_type
Dimension table branch: branch_key, branch_name, branch_type
Dimension table location: location_key, street, city, state_or_province, country
152
‫صفحه قبل‬
‫صفحه بعد‬
Example of Snowflake Schema
Sales Fact Table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales
Dimension table time: time_key, day, day_of_the_week, month, quarter, year
Dimension table item: item_key, item_name, brand, type, supplier_key (normalized into supplier: supplier_key, supplier_type)
Dimension table branch: branch_key, branch_name, branch_type
Dimension table location: location_key, street, city_key (normalized into city: city_key, city, state_or_province, country)
153
‫صفحه قبل‬
‫صفحه بعد‬
Example of Fact Constellation
Sales Fact Table: time_key, item_key, branch_key, location_key; measures: units_sold, dollars_sold, avg_sales
Shipping Fact Table: time_key, item_key, shipper_key, from_location, to_location; measures: dollars_cost, units_shipped
Shared dimension tables:
time: time_key, day, day_of_the_week, month, quarter, year
item: item_key, item_name, brand, type, supplier_type
branch: branch_key, branch_name, branch_type
location: location_key, street, city, province_or_state, country
shipper: shipper_key, shipper_name, location_key, shipper_type
154
‫صفحه قبل‬
‫صفحه بعد‬
Cube Definition Syntax (BNF) in DMQL
Cube Definition (Fact Table) •
define cube <cube_name> [<dimension_list>]:
<measure_list>
Dimension Definition (Dimension Table) •
define dimension <dimension_name> as
(<attribute_or_subdimension_list>)
Special Case (Shared Dimension Tables) •
First time as “cube definition” •
define dimension <dimension_name> as •
<dimension_name_first_time> in cube
<cube_name_first_time>
155
‫صفحه قبل‬
‫صفحه بعد‬
Defining Star Schema in DMQL
define cube sales_star [time, item, branch, location]:
dollars_sold = sum(sales_in_dollars), avg_sales =
avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month,
quarter, year)
define dimension item as (item_key, item_name, brand, type,
supplier_type)
define dimension branch as (branch_key, branch_name,
branch_type)
define dimension location as (location_key, street, city,
province_or_state, country)
156
‫صفحه قبل‬
‫صفحه بعد‬
Defining Snowflake Schema in DMQL
define cube sales_snowflake [time, item, branch, location]:
dollars_sold = sum(sales_in_dollars), avg_sales =
avg(sales_in_dollars), units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type,
supplier(supplier_key, supplier_type))
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city(city_key,
province_or_state, country))
157
‫صفحه قبل‬
‫صفحه بعد‬
Defining Fact Constellation in DMQL
define cube sales [time, item, branch, location]:
dollars_sold = sum(sales_in_dollars), avg_sales = avg(sales_in_dollars),
units_sold = count(*)
define dimension time as (time_key, day, day_of_week, month, quarter, year)
define dimension item as (item_key, item_name, brand, type, supplier_type)
define dimension branch as (branch_key, branch_name, branch_type)
define dimension location as (location_key, street, city, province_or_state, country)
define cube shipping [time, item, shipper, from_location, to_location]:
dollar_cost = sum(cost_in_dollars), unit_shipped = count(*)
define dimension time as time in cube sales
define dimension item as item in cube sales
define dimension shipper as (shipper_key, shipper_name, location as location in cube
sales, shipper_type)
define dimension from_location as location in cube sales
define dimension to_location as location in cube sales
158
‫صفحه قبل‬
‫صفحه بعد‬
Measures of Data Cube: Three Categories
Distributive: if the result derived by applying the function to n •
aggregate values is the same as that derived by applying the
function on all the data without partitioning
E.g., count(), sum(), min(), max() •
Algebraic: if it can be computed by an algebraic function with M •
arguments (where M is a bounded integer), each of which is
obtained by applying a distributive aggregate function
E.g., avg(), min_N(), standard_deviation() •
Holistic: if there is no constant bound on the storage size needed •
to describe a subaggregate.
E.g., median(), mode(), rank() •
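As a small illustration of the distinction, avg() is algebraic because it can be reconstructed exactly from two distributive aggregates, sum() and count(), computed per partition; the partitions below are hypothetical.

partitions = [[3, 5, 7], [10, 2], [6, 6, 6, 6]]

# Distributive: summing the partial sums/counts gives the global sum/count
total_sum = sum(sum(p) for p in partitions)
total_cnt = sum(len(p) for p in partitions)

# Algebraic: avg() is derived from the bounded set {sum, count} per partition
print(total_sum / total_cnt)   # identical to the mean of the concatenated data

# Holistic measures such as median() cannot be combined this way: knowing only
# each partition's median is not enough to recover the global median.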
159
‫صفحه قبل‬
‫صفحه بعد‬
A Concept Hierarchy: Dimension (location)
Levels (bottom to top): office < city < country < region < all
Example paths from the figure:
all → Europe → Germany → Frankfurt → ...
all → Europe → Spain → ...
all → North_America → Canada → Vancouver → L. Chan
all → North_America → Canada → Toronto → M. Wind
all → North_America → Mexico → ...
160
View of Warehouses and Hierarchies
Specification of hierarchies
Schema hierarchy •
day < {month < quarter;
week} < year
Set_grouping hierarchy •
{1..10} < inexpensive
161
‫صفحه قبل‬
‫صفحه بعد‬
Multidimensional Data
Sales volume as a function of product, month, and •
region
Dimensions: Product, Location, Time
Hierarchical summarization paths:
Product: Industry → Category → Product
Location: Region → Country → City → Office
Time: Year → Quarter → Month → Day (with Week → Day as an alternative path)
162
‫صفحه قبل‬
‫صفحه بعد‬
A Sample Data Cube
A 3-D sales cube with dimensions Date (1Qtr, 2Qtr, 3Qtr, 4Qtr), Product (TV, PC, VCR), and Country (U.S.A, Canada, Mexico), plus sum cells along each dimension.
Example aggregate cell: total annual sales of TV in U.S.A.
163
Cuboids Corresponding to the Cube
0-D (apex) cuboid: all
1-D cuboids: product, date, country
2-D cuboids: (product, date), (product, country), (date, country)
3-D (base) cuboid: (product, date, country)
164
‫صفحه قبل‬
‫صفحه بعد‬
Browsing a Data Cube
Visualization •
OLAP capabilities •
Interactive manipulation •
165
‫صفحه قبل‬
‫صفحه بعد‬
Typical OLAP Operations
Roll up (drill-up): summarize data •
by climbing up hierarchy or by dimension reduction •
Drill down (roll down): reverse of roll-up •
from higher level summary to lower level summary or detailed •
data, or introducing new dimensions
Slice and dice: project and select •
Pivot (rotate): •
reorient the cube, visualization, 3D to series of 2D planes •
Other operations •
drill across: involving (across) more than one fact table •
drill through: through the bottom level of the cube to its back- •
end relational tables (using SQL)
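A rough pandas sketch of two of these operations on a tiny, made-up sales table: a roll-up that climbs the location hierarchy from city to country, and a slice that fixes one quarter (column names and values are assumptions for the example).

import pandas as pd

sales = pd.DataFrame({
    "country": ["Canada", "Canada", "USA", "USA"],
    "city":    ["Toronto", "Vancouver", "Chicago", "New York"],
    "quarter": ["Q1", "Q1", "Q1", "Q2"],
    "dollars_sold": [100, 150, 200, 250],
})

# Roll-up: aggregate away the city level, summarizing to country per quarter
rollup = sales.groupby(["country", "quarter"], as_index=False)["dollars_sold"].sum()

# Slice: select the sub-cube where quarter == "Q1"
q1_slice = sales[sales["quarter"] == "Q1"]

print(rollup)
print(q1_slice)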
166
‫صفحه قبل‬
‫صفحه بعد‬
Fig. 3.10 Typical OLAP
Operations
167
‫صفحه قبل‬
‫صفحه بعد‬
A Star-Net Query Model
Each radial line of the star net is a dimension; each circle (abstraction level) on a line is called a footprint.
Customer Orders: CUSTOMER, ORDER, CONTRACTS
Shipping Method: AIR-EXPRESS, TRUCK
Time: DAILY, QTRLY, ANNUALLY
Product: PRODUCT ITEM, PRODUCT GROUP, PRODUCT LINE
Location: CITY, COUNTRY, REGION
Organization: SALES PERSON, DISTRICT, DIVISION
Promotion (levels not legible in the source figure)
168
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehousing and OLAP Technology:
An Overview
What is a data warehouse? •
A multi-dimensional data model •
Data warehouse architecture •
Data warehouse implementation •
From data warehousing to data mining •
169
‫صفحه قبل‬
‫صفحه بعد‬
Design of Data Warehouse: A Business Analysis
Framework
Four views regarding the design of a data warehouse •
Top-down view •
allows selection of the relevant information necessary for the data •
warehouse
Data source view •
exposes the information being captured, stored, and managed by •
operational systems
Data warehouse view •
consists of fact tables and dimension tables •
Business query view •
sees the perspectives of data in the warehouse from the view of •
end-user
170
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse Design Process
Top-down, bottom-up approaches or a combination of both •
Top-down: Starts with overall design and planning (mature) •
Bottom-up: Starts with experiments and prototypes (rapid) •
From software engineering point of view •
Waterfall: structured and systematic analysis at each step before •
proceeding to the next
Spiral: rapid generation of increasingly functional systems, short turn •
around time, quick turn around
Typical data warehouse design process •
Choose a business process to model, e.g., orders, invoices, etc. •
Choose the grain (atomic level of data) of the business process •
Choose the dimensions that will apply to each fact table record •
Choose the measure that will populate each fact table record •
171
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse: A Multi-Tiered Architecture
Data Sources: operational DBs and other sources, feeding Extract / Transform / Load / Refresh
Data Storage: the Data Warehouse and Data Marts, together with Metadata and a Monitor & Integrator
OLAP Engine: OLAP Server(s) serving the front end
Front-End Tools: Analysis, Query, Reports, Data mining
172
Three Data Warehouse Models
Enterprise warehouse •
collects all of the information about subjects spanning the •
entire organization
Data Mart •
a subset of corporate-wide data that is of value to specific •
groups of users. Its scope is confined to specific, selected
groups, such as marketing data mart
Independent vs. dependent (directly from warehouse) data mart •
Virtual warehouse •
A set of views over operational databases •
Only some of the possible summary views may be •
materialized
173
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse Development: A Recommended
Approach
Define a high-level corporate data model first; then, with repeated model refinement,
build Data Marts (possibly distributed) and an Enterprise Data Warehouse in parallel,
and combine them into a Multi-Tier Data Warehouse.
174
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse Back-End Tools and Utilities
Data extraction •
get data from multiple, heterogeneous, and external sources •
Data cleaning •
detect errors in the data and rectify them when possible •
Data transformation •
convert data from legacy or host format to warehouse format •
Load •
sort, summarize, consolidate, compute views, check integrity, •
and build indices and partitions
Refresh •
propagate the updates from the data sources to the •
warehouse
175
‫صفحه قبل‬
‫صفحه بعد‬
Metadata Repository
Meta data is the data defining warehouse objects. It stores: •
Description of the structure of the data warehouse •
schema, view, dimensions, hierarchies, derived data defn, data mart locations •
and contents
Operational meta-data •
data lineage (history of migrated data and transformation path), currency of •
data (active, archived, or purged), monitoring information (warehouse usage
statistics, error reports, audit trails)
The algorithms used for summarization •
The mapping from operational environment to the data warehouse •
Data related to system performance •
warehouse schema, view and derived data definitions •
Business data •
business terms and definitions, ownership of data, charging policies •
176
‫صفحه قبل‬
‫صفحه بعد‬
OLAP Server Architectures
Relational OLAP (ROLAP) •
Use relational or extended-relational DBMS to store and manage •
warehouse data and OLAP middle ware
Include optimization of DBMS backend, implementation of aggregation •
navigation logic, and additional tools and services
Greater scalability •
Multidimensional OLAP (MOLAP) •
Sparse array-based multidimensional storage engine •
Fast indexing to pre-computed summarized data •
Hybrid OLAP (HOLAP) (e.g., Microsoft SQLServer) •
Flexibility, e.g., low level: relational, high-level: array •
Specialized SQL servers (e.g., Redbricks) •
Specialized support for SQL queries over star/snowflake schemas •
177
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehousing and OLAP Technology: An
Overview
What is a data warehouse? •
A multi-dimensional data model •
Data warehouse architecture •
Data warehouse implementation •
From data warehousing to data mining •
178
‫صفحه قبل‬
‫صفحه بعد‬
Efficient Data Cube Computation
Data cube can be viewed as a lattice of cuboids •
The bottom-most cuboid is the base cuboid •
The top-most cuboid (apex) contains only one cell •
How many cuboids are there in an n-dimensional cube where dimension i has L_i levels? •

T = \prod_{i=1}^{n} (L_i + 1)

Materialization of data cube •
Materialize every cuboid (full materialization), none (no •
materialization), or some (partial materialization)
Selection of which cuboids to materialize •
Based on size, sharing, access frequency, etc. •
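A quick numeric check of the formula, assuming a hypothetical 10-dimensional cube in which every dimension has 4 hierarchy levels (so each factor is L_i + 1 = 5, counting the virtual top level "all"):

from math import prod

levels = [4] * 10                   # 10 dimensions, 4 levels each (assumed)
T = prod(L + 1 for L in levels)     # the +1 accounts for the virtual level "all"
print(T)                            # 5**10 = 9,765,625 cuboids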
179
‫صفحه قبل‬
‫صفحه بعد‬
Cube Operation
Cube definition and computation in DMQL •
define cube sales[item, city, year]: sum(sales_in_dollars)
compute cube sales
Transform it into a SQL-like language (with a new operator cube by, •
introduced by Gray et al.’96)
SELECT item, city, year, SUM (amount)
FROM SALES
CUBE BY item, city, year
This computes the full lattice of group-bys over {city, item, year}: •
(city, item, year), (city, item), (city, year), (item, year), (city), (item), (year), ()
Similarly, for dimensions (date, product, customer) the following group-bys are needed: •
(date, product, customer), (date, product), (date, customer), (product, customer),
(date), (product), (customer), ()
180
‫صفحه قبل‬
‫صفحه بعد‬
Iceberg Cube
Computing only the cuboid cells whose count or •
other aggregates satisfy a condition such as
HAVING COUNT(*) >= minsup
Motivation •
Only a small portion of cube cells may be “above the water” •
in a sparse cube
Only calculate “interesting” cells: data above a certain •
threshold
Avoid explosive growth of the cube •
Suppose 100 dimensions, only 1 base cell. How many aggregate cells •
if count >= 1? What about count >= 2?
181
‫صفحه قبل‬
‫صفحه بعد‬
Indexing OLAP Data: Bitmap Index
Index on a particular column •
Each value in the column has a bit vector: bit-op is fast •
The length of the bit vector: # of records in the base table •
The i-th bit is set if the i-th row of the base table has the value for the •
indexed column
not suitable for high cardinality domains •
Base table:

| Cust | Region  | Type   |
|------|---------|--------|
| C1   | Asia    | Retail |
| C2   | Europe  | Dealer |
| C3   | Asia    | Dealer |
| C4   | America | Retail |
| C5   | Europe  | Dealer |

Index on Region:

| RecID | Asia | Europe | America |
|-------|------|--------|---------|
| 1     | 1    | 0      | 0       |
| 2     | 0    | 1      | 0       |
| 3     | 1    | 0      | 0       |
| 4     | 0    | 0      | 1       |
| 5     | 0    | 1      | 0       |

Index on Type:

| RecID | Retail | Dealer |
|-------|--------|--------|
| 1     | 1      | 0      |
| 2     | 0      | 1      |
| 3     | 0      | 1      |
| 4     | 1      | 0      |
| 5     | 0      | 1      |

182
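A minimal sketch of how such bitmaps are built and combined: a query like Region = 'Asia' AND Type = 'Retail' becomes a bitwise AND of two bit vectors. The table contents match the example above; the helper name is illustrative.

def build_bitmap_index(column):
    # Map each distinct value to a bit vector over the base-table rows
    index = {}
    for row_id, value in enumerate(column):
        index.setdefault(value, [0] * len(column))[row_id] = 1
    return index

region = ["Asia", "Europe", "Asia", "America", "Europe"]
rtype  = ["Retail", "Dealer", "Dealer", "Retail", "Dealer"]

region_idx = build_bitmap_index(region)
type_idx = build_bitmap_index(rtype)

# Region = Asia AND Type = Retail -> fast bitwise AND of the two bit vectors
answer = [a & b for a, b in zip(region_idx["Asia"], type_idx["Retail"])]
print(answer)   # [1, 0, 0, 0, 0] -> only RecID 1 (customer C1) qualifies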
‫صفحه قبل‬
‫صفحه بعد‬
Indexing OLAP Data: Join Indices
Join index: JI(R-id, S-id) where R (R-id, …) ⋈ S (S-id, …) •
Traditional indices map the values to a list of record ids •
It materializes relational join in JI file and speeds •
up relational join
In data warehouses, join index relates the values of the •
dimensions of a star schema to rows in the fact table.
E.g. fact table: Sales and two dimensions city and •
product
A join index on city maintains for each distinct •
city a list of R-IDs of the tuples recording the
Sales in the city
Join indices can span multiple dimensions •
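A rough dictionary-based sketch of a join index on city: each distinct city value of the location dimension maps to the R-IDs of the Sales fact-table tuples recording sales in that city. All tuples and names are hypothetical.

# Fact-table rows as (r_id, city, dollars_sold) -- hypothetical data
sales_facts = [
    (1, "Toronto", 120),
    (2, "Chicago", 300),
    (3, "Toronto", 80),
    (4, "Vancouver", 50),
]

# Join index on city: each distinct city maps to the list of fact-table R-IDs
join_index = {}
for r_id, city, _ in sales_facts:
    join_index.setdefault(city, []).append(r_id)

print(join_index["Toronto"])   # [1, 3] -> fact rows that join with city = Toronto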
183
‫صفحه قبل‬
‫صفحه بعد‬
Efficient Processing OLAP Queries
Determine which operations should be performed on the available cuboids •
Transform drill, roll, etc. into corresponding SQL and/or OLAP operations, e.g., dice •
= selection + projection
Determine which materialized cuboid(s) should be selected for OLAP op. •
Let the query-to-be-processed be on {brand, province_or_state} with the condition •
“year = 2004”, and there are 4 materialized cuboids available:
1) {year, item_name, city}
2) {year, brand, country}
3) {year, brand, province_or_state}
4) {item_name, province_or_state} where year = 2004
Which should be selected to process the query?
Explore indexing structures and compressed vs. dense array structs in MOLAP •
184
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehousing and OLAP Technology:
An Overview
What is a data warehouse? •
A multi-dimensional data model •
Data warehouse architecture •
Data warehouse implementation •
From data warehousing to data mining •
185
‫صفحه قبل‬
‫صفحه بعد‬
Data Warehouse Usage
Three kinds of data warehouse applications •
Information processing •
supports querying, basic statistical analysis, and reporting using •
crosstabs, tables, charts and graphs
Analytical processing •
multidimensional analysis of data warehouse data •
supports basic OLAP operations, slice-dice, drilling, pivoting •
Data mining •
knowledge discovery from hidden patterns •
supports associations, constructing analytical models, performing •
classification and prediction, and presenting the mining results
using visualization tools
186
‫صفحه قبل‬
‫صفحه بعد‬
From On-Line Analytical Processing (OLAP) to On-Line Analytical Mining (OLAM)
Why online analytical mining? •
High quality of data in data warehouses •
DW contains integrated, consistent, cleaned data •
Available information processing structure surrounding data •
warehouses
ODBC, OLEDB, Web accessing, service facilities, reporting and OLAP •
tools
OLAP-based exploratory data analysis •
Mining with drilling, dicing, pivoting, etc. •
On-line selection of data mining functions •
Integration and swapping of multiple mining functions, algorithms, •
and tasks
187
‫صفحه قبل‬
‫صفحه بعد‬
An OLAM System Architecture
Layer 4 (User Interface): mining query in, mining result out, through a User GUI API
Layer 3 (OLAP/OLAM): the OLAM Engine and the OLAP Engine, accessed through a Data Cube API
Layer 2 (MDDB): the multidimensional database (MDDB) and its Meta Data
Layer 1 (Data Repository): Databases and the Data Warehouse, populated via filtering and integration (data cleaning, data integration) through a Database API
188
‫صفحه قبل‬
‫صفحه بعد‬
‫واژگان کليدي مهارت‬
data warehouse
data warehouse architecture
data warehouse implementation
on-line analytical processing (OLAP)
OLAP data indexing
OLAP query processing
on-line-analytical mining
data cube
roll-up
drill-down
slicing and dicing
‫صفحه قبل‬
‫صفحه بعد‬
‫آزمون مهارت‬
1. Briefly compare the concepts of data cleaning, data
transformation, and refresh.
Answer:
Data cleaning is the process of detecting errors in the data and •
rectifying them when possible.
Data transformation is the process of converting the data from •
heterogeneous sources to a unified data warehouse format or
semantics.
Refresh is the function propagating the updates from the data •
sources to the warehouse.
‫صفحه قبل‬
‫صفحه بعد‬
190
‫آزمون مهارت‬
2. Regarding the computation of measures in a data cube,
enumerate three categories of measures, based on the kind of
aggregate functions used in computing a data cube.
Answer:
The three categories of measures are distributive, algebraic, and •
holistic.
‫صفحه قبل‬
‫صفحه بعد‬
191
‫آزمون مهارت‬
3. Suppose that a data warehouse contains 20 dimensions, each with
about 5 levels of granularity. Users are mainly interested in 4
particular dimensions, each having 3 frequently accessed levels
for rolling up and drilling down. How would you design a data
cube structure to efficiently support this preference?
Answer:
An efficient data cube structure to support this preference would be •
to use partial materialization, or selected computation of cuboids.
By computing only the proper subset of the whole set of possible
cuboids, the total amount of storage space required would be
minimized while maintaining a fast response time and avoiding
redundant computation.
‫صفحه قبل‬
‫صفحه بعد‬
192
‫آزمون مهارت‬
4. What are the differences between the three main types of data
warehouse usage: information processing, analytical processing,
and data mining?
Answer:
Information processing involves using queries to find and report •
useful information using crosstabs, tables, charts, or graphs.
Analytical processing uses basic OLAP operations such as slice-and-dice, drill-down, roll-up, and pivoting on historical data in order to
provide multidimensional analysis of data warehouse data. Data
mining uses knowledge discovery to find hidden patterns and
associations, constructing analytical models, performing
classification and prediction, and presenting the mining results
using visualization tools.
‫صفحه قبل‬
‫صفحه بعد‬
193