DATA MINING
Introduction
[email protected]

Why Mine Data? Commercial Viewpoint
• Lots of data is being collected and warehoused
  • Web data, e-commerce
  • purchases at department/grocery stores
  • bank/credit card transactions
• Computers have become cheaper and more powerful
• Competitive pressure is strong
  • Provide better, customized services for an edge (e.g. in Customer Relationship Management)

Why Mine Data? Scientific Viewpoint
• Data is collected and stored at enormous speeds (GB/hour)
  • remote sensors on a satellite
  • telescopes scanning the skies
  • microarrays generating gene expression data
  • scientific simulations generating terabytes of data
• Traditional techniques are infeasible for such raw data
• Data mining may help scientists
  • in classifying and segmenting data
  • in hypothesis formation

Mining Large Data Sets - Motivation
• There is often information "hidden" in the data that is not readily evident
• Human analysts may take weeks to discover useful information
• Much of the data is never analyzed at all
(Chart: "The Data Gap" - total new disk capacity (TB) since 1995 grows far faster than the number of analysts, 1995-1999. From: R. Grossman, C. Kamath, V. Kumar, "Data Mining for Scientific and Engineering Applications")

What is Data Mining?
• Many definitions
  • Non-trivial extraction of implicit, previously unknown and potentially useful information from data
  • Exploration and analysis, by automatic or semi-automatic means, of large quantities of data in order to discover meaningful patterns

What is (not) Data Mining?
• What is not data mining?
  • Looking up a phone number in a phone directory
  • Querying a Web search engine for information about "Amazon"
• What is data mining?
  • Discovering that certain names are more prevalent in certain US locations (O'Brien, O'Rourke, O'Reilly… in the Boston area)
  • Grouping together similar documents returned by a search engine according to their context (e.g. Amazon rainforest, Amazon.com)

Query Examples
• Database
  • Find all credit applicants with the last name Smith.
  • Identify customers who have purchased more than $10,000 in the last month.
  • Find all customers who have purchased milk.
• Data mining
  • Find all credit applicants who are poor credit risks. (classification)
  • Identify customers with similar buying habits. (clustering)
  • Find all items which are frequently purchased with milk. (association rules)

Applications
In finance, telecom, insurance and retail:
• loan/credit card approval
• market segmentation
• fraud detection
• better marketing
• trend analysis
• market basket analysis
• customer churn
• Web site design and promotion

Loan/Credit Card Approvals
In a modern society a bank rarely knows its customers personally; often the only knowledge it has of them is the information stored in its computers. Credit agencies and banks collect a great deal of behavioural data about customers from many sources, and this information is used to predict the chances of a customer paying back a loan.

Market Segmentation
• Large amounts of data about customers contain valuable information
• The market may be segmented into many subgroups according to variables that are good discriminators
• It is not always easy to find variables that will help in market segmentation

Fraud Detection
• Very challenging, since it is difficult to define the characteristics of fraud. Detection is often based on spotting changes from the norm.
• In statistics it is common to throw out outliers, but in data mining it may be useful to identify them, since they could be due either to errors or perhaps to fraud.
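To make the "changes from the norm" idea concrete, here is a minimal sketch that flags outlying transactions with z-scores. The amounts and the two-standard-deviation threshold are made-up illustrations, not part of the original slides; real fraud detection systems use far richer models.

```python
import numpy as np

# Hypothetical daily transaction amounts for one account (illustrative data only).
amounts = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 49.0, 940.0, 44.0, 58.0])

mean, std = amounts.mean(), amounts.std()
z_scores = (amounts - mean) / std

# Flag transactions more than 2 standard deviations away from the account's norm.
suspicious = amounts[np.abs(z_scores) > 2]
print(suspicious)   # the 940.0 transaction stands out
```

Note that, as the slide says, such outliers may equally well be data entry errors, so in practice they are flagged for investigation rather than labelled as fraud outright.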
Better Marketing
When customers buy new products, other products may be suggested to them when they are ready. As noted earlier, in mail order marketing for example, one wants to know:
• will the customer respond?
• will the customer buy, and how much?
• will the customer return the purchase?
• will the customer pay for the purchase?

Better Marketing (continued)
It has been reported that some mail order marketing companies hold more than 1000 variable values on each customer. The aim is to "lift" the response rate.

Trend Analysis
In a large company, not all trends are visible to management. It is then useful to use data mining software to identify trends. Trends may be long-term, cyclic or seasonal.

Market Basket Analysis
• Aims to find what customers buy and what they buy together
• This may be useful in designing store layouts or in deciding which items to put on sale
• Basket analysis can also be used for applications other than just analysing what items customers buy together

Customer Churn
• In businesses like telecommunications, companies try very hard to keep their good customers and perhaps to persuade good customers of their competitors to switch to them.
• In such an environment, businesses want to find which customers are good, why customers switch and what makes customers loyal.
• It is cheaper to develop a retention plan and retain an old customer than to bring in a new one.

Customer Churn (continued)
• The aim is to get to know the customers better so that you can keep them longer.
• Given the competitive nature of these businesses, customers will move if not looked after.
• Some businesses may also wish to shed customers that cost more than they are worth, e.g. credit card holders who never use the card, or bank customers with very small balances.

Web Site Design
• A Web site is effective only if visitors can easily find what they are looking for.
• Data mining can help discover visitors' affinity to pages, and the site layout may be modified based on this information.

Data Mining Process
Successful data mining involves carefully determining the aims and selecting appropriate data. The following steps should normally be followed:
1. Requirements analysis
2. Data selection and collection
3. Cleaning and preparing data
4. Data mining exploration and validation
5. Implementing, evaluating and monitoring
6. Results visualisation

Requirements Analysis
The enterprise decision makers need to formulate the goals that the data mining process is expected to achieve. The business problem must be clearly defined. One cannot use data mining without a good idea of what kind of outcomes the enterprise is looking for. If the objectives have been clearly defined, it is easier to evaluate the results of the project.

Data Selection and Collection
Find the best source databases for the data that is required. If the enterprise has implemented a data warehouse, most of the data could be available there. Otherwise the source OLTP systems need to be identified and the required information extracted and stored in some temporary system. In some cases, only a sample of the available data may be required.
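As a rough sketch of that last point, a random sample can be drawn from an extracted table before mining begins. The file names and the 10% fraction below are hypothetical, chosen only for illustration.

```python
import pandas as pd

# Hypothetical extract from a source OLTP system (file and column names are made up).
transactions = pd.read_csv("sales_extract.csv")

# Work on a 10% random sample rather than the full table.
# random_state makes the sample reproducible across runs.
sample = transactions.sample(frac=0.10, random_state=1)
sample.to_csv("sales_sample.csv", index=False)
```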
Cleaning and Preparing Data
This may not be an onerous task if a data warehouse containing the required data exists, since most of this work will already have been done when the data was loaded into the warehouse. Otherwise the task can be very resource intensive; perhaps more than 50% of the effort in a data mining project is spent on this step. Essentially, a data store that integrates data from a number of databases may need to be created. When integrating data, one often encounters problems such as identifying data, dealing with missing data, data conflicts and ambiguity. An ETL (extraction, transformation and loading) tool may be used to overcome these problems.

Exploration and Validation
Assuming that the user has access to one or more data mining tools, a data mining model may be constructed based on the enterprise's needs. It may be possible to take a sample of data and apply a number of relevant techniques. For each technique the results should be evaluated and their significance interpreted. This is likely to be an iterative process which should lead to the selection of one or more techniques that are suitable for further exploration, testing and validation.

Implementing, Evaluating and Monitoring
Once a model has been selected and validated, it can be implemented for use by the decision makers. This may involve software development for generating reports, or for results visualisation and explanation for managers. If more than one technique is available for the given data mining task, it is necessary to evaluate the results and choose the best; this may involve checking the accuracy and effectiveness of each technique.

Implementing, Evaluating and Monitoring (continued)
Regular monitoring of the performance of the techniques that have been implemented is required. Every enterprise evolves with time, and so must the data mining system. Monitoring may from time to time lead to the refinement of the tools and techniques that have been implemented.

Results Visualisation
Explaining the results of data mining to the decision makers is an important step. Most data mining software includes data visualisation modules, which should be used to communicate data mining results to managers. Clever data visualisation tools are being developed to display results that deal with more than two dimensions. The visualisation tools available should be tried, and used if found effective for the given problem.

Data Mining Process - Another Approach
The last few slides presented one approach. Another approach that also includes six steps has been proposed by CRISP-DM (Cross-Industry Standard Process for Data Mining), developed by an industry consortium. The six steps are:
1. Business understanding
2. Data understanding
3. Data preparation
4. Modelling
5. Evaluation
6. Deployment

Origins of Data Mining
• Draws ideas from machine learning/AI, pattern recognition, statistics, and database systems
• Traditional techniques may be unsuitable due to
  • enormity of data
  • high dimensionality of data
  • heterogeneous, distributed nature of data
(Diagram: data mining shown at the intersection of statistics/AI, machine learning/pattern recognition, and database systems.)

Data Mining Tasks
• Prediction methods: use some variables to predict unknown or future values of other variables.
• Description methods: find human-interpretable patterns that describe the data.
From [Fayyad, et al.], Advances in Knowledge Discovery and Data Mining, 1996
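The two flavours of task can be contrasted on the same toy data. The customer attributes, spend values and choice of linear regression and k-means below are purely illustrative assumptions using scikit-learn, not methods prescribed by these slides.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Hypothetical customer data: [age, yearly_visits]; spend is the value to predict.
X = np.array([[25, 4], [32, 9], [41, 2], [38, 12], [55, 3], [47, 10]])
spend = np.array([250.0, 720.0, 180.0, 910.0, 200.0, 830.0])

# Prediction method: model spend as a function of the other variables.
reg = LinearRegression().fit(X, spend)
print(reg.predict([[30, 8]]))            # estimated spend for a new customer

# Description method: find human-interpretable groups in the same data.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                            # e.g. frequent vs. infrequent visitors
```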
Data Mining Models and Tasks
• Classification [predictive]
• Clustering [descriptive]
• Association rule discovery [descriptive]
• Sequential pattern discovery [descriptive]
• Regression [predictive]
• Deviation detection [predictive]

Association Analysis
• Association analysis involves the discovery of relationships or correlations among a set of items.
• Example: discovering that personal loans are repaid with 80% confidence when the person owns his or her home.
• The classical example is the one where a store discovered that people buying nappies also tend to buy beer.

Associations
Association rules are often written as X → Y, meaning that whenever X appears, Y also tends to appear. X and Y may be collections of attributes. A supermarket like Woolworths may have several thousand items and many millions of transactions a week (i.e. gigabytes of data each week). Note that the quantities of items bought are ignored.

Classification and Prediction
A set of training objects, each with a number of attribute values, is given to the classifier. The classifier formulates rules for each class in the training set so that the rules may be used to classify new objects. (Some techniques do not require training data.) Classification may be used for predicting the class label of data objects. A number of techniques are available, including decision trees and neural networks.

Cluster Analysis
Similar to classification in that objects are grouped, but here the aim is to build clusters such that each cluster is similar within itself and dissimilar to the others. Clustering does not rely on class-labelled data objects. It is based on the principle of maximising the intra-cluster similarity and minimising the inter-cluster similarity.

Data Warehousing and OLAP
Data warehousing is a process by which an enterprise collects data from across the whole enterprise to build a single version of the truth. This information is useful for decision makers and may also be used for data mining. A data warehouse can be of real help in data mining, since data cleaning and other problems of collecting data will already have been overcome. OLAP (Online Analytical Processing) tools are decision support tools that are often built on top of a data warehouse or another database. OLAP goes further than traditional query and report tools in that the decision maker already has a hypothesis which he or she is trying to test.

Data Warehousing and OLAP (continued)
Data mining is somewhat different from OLAP, since in data mining a hypothesis is not being tested. Instead, data mining is used to uncover novel patterns in the data.

Before Data Mining
To define a data mining task, one needs to answer the following:
• What data set do I want to mine?
• What kind of knowledge do I want to mine?
• What background knowledge could be useful?
• How do I measure whether the results are interesting?
• How do I display what I have discovered?

Task-relevant Data
The whole database may not be required, since we may only want to study something specific, e.g. trends in postgraduate students:
• the countries they come from
• the degree program they are doing
• their age
• the time they take to finish the degree
• the scholarships they have been awarded
A database subset may need to be built before data mining can be done.

Task-relevant Data (continued)
Data collection is non-trivial. OLTP data is not useful directly since it is changing all the time. In some cases, data from more than one database may be needed.
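A minimal sketch of building such a task-relevant subset from more than one source is shown below. All file, table and column names (and the postgraduate programs chosen) are hypothetical, invented only to mirror the student example above.

```python
import pandas as pd

# Hypothetical extracts from two source systems; names are made up for illustration.
enrolments = pd.read_csv("enrolments.csv")   # student_id, program, start_year, end_year
students   = pd.read_csv("students.csv")     # student_id, country, birth_year, scholarship

# Keep only the task-relevant rows (postgraduate programs) and join the two sources.
postgrad = enrolments[enrolments["program"].isin(["MSc", "PhD"])]
subset = postgrad.merge(students, on="student_id")

# Derive the attribute of interest and keep only the task-relevant columns.
subset["years_to_finish"] = subset["end_year"] - subset["start_year"]
subset = subset[["country", "program", "scholarship", "years_to_finish"]]
subset.to_csv("postgrad_subset.csv", index=False)
```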
Preprocessing
• A data mining process would normally involve preprocessing
• Often data mining applications use data warehousing
• One approach is to pre-mine the data, warehouse it, then carry out data mining
• The process is usually iterative and can take years of effort for a large project

Data Preprocessing
• Preprocessing is very important, although it is often considered too mundane to be taken seriously
• Preprocessing may also be needed after the data warehouse phase
• Data reduction may be needed to transform very high dimensional data into lower dimensional data

Data Preprocessing (continued)
• Feature selection
• Use of sampling
• Normalization
• Smoothing
• Dealing with duplicates and missing data
• Dealing with time-dependent data

Measuring Interest
A data mining process may generate many patterns. We cannot look at all of them, so we need some way to separate uninteresting results from interesting ones. This may be based on the simplicity of a pattern, rule length, or level of confidence.

Visualization
We must be able to display results so that they are easy to understand. The display may be a graph, pie chart, table, etc. Some displays are better than others for a given kind of knowledge.

Guidelines for Successful Data Mining
• The data must be available
• The data must be relevant, adequate and clean
• There must be a well-defined problem
• The problem should not be solvable by means of ordinary query or OLAP tools
• The results must be actionable

Data Mining Software
As noted earlier, a large variety of data mining software is now available. Some of the more widely used software:
• IBM - Intelligent Miner and more
• SAS - Enterprise Miner
• Silicon Graphics - MineSet
• Oracle/Thinking Machines - Darwin
• Angoss - KnowledgeSEEKER

Classification: Definition
• Given a collection of records (the training set)
  • Each record contains a set of attributes; one of the attributes is the class.
• Find a model for the class attribute as a function of the values of the other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
• A test set is used to determine the accuracy of the model. Usually the given data set is divided into training and test sets: the training set is used to build the model and the test set is used to validate it.

Classification Example
Training set:
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Test set (class unknown):
Refund  Marital Status  Taxable Income  Cheat
No      Single          75K             ?
Yes     Married         50K             ?
No      Married         150K            ?
Yes     Divorced        90K             ?
No      Single          40K             ?
No      Married         80K             ?

A classifier is learned from the training set and the resulting model is then applied to the test set.

Classification: Application 1
• Direct marketing
  • Goal: reduce the cost of mailing by targeting a set of consumers likely to buy a new cell-phone product.
  • Approach:
    • Use the data for a similar product introduced before.
    • We know which customers decided to buy and which decided otherwise; this {buy, don't buy} decision forms the class attribute.
    • Collect various demographic, lifestyle and company-interaction related information about all such customers (type of business, where they stay, how much they earn, etc.).
    • Use this information as input attributes to learn a classifier model.
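To show what "learning a classifier model" can look like in practice, here is a minimal sketch that trains a decision tree on the training records from the example above and classifies one record from the test set. The use of scikit-learn, one-hot encoding and a decision tree is an illustrative choice, not the method the slides prescribe.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Training records from the slide; income given in thousands.
train = pd.DataFrame({
    "Refund":  ["Yes", "No", "No", "Yes", "No", "No", "Yes", "No", "No", "No"],
    "Marital": ["Single", "Married", "Single", "Married", "Divorced",
                "Married", "Divorced", "Single", "Married", "Single"],
    "Income":  [125, 100, 70, 120, 95, 60, 220, 85, 75, 90],
    "Cheat":   ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"],
})

# One-hot encode the categorical attributes so the tree can use them.
X = pd.get_dummies(train[["Refund", "Marital", "Income"]], dtype=int)
y = train["Cheat"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Classify one unseen record from the test set: Refund=No, Single, 75K.
new = pd.get_dummies(pd.DataFrame(
    {"Refund": ["No"], "Marital": ["Single"], "Income": [75]}), dtype=int)
new = new.reindex(columns=X.columns, fill_value=0)
print(model.predict(new))
```

In a real project the accuracy of the model would then be estimated on a labelled hold-out set, as the definition slide describes.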
From [Berry & Linoff], Data Mining Techniques, 1997

Classification: Application 2
• Fraud detection
  • Goal: predict fraudulent cases in credit card transactions.
  • Approach:
    • Use credit card transactions and the information on the account holder as attributes (when does a customer buy, what does he buy, how often does he pay on time, etc.).
    • Label past transactions as fraudulent or fair; this forms the class attribute.
    • Learn a model for the class of the transactions.
    • Use this model to detect fraud by observing credit card transactions on an account.

Classification: Application 3
• Customer attrition/churn
  • Goal: predict whether a customer is likely to be lost to a competitor.
  • Approach:
    • Use detailed records of transactions with each of the past and present customers to find attributes (how often the customer calls, where he calls, what time of day he calls most, his financial status, marital status, etc.).
    • Label the customers as loyal or disloyal.
    • Find a model for loyalty.
From [Berry & Linoff], Data Mining Techniques, 1997

Classification: Application 4
• Sky survey cataloging
  • Goal: predict the class (star or galaxy) of sky objects, especially visually faint ones, based on telescopic survey images (from Palomar Observatory): 3000 images with 23,040 x 23,040 pixels per image.
  • Approach:
    • Segment the image.
    • Measure image attributes (features) - 40 of them per object.
    • Model the class based on these features.
  • Success story: found 16 new high red-shift quasars, some of the farthest objects, which are difficult to find!
From [Fayyad, et al.], Advances in Knowledge Discovery and Data Mining, 1996

Classifying Galaxies (courtesy: http://aps.umn.edu)
• Class: stages of formation (early, intermediate, late)
• Attributes: image features, characteristics of light waves received, etc.
• Data size: 72 million stars, 20 million galaxies; object catalog: 9 GB; image database: 150 GB

Clustering Definition
• Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that
  • data points in one cluster are more similar to one another
  • data points in separate clusters are less similar to one another
• Similarity measures:
  • Euclidean distance if attributes are continuous
  • other problem-specific measures

Illustrating Clustering
(Figure: Euclidean distance based clustering in 3-D space; intracluster distances are minimized, intercluster distances are maximized.)

Clustering: Application 1
• Market segmentation
  • Goal: subdivide a market into distinct subsets of customers where any subset may conceivably be selected as a market target to be reached with a distinct marketing mix.
  • Approach:
    • Collect different attributes of customers based on their geographical and lifestyle related information.
    • Find clusters of similar customers.
    • Measure the clustering quality by observing the buying patterns of customers in the same cluster vs. those from different clusters.

Clustering: Application 2
• Document clustering
  • Goal: find groups of documents that are similar to each other based on the important terms appearing in them.
  • Approach: identify frequently occurring terms in each document, form a similarity measure based on the frequencies of different terms, and use it to cluster.
  • Gain: information retrieval can use the clusters to relate a new document or search term to clustered documents.
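A minimal document-clustering sketch along these lines is given below. The four made-up documents, the TF-IDF term weighting and k-means are illustrative assumptions (the real example that follows used 3204 newspaper articles and a different similarity measure).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Tiny made-up document collection for illustration.
docs = [
    "stocks fell as interest rates rose",
    "the central bank raised interest rates again",
    "the home team won the championship game",
    "injured striker misses the final game of the season",
]

# Represent each document by the terms it contains, then cluster on that representation.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g. [0, 0, 1, 1]: financial articles vs. sports articles
```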
Illustrating Document Clustering
• Clustering points: 3204 articles from the Los Angeles Times.
• Similarity measure: how many words the documents have in common (after some word filtering).

Category        Total Articles   Correctly Placed
Financial       555              364
Foreign         341              260
National        273              36
Metro           943              746
Sports          738              573
Entertainment   354              278

Clustering of S&P 500 Stock Data
• Observe stock movements every day.
• Clustering points: stock-{UP/DOWN} events.
• Similarity measure: two points are more similar if the events described by them frequently happen together on the same day. (Association rules were used to quantify the similarity measure.)
• Discovered clusters and their industry groups:
  1. Technology1-DOWN: Applied-Matl-DOWN, Bay-Network-DOWN, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-DOWN, Tellabs-Inc-DOWN, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN
  2. Technology2-DOWN: Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN
  3. Financial-DOWN: Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN
  4. Oil-UP: Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP

Association Rule Discovery: Definition
• Given a set of records, each of which contains some number of items from a given collection, produce dependency rules which will predict the occurrence of an item based on the occurrences of other items.

TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk

Rules discovered:
{Milk} --> {Coke}
{Diaper, Milk} --> {Beer}

Association Rule Discovery: Application 1
• Marketing and sales promotion
  • Let the rule discovered be {Bagels, …} --> {Potato Chips}.
  • Potato chips as consequent: can be used to determine what should be done to boost their sales.
  • Bagels in the antecedent: can be used to see which products would be affected if the store discontinues selling bagels.
  • Bagels in the antecedent and potato chips in the consequent: can be used to see what products should be sold with bagels to promote the sale of potato chips.

Association Rule Discovery: Application 2
• Supermarket shelf management
  • Goal: identify items that are bought together by sufficiently many customers.
  • Approach: process the point-of-sale data collected with barcode scanners to find dependencies among items.
  • A classic rule: if a customer buys diapers and milk, then he is very likely to buy beer. So don't be surprised if you find six-packs stacked next to diapers!

Association Rule Discovery: Application 3
• Inventory management
  • Goal: a consumer appliance repair company wants to anticipate the nature of repairs on its consumer products and keep its service vehicles equipped with the right parts, to reduce the number of visits to consumer households.
  • Approach: process the data on tools and parts required in previous repairs at different consumer locations and discover the co-occurrence patterns.
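The two rules in the definition slide can be checked by hand with the usual support and confidence measures. The sketch below is an illustrative calculation on the five transactions shown above, not a full association rule miner (an algorithm such as Apriori would enumerate candidate rules automatically).

```python
# The five transactions from the slide.
transactions = [
    {"Bread", "Coke", "Milk"},
    {"Beer", "Bread"},
    {"Beer", "Coke", "Diaper", "Milk"},
    {"Beer", "Bread", "Diaper", "Milk"},
    {"Coke", "Diaper", "Milk"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """How often the consequent also appears in transactions containing the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

print(confidence({"Milk"}, {"Coke"}))            # 0.75  for {Milk} --> {Coke}
print(confidence({"Diaper", "Milk"}, {"Beer"}))  # ~0.67 for {Diaper, Milk} --> {Beer}
```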
Sequential Pattern Discovery: Definition
• Given a set of objects, each associated with its own timeline of events, find rules that predict strong sequential dependencies among different events.
• Rules are formed by first discovering patterns; event occurrences in the patterns are governed by timing constraints.
(Figure: an example pattern (A B) (C) (D E) with timing constraints xg, ng, ws and ms governing the gaps between event sets, the window size and the overall span.)

Sequential Pattern Discovery: Examples
• In telecommunications alarm logs:
  • (Inverter_Problem Excessive_Line_Current) (Rectifier_Alarm) --> (Fire_Alarm)
• In point-of-sale transaction sequences:
  • Computer bookstore: (Intro_To_Visual_C) (C++_Primer) --> (Perl_for_dummies, Tcl_Tk)
  • Athletic apparel store: (Shoes) (Racket, Racketball) --> (Sports_Jacket)

Regression
• Predict the value of a given continuous-valued variable based on the values of other variables, assuming a linear or nonlinear model of dependency.
• Greatly studied in statistics and in the neural network field.
• Examples:
  • predicting sales amounts of a new product based on advertising expenditure
  • predicting wind velocities as a function of temperature, humidity, air pressure, etc.
  • time series prediction of stock market indices

Deviation/Anomaly Detection
• Detect significant deviations from normal behavior.
• Applications:
  • credit card fraud detection
  • network intrusion detection (typical network traffic at the university level may reach over 100 million connections per day)

Challenges of Data Mining
• Scalability
• Dimensionality
• Complex and heterogeneous data
• Data quality
• Data ownership and distribution
• Privacy preservation
• Streaming data