Text Mining: Finding Nuggets in
Mountains of Textual Data
Jochen Dörre, Peter Gerstl, Roland Seiffert
Presented and modified by
Aydın Göze Polat
(Previously modified by Ebrahim Kobeissi and Michel Bruley)
Text in this color indicates my additions/modifications
Outline
 Motivation
 Methodology
 Feature Extraction
 Clustering and Categorizing
 Some Applications
 Comparison with Data Mining
 Conclusion & Exam Questions
Motivation
 A large portion of a company’s data is
unstructured or semi-structured → Improve
business intelligence
 Finding new associations within large
domains → Analyze exploratory data
 Letters
 Emails
 Phone transcripts
 Contracts
 Technical documents
 Patents
 Web pages
 Articles
Definition
 Data mining:
 Identification of a collection
 Preparation and feature selection
 Distribution analysis
Definition
 Text Mining:
 Discovery by computer of new, previously unknown information, by automatically extracting information from different written resources (unstructured & semi-structured data) → relevance, novelty, interestingness
 Feature extraction → reduce high dimensionality (>10000)
 Distribution analysis is more complex
 Linguistic, statistical, and machine learning techniques
Typical Applications
 Summarizing documents
 Discovering/monitoring relations among people,
places, organizations etc
 Customer profile analysis
 Trend analysis → monitoring public opinion
 Spam identification
 Public health early warning
 Event tracking, fraud detection
 Automatic labeling
Methodology
 Identification of a corpus (information retrieval, tokenization,
normalization, stemming, indexing, tf-idf etc. )
 Linguistic analysis (part of speech tagging, syntactic parsing etc.)
 Named entity recognition (feature extraction)
 Information Distillation → may depend on user-defined criteria
 Analysis of feature distribution
 Coreference (references to the same object)
 Relationship, fact, and event extraction
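The tf-idf weighting mentioned in the corpus-identification step can be sketched in a few lines. This is a minimal illustration in plain Python (the function name and toy corpus are mine, not from the paper):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute tf-idf weights for a list of tokenized documents.

    tf  = raw term count within a document
    idf = log(N / df), where df is the number of documents
          containing the term.
    Returns one {term: weight} dict per document.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = [["text", "mining", "finds", "nuggets"],
        ["data", "mining", "finds", "models"],
        ["text", "analysis"]]
w = tf_idf(docs)
# "mining" appears in 2 of 3 docs -> low weight; "nuggets" in 1 -> higher
```

Terms that occur in many documents are down-weighted, which is exactly why tf-idf helps with the dimensionality problem the previous slide mentions.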
Text mining pipeline
Unstructured Text (implicit knowledge) → Information Retrieval → Information Extraction → Semantic Metadata → Semantic Search / Data Mining → Knowledge Discovery → Structured Content (explicit knowledge)
Text mining process
 Text preprocessing
 Syntactic/semantic text analysis
 Features generation
 Bag of words
 Features selection
 Simple counting
 Statistics
 Text/data mining
 Classification (supervised learning)
 Clustering (unsupervised learning)
 Analyzing results
 Mapping/visualization
 Result interpretation
 Iterative and interactive process
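The "bag of words" and "simple counting" steps above can be sketched as follows (a toy illustration; the stopword list and `min_df` threshold are arbitrary choices of mine):

```python
from collections import Counter

def bag_of_words(text, stopwords=frozenset({"the", "a", "is", "of"})):
    """Turn raw text into a bag-of-words feature vector (term -> count)."""
    tokens = [t for t in text.lower().split() if t.isalpha()]
    return Counter(t for t in tokens if t not in stopwords)

def select_features(bags, min_df=2):
    """Feature selection by simple counting: keep terms that occur
    in at least min_df documents."""
    df = Counter()
    for bag in bags:
        df.update(bag.keys())
    return {t for t, c in df.items() if c >= min_df}

bags = [bag_of_words("the cat sat"), bag_of_words("the cat ran"),
        bag_of_words("a dog ran")]
features = select_features(bags)   # -> {"cat", "ran"}
```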
Text mining tasks
 Feature extraction tools: name extraction, term extraction, abbreviation extraction, relationship extraction
 Clustering tools: hierarchical clustering, binary relational clustering
 Text analysis tools: categorization, summarization
 Web searching tools: text search engine, NetQuestion Solution, Web Crawler
Back to the Paper: Two Text Mining Approaches
 Extraction
 Extraction of codified information from a single document
 Analysis
 Analysis of the features to detect patterns, trends, etc., over whole collections of documents
IBM Intelligent Miner for Text
 IBM introduced Intelligent Miner for Text in 1998
 SDK with: Feature extraction, clustering,
categorization, and more
 Traditional components (search engine, etc.)
 The rest of the paper describes the text mining methodology of Intelligent Miner.
Feature Extraction
 Recognize and classify “significant”
vocabulary items from the text (dimension
reduction)
 Categories of vocabulary
 Proper names – Mrs. Albright or Dheli[sic], India
 Multiword terms – joint venture, online document
 Abbreviations – CPU, CEO
 Relations – Jack Smith-age-42
 Other useful things: numerical forms of numbers, percentages, money, etc.
Canonical Form Examples
 Normalize numbers and money
 Four = 4, five-hundred dollar = $500
 Conversion of dates to a normal form
 Morphological variants
 Drive, drove, driven = drive
 Proper names and other forms
 Mr. Johnson, Bob Johnson, the author = Bob Johnson
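A minimal sketch of the canonicalization idea, using tiny hand-built lookup tables (the tables here are illustrative; a real system would rely on a morphological analyzer and a name-coreference component):

```python
# Hedged sketch: toy tables covering only the slide's examples.
NUMBER_WORDS = {"four": "4", "five-hundred": "500"}
LEMMAS = {"drove": "drive", "driven": "drive", "drives": "drive"}

def canonical(token):
    """Map a token to its canonical form: number words to digits,
    morphological variants to a base form, otherwise unchanged."""
    t = token.lower()
    if t in NUMBER_WORDS:
        return NUMBER_WORDS[t]
    return LEMMAS.get(t, t)

forms = [canonical(w) for w in ["Four", "drove", "driven", "drive"]]
# -> ["4", "drive", "drive", "drive"]
```

Collapsing variants to one canonical form is what lets the later distribution analysis count "drive", "drove", and "driven" as a single feature.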
Feature Extraction Approach
 Linguistically motivated heuristics
 Pattern matching
 Limited lexical information (part-of-speech)
 Avoid analyzing with too much depth
 Does not use much lexical information
 No in-depth syntactic or semantic analysis
Advantages to IBM’s approach
 Processing is very fast (helps when dealing
with huge amounts of data)
 Heuristics work reasonably well
 Generally applicable to any domain
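The approach above — shallow pattern matching instead of deep parsing — can be illustrated with a few regular expressions. These patterns are my own toy examples, not Intelligent Miner's actual rules, but the flavor is the same: fast, heuristic, no full syntactic analysis:

```python
import re

# Illustrative shallow patterns for three of the vocabulary categories.
PATTERNS = {
    "proper_name": re.compile(r"\b(?:Mrs?\.|Dr\.)\s+[A-Z][a-z]+"),
    "abbreviation": re.compile(r"\b[A-Z]{2,5}\b"),
    "money": re.compile(r"\$\d[\d,]*(?:\.\d+)?"),
}

def extract_features(text):
    """Run each shallow pattern over the text and collect matches."""
    return {kind: pat.findall(text) for kind, pat in PATTERNS.items()}

feats = extract_features("Mrs. Albright met the CEO about a $500 deal.")
# feats["proper_name"] == ["Mrs. Albright"], feats["abbreviation"] == ["CEO"]
```

A single linear pass with compiled patterns is why this style of extraction scales to huge corpora, at the cost of missing anything the patterns do not anticipate.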
Extra: Information extraction
 Related techniques: keyword ranking, link analysis, query log analysis, metadata extraction, intelligent match, duplicate elimination
 Extract domain-specific information from natural language text
 Needs a dictionary of extraction patterns (e.g., "traveled to <x>" or "presidents of <x>")
• Constructed by hand
• Automatically learned from hand-annotated training data
 Needs a semantic lexicon (dictionary of words with semantic category labels)
• Typically constructed by hand
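The example patterns above ("traveled to <x>", "presidents of <x>") can be rendered as regular expressions with the <x> slot as a capture group — a hand-built pattern dictionary in miniature (the role names are my own):

```python
import re

# The slide's example patterns as regexes; <x> becomes a capture group.
EXTRACTION_PATTERNS = {
    "destination": re.compile(r"traveled to ([A-Z][a-z]+)"),
    "country_led": re.compile(r"presidents? of ([A-Z][a-z]+)"),
}

def apply_patterns(text):
    """Match every extraction pattern against the text and return
    (role, filler) facts for each hit."""
    facts = []
    for role, pat in EXTRACTION_PATTERNS.items():
        for match in pat.findall(text):
            facts.append((role, match))
    return facts

facts = apply_patterns("The president of France traveled to Egypt.")
# -> [("destination", "Egypt"), ("country_led", "France")]
```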
Clustering
 Fully automatic process
 Documents are grouped according to
similarity of their feature vectors
 Each cluster is labeled by a listing of the
common terms/keywords
 Good for getting an overview of a document
collection
Example: Obama vs. McCain
Two Clustering Engines
 Hierarchical clustering
 Orders the clusters into a tree reflecting various levels of similarity
 Binary relational clustering
 Flat clustering
 Relationships of different strengths between clusters, reflecting similarity
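Hierarchical clustering can be sketched as single-linkage agglomeration: start with one cluster per document and repeatedly merge the closest pair. The sketch below uses plain numbers in place of feature vectors to stay short (the threshold and data are illustrative):

```python
def agglomerate(points, threshold):
    """Single-linkage agglomerative clustering: merge the two closest
    clusters until the closest pair is farther apart than `threshold`."""
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        # find the closest pair of clusters (single linkage)
        i, j = min(
            ((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
            key=lambda ij: min(abs(x - y)
                               for x in clusters[ij[0]]
                               for y in clusters[ij[1]]),
        )
        dist = min(abs(x - y) for x in clusters[i] for y in clusters[j])
        if dist > threshold:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

groups = agglomerate([1.0, 1.2, 5.0, 5.1, 9.0], threshold=1.0)
# -> [[1.0, 1.2], [5.0, 5.1], [9.0]]
```

Recording the sequence of merges (rather than just the final partition) yields the similarity tree the slide describes; stopping at a threshold, as here, yields a flat partition instead.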
Clustering Model
Categorization
 Examples: K-Nearest Neighbor, Naive Bayes
Classifier, Centroid Based Classifier (cosine similarity)
 Assigns documents to preexisting categories
 Classes of documents are defined by providing a set of
sample documents.
 Training phase produces “categorization schema”
 Documents can be assigned to more than one category
 If confidence is low, the document is set aside for human intervention
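A centroid-based classifier with cosine similarity — including the low-confidence hand-off to a human — might look like this (the threshold and sample data are illustrative choices of mine):

```python
import math
from collections import Counter

def centroid(docs):
    """Average the bag-of-words vectors of a category's sample documents."""
    total = Counter()
    for doc in docs:
        total.update(doc)
    return {t: c / len(docs) for t, c in total.items()}

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(doc, centroids, min_confidence=0.2):
    """Assign `doc` to the most similar centroid; return None
    (human review) when the best similarity is too low."""
    scores = {label: cosine(Counter(doc), c) for label, c in centroids.items()}
    label = max(scores, key=scores.get)
    return label if scores[label] >= min_confidence else None

centroids = {
    "billing": centroid([Counter({"invoice": 2, "charge": 1}),
                         Counter({"invoice": 1, "refund": 1})]),
    "shipping": centroid([Counter({"delivery": 2, "late": 1})]),
}
label = classify(["invoice", "charge"], centroids)   # -> "billing"
```

The `centroids` dict plays the role of the "categorization schema" produced by the training phase; returning `None` models the set-aside-for-human step.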
Categorization Model
Applications
 Customer Relationship Management application provided by IBM Intelligent Miner for Text, called "Customer Relationship Intelligence"
 "Help companies better understand what their customers want and what they think about the company itself"
Customer Intelligence Process
 Take as input a body of communications with customers
 Cluster the documents to identify issues
 Characterize the clusters to identify the conditions for problems
 Assign new messages to appropriate clusters
Customer Intelligence Usage
 Knowledge Discovery

Clustering used to create a structure that can be
interpreted
 Information Distillation
 Refinement and extension of clustering results
 Interpreting the results
 Tuning of the clustering process
 Selecting meaningful clusters
Comparison with Data Mining
 Data mining
 Discovers hidden models
 Tries to generalize all of the data into a single model
 Marketing, medicine, health care, etc.
 Text mining
 Discovers hidden facts
 Tries to understand the details, cross-referencing between individual instances
 Customer profile analysis, trend analysis, information filtering and routing, etc.
Where are we?
 OTMI: text mining of research papers
 Text2genome project: mapping the genome
 Neurosynth: mapping the brain
 SureChem: extracting molecules from patents
 Drug discovery: finding indirect links between diseases and drugs by searching MEDLINE
 DTD (Document Type Definition): provides semantic cues
 NaCTeM (National Centre for Text Mining): provides tools and research facilities
Conclusion
 This paper introduced text mining and how it
differs from data mining proper.
 Focused on the tasks of feature extraction
and clustering/categorization
 Presented an overview of the tools/methods
of IBM’s Intelligent Miner for Text
Questions?
Thanks!
Exam Question #1
 What are the two aspects of Text Mining?
 Knowledge Discovery: discovering a common customer complaint in a large collection of documents containing customer feedback
 Information Distillation: filtering future comments into pre-defined categories
Exam Question #2
 How does the procedure for text mining
differ from the procedure for data mining?
 Adds a feature extraction phase
 It is infeasible for humans to select features manually
 The feature vectors are, in general, high-dimensional and sparse
Exam Question #3
 In the Nominator program of IBM’s Intelligent
Miner for Text, an objective of the design is to
enable rapid extraction of names from large
amounts of text. How does this decision affect the
ability of the program to interpret the semantics of
text?

Does not perform in-depth syntactic or semantic analysis of the text; the results are fast but only heuristic with regard to the actual semantics of the text.