Data Mining and Visualisation Coursework Semester A 2012/13
DATA MINING AND VISUALISATION
Task 1: Your Personal Data Warehouse Application (PDWA)
[25 marks in total]
As the module progresses you will build a substantial data warehouse application for a real-world
scenario of your choosing. You will design a star schema for the data warehouse. The data warehouse
is a metaphor for multidimensional data storage. The actual physical storage of such data may differ
from its logical representation. Assuming the data are stored in a relational database (Relational OLAP), you will create an actual data warehouse using a DBMS such as Microsoft Access, Microsoft SQL Server or Oracle. A data warehouse can also be constructed through array-based multidimensional storage (Multidimensional OLAP). This data structure supports direct array addressing: dimension values are accessed via the position, or index, of their corresponding array locations.
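As a minimal illustration of direct array addressing (not part of the coursework specification itself), the sketch below builds a tiny MOLAP-style cube. Python is used for brevity; the same idea carries over to C, C++, Java or VB. The dimension names and values are invented for the example.

```python
# A toy MOLAP cube: three dimensions, one measure (a grade).
semesters = ["2012A", "2012B"]
modules = ["DM", "VIS"]
students = ["s1", "s2", "s3"]

# Map each dimension value to its array position once, up front.
sem_idx = {v: i for i, v in enumerate(semesters)}
mod_idx = {v: i for i, v in enumerate(modules)}
stu_idx = {v: i for i, v in enumerate(students)}

# The cube itself is a dense 3-D array of measure values.
cube = [[[0.0 for _ in students] for _ in modules] for _ in semesters]

# A cell is reached by index arithmetic, not by a key lookup or a join.
cube[sem_idx["2012A"]][mod_idx["DM"]][stu_idx["s2"]] = 72.0
print(cube[0][0][1])  # 72.0
```

Because every cell has a fixed position, reading or writing a measure is a constant-time array access; this is the performance argument for MOLAP storage.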
Your first step is to identify the domain you would like to manage with your data warehouse, and to construct an entity-relationship diagram for it. I suggest that you pick an application that you will enjoy working with: a hobby, material from another course, a research project, etc. Try to pick an application that is relatively substantial, but not too enormous. For example, a data warehouse for a university might consist of four dimensions (student, module, semester, and lecturer) and two measures (count and avg_grade). At the lowest conceptual level (e.g., for a given student, module, semester and lecturer combination), the avg_grade measure stores the actual module grade of the student. At higher conceptual levels, avg_grade stores the average grade for the given combination. [Note: you must not use the university scenario, or a similar one, in your own coursework!] Your data warehouse should consist of at least four dimensions, one of which should be a time dimension. When expressed in the entity-relationship model, your design should have one fact table plus four (or more) dimensional tables, and a similar number of relationships. You should certainly include one-to-many relationships. Each dimension should have at least three levels (including all), such as student < course < university (all).
a) Describe the data warehouse application you propose to work with throughout the module. Your
description should be brief and relatively formal. If there are any unique or particularly difficult
aspects of your proposed application, please point them out. Your description will be graded only
on suitability and conciseness.
[2 marks]
b) [ROLAP] Draw a star schema diagram, including attributes, for your data warehouse. Don't forget to underline primary key attributes and to include arrowheads indicating the multiplicity of relationships. Write an SQL database schema for your PDWA using CREATE TABLE commands (pick suitable data types for each attribute), and use INSERT commands to insert tuples. You need to populate the data warehouse with sample data (at least five attributes for each dimensional table, and at least three records in each table) for manipulating the data warehouse. For this task, you ONLY need to submit the star schema diagram and the populated tables.
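For orientation only, here is a minimal sketch of what a star schema plus sample data can look like, run through SQLite from Python for brevity (any relational DBMS would do). The retail "sales" domain and all table and column names are invented for this example; your own warehouse must use a different domain and at least four dimension tables.

```python
import sqlite3

# An abridged star schema: one fact table and two of the dimension tables.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_time  (time_id  INTEGER PRIMARY KEY, day INTEGER, month INTEGER, year INTEGER);
CREATE TABLE dim_store (store_id INTEGER PRIMARY KEY, name TEXT, city TEXT, country TEXT);
CREATE TABLE fact_sales (
    time_id    INTEGER REFERENCES dim_time(time_id),
    store_id   INTEGER REFERENCES dim_store(store_id),
    units_sold INTEGER,
    revenue    REAL
);
INSERT INTO dim_time  VALUES (1, 5, 1, 2013), (2, 6, 1, 2013);
INSERT INTO dim_store VALUES (1, 'Main', 'Hatfield', 'UK');
INSERT INTO fact_sales VALUES (1, 1, 10, 99.5), (2, 1, 4, 40.0);
""")
total = con.execute("SELECT SUM(revenue) FROM fact_sales").fetchone()[0]
print(total)  # 139.5
```

Note that the fact table holds only foreign keys into the dimension tables plus the numeric measures; that separation is what makes it a star schema.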
[5 marks]
c) [ROLAP] Starting with the base cuboid [e.g., student, module, semester, lecturer], carry out two
OLAP operations. For example, what specific OLAP operations should you perform in order to list
the average grade of the Data Mining module for each university student in the university
scenario? Write and run an SQL query to obtain a list resembling the above example. Provide a screenshot as proof that your query worked.
[6 marks]
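The roll-up in the university example above amounts to an aggregate SQL query over the base cuboid. The following self-contained sketch (again SQLite from Python; the schema and data are invented for illustration, and the university scenario itself remains off-limits for your own warehouse) rolls up the semester and lecturer dimensions to get one average per student:

```python
import sqlite3

# A tiny base cuboid: one row per (student, module, semester, lecturer).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fact_grades (student TEXT, module TEXT, semester TEXT,
                          lecturer TEXT, grade REAL);
INSERT INTO fact_grades VALUES
  ('alice', 'Data Mining', 'A', 'smith', 70.0),
  ('alice', 'Data Mining', 'B', 'jones', 80.0),
  ('bob',   'Data Mining', 'A', 'smith', 60.0);
""")
# Slice on module, then roll up semester and lecturer via GROUP BY.
rows = con.execute("""
    SELECT student, AVG(grade)
    FROM fact_grades
    WHERE module = 'Data Mining'
    GROUP BY student
    ORDER BY student
""").fetchall()
print(rows)  # [('alice', 75.0), ('bob', 60.0)]
```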
d) [MOLAP] Use a high-level language of your choice, such as C, C++, Java or VB, to implement a multi-dimensional array for your data warehouse. Populate your arrays, then perform the same
operation as described in c). Compare solution c) with d) and resolve any differences.
[8 marks]
e) [MOLAP] Unfortunately, a cube implemented this way often yields a huge, yet very sparse, multidimensional matrix. Present an example illustrating such a huge and sparse data cube. Describe an implementation method that can elegantly overcome this sparse-matrix problem.
[4 marks]
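One widely used family of answers is to materialise only the non-empty cells, keyed by their coordinates, rather than allocating the full dense array. A minimal Python sketch of this idea (the cell values are invented for the example):

```python
# Store only occupied cells in a hash map keyed by the coordinate tuple.
sparse_cube = {}

def put(cell, value):
    sparse_cube[cell] = value

def get(cell):
    # Absent cells are implicitly zero/empty, so the dense array never exists.
    return sparse_cube.get(cell, 0.0)

# Only 2 of the (say) 1000 x 1000 x 1000 potential cells are materialised.
put(("s1", "DM", "2012A"), 72.0)
put(("s2", "VIS", "2012B"), 65.0)
print(get(("s1", "DM", "2012A")), len(sparse_cube))  # 72.0 2
```

Memory now grows with the number of occupied cells rather than with the product of the dimension cardinalities, at the cost of a hash lookup instead of direct index arithmetic.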
Task 2: Choose one from the following three tasks.
[15 marks in total]
a) Mining association rules over distributed databases
Review algorithms for mining association rules over distributed databases.
b) Mining classification over large databases
Review algorithms for classification over large databases (focusing on efficiency and scalability).
c) Mining clusters over large databases
Review algorithms for clustering over large databases (focusing on performance, e.g. efficiency, scalability, and the ability to deal with noise and outliers).
Task 3:
[30 marks in total]
A database in .ARFF format has been provided for you on Studynet. Analyse this database using the
WEKA toolkit and tools introduced within this module. Produce a report explaining which tools you used
and why, what results you obtained, and what this tells you about the data. Marks will be awarded for:
variety of tools used, quality of analysis, and interpretation of the results. An extensive report is not
required (at most 4000 words), nor is detailed explanation of the techniques employed, but any graphs
or tables produced should be described and analysed in the text. A reasonable report could be achieved
by doing a thorough analysis using three techniques. An excellent report would use at least four tools to
analyse the dataset, and provide detailed comparisons between the results.
You should perform the following steps:
1. Analyse the attributes in the data, and consider their relative importance with respect to the target
class.
2. Construct graphs of classification performance against training set size for a range of classifiers taken
from those considered in the module.
3. Analyse the data structure/representation generated by each classifier when trained on the complete
dataset.
4. Combine the results from the previous three steps and all your classifiers to develop a model of why
instances fall into particular classes.
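Purely as an illustration of step 2 (the coursework expects WEKA's classifiers and output, not hand-rolled ones), the following Python sketch computes the kind of accuracy-versus-training-set-size figures a learning curve plots, using a simple 1-nearest-neighbour classifier on synthetic one-dimensional data:

```python
import random

# Synthetic data: label is 1 when the feature exceeds 0.5, else 0.
random.seed(0)
data = [(x, 1 if x > 0.5 else 0) for x in (random.random() for _ in range(300))]
train_pool, test = data[:200], data[200:]

def predict(train, x):
    # 1-NN: return the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Accuracy on a fixed test set, for increasing training-set sizes.
for size in (10, 50, 100, 200):
    train = train_pool[:size]
    acc = sum(predict(train, x) == y for x, y in test) / len(test)
    print(size, round(acc, 2))
```

Plotting these (size, accuracy) pairs gives the learning curve; in your report the points would come from WEKA runs on the provided .ARFF dataset instead.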
Produce a report containing your answers to the above.
[Total 30 marks]