CII504
Intelligent Engine
© 2005 Irfan Subakti
Department of Informatics
Sepuluh Nopember Institute of Technology
Surabaya - Indonesia
Outline

Course overview
Course overview

Credit: 3 credits (50 minutes x 3 = 150 minutes)
Prerequisites: Artificial Intelligence (CI1420)
Goals
Students will understand machine learning: how a computer program can improve its performance on a task through training on a set of examples (a training or learning set).
Students will gain theoretical knowledge of several concepts: inductive bias, the Probably Approximately Correct (PAC) and mistake-bound learning frameworks, the Minimum Description Length principle, and Occam's Razor (a standard PAC sample-complexity bound is sketched after this list).
Students will learn practical applications, i.e., learning algorithms such as Decision Tree learning, Neural Network learning, Statistical Learning methods, Genetic Algorithms, Bayesian Learning methods, Explanation-based learning, and Reinforcement learning.
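For orientation on the PAC framework mentioned in the second goal, one standard result (presented, e.g., in the Mitchell textbook listed under References) is the sample-complexity bound for a consistent learner over a finite hypothesis space H:

m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)

Here m is a number of training examples sufficient to guarantee, with probability at least 1 - \delta, that every hypothesis in H consistent with the training data has true error at most \epsilon.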
Course overview (cont.)

Contents

Introduction to machine learning
Well-posed learning problems; designing a learning system: choosing the training experience, choosing the target function, choosing a representation for the target function, choosing a function approximation algorithm, final design; perspectives and issues in machine learning

Concept learning
Concept learning task: notation and the inductive learning hypothesis, concept learning as search through the space of hypotheses, version spaces, inductive bias

Decision Tree learning
Decision tree representation, the basic decision tree learning algorithm, hypothesis space search in decision tree learning, inductive bias in decision tree learning, issues in decision tree learning: overfitting, incorporating continuous-valued attributes, handling training examples with missing attribute values

Artificial Neural Networks
Neural network representations, perceptrons, multilayer networks and the Backpropagation algorithm, problems with convergence and local minima, hidden layer representations, generalization, overfitting, and stopping criteria

Bayesian Learning
Bayes theorem and concept learning: brute-force Bayes concept learning, MAP hypotheses and consistent learners, maximum likelihood and least-squared error hypotheses, the minimum description length principle, Bayes optimal classifier, Naïve Bayes classifier, Bayesian belief networks, the Expectation Maximization (EM) algorithm

Genetic Algorithm
Representing hypotheses, genetic operators, fitness function and selection, hypothesis space search

Reinforcement Learning
Q learning: the Q function, an algorithm for learning Q, convergence; nondeterministic rewards and actions, temporal difference learning (a minimal tabular Q-learning sketch follows this list)
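As a small preview of the Reinforcement Learning topic above, here is a minimal sketch of tabular Q-learning in Python. The toy chain environment and the names and constants used (step, N_STATES, ALPHA, GAMMA, EPSILON) are illustrative assumptions rather than material from the lecture; only the update rule Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)) is the standard algorithm.

# Minimal tabular Q-learning sketch on an assumed 4-state chain environment.
# Environment and hyperparameters are hypothetical, for illustration only.
import random
from collections import defaultdict

N_STATES = 4                     # states 0..3; state 3 is the rewarding terminal state
ACTIONS = [-1, +1]               # move left or right along the chain
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

Q = defaultdict(float)           # Q[(state, action)], initialised to 0

def step(state, action):
    """Deterministic transition; reward 1 only when the goal state is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

for episode in range(200):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update toward r + gamma * max_a' Q(s', a')
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

print({k: round(v, 2) for k, v in Q.items()})

Running the script prints the learned Q-values; their greedy policy moves right toward the rewarding terminal state, illustrating the convergence behaviour discussed under the Reinforcement Learning topic.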
Course overview (cont.)

References

Tom M. Mitchell, Machine Learning, International Edition, McGraw-Hill, Singapore, 1997.
Mitsuo Gen, Runwei Cheng, Genetic Algorithms and Engineering Design, John Wiley & Sons, Inc., New York, USA, 1997.
Richard O. Duda, Peter E. Hart, David G. Stork, Pattern Classification, Second Edition, John Wiley & Sons, Inc., USA, 2001.
Stuart J. Russell and Peter Norvig, Artificial Intelligence – A Modern Approach, Second Edition, Prentice Hall – Pearson Education, Inc., New Jersey, USA, 2003.
IEEE Transactions on Neural Networks, The Institute of Electrical and Electronics Engineers, Inc.
IEEE Transactions on Pattern Analysis and Machine Intelligence, The Institute of Electrical and Electronics Engineers, Inc.