Lecture 2: Math Primer
Machine Learning
CUNY Graduate Center
Today
• Probability and Statistics
– Naïve Bayes Classification
• Linear Algebra
– Matrix Multiplication
– Matrix Inversion
• Calculus
– Vector Calculus
– Optimization
– Lagrange Multipliers
1
Classical Artificial Intelligence
• Expert Systems
• Theorem Provers
• Shakey
• Chess
• Largely characterized by determinism.
2
Modern Artificial Intelligence
• Fingerprint ID
• Internet Search
• Vision – facial ID, object recognition
• Speech Recognition
• Asimo
• Jeopardy!
• Statistical modeling to generalize from data.
3
Two Caveats about Statistical
Modeling
• Black Swans
• The Long Tail
4
Black Swans
• In the 17th Century, all known swans were white.
• Based on the available evidence, it seemed impossible for a
swan to be anything other than white.
• In the 18th Century, black swans were
discovered in Western Australia.
• Black Swans are rare, sometimes unpredictable
events that have extreme impact.
• Almost all statistical models underestimate the
likelihood of unseen events.
5
The Long Tail
• Many events follow an exponential distribution.
• These distributions have a very long “tail”.
– I.e., a large region with significant probability mass, but low
likelihood at any particular point.
• Often, interesting events occur in the Long Tail, but it
is difficult to accurately model behavior in this region.
6
Boxes and Balls
• 2 Boxes, one red and one blue.
• Each contains colored balls.
7
Boxes and Balls
• Suppose we randomly select a box, then
randomly draw a ball from that box.
• The identity of the Box is a random
variable, B.
• The identity of the ball is a random
variable, L.
• B can take 2 values, r or b.
• L can take 2 values, g or o.
8
Boxes and Balls
• Given some information about B and L, we
want to ask questions about the likelihood
of different events.
• What is the probability of selecting a
green ball?
• If I chose an orange ball, what is the
probability that I chose from the blue box?
9
Some basics
• The probability (or likelihood) of an event is the
fraction of times that the event occurs out of n
trials, as n approaches infinity.
• Probabilities lie in the range [0,1]
• Mutually exclusive events are events that cannot
simultaneously occur.
– The probabilities of a set of mutually exclusive and
exhaustive events must sum to 1.
• If two events are independent then,
p(X, Y) = p(X)p(Y)
p(X|Y) = p(X)
10
Joint Probability – P(X,Y)
• A Joint Probability function defines the likelihood of two
(or more) events occurring.
            Blue box   Red box   Total
Orange          1          6        7
Green           3          2        5
Total           4          8       12
• Let nij be the number of times event i and event j
simultaneously occur.
11
Generalizing the Joint Probability
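In symbols, with n_ij the count of joint occurrences and N the total number of trials, the standard definition is:
p(X = x_j, Y = y_i) = \frac{n_{ij}}{N}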
12
Marginalization
• Consider the probability of X irrespective of Y.
• The number of instances in column j is the sum of
the instances in each cell of that column.
• Therefore, we can marginalize or “sum over” Y:
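In symbols, summing the joint distribution over all values of Y gives the marginal:
p(X = x_j) = \sum_i p(X = x_j, Y = y_i) = \frac{c_j}{N}, \quad \text{where } c_j = \sum_i n_{ij} \text{ is the column total.}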
13
Conditional Probability
• Consider only instances where X = xj.
• The fraction of these instances where Y =
yi is the conditional probability
– “The probability of y given x”
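In terms of the counts:
p(Y = y_i \mid X = x_j) = \frac{n_{ij}}{c_j}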
14
Relating the Joint, Conditional and
Marginal
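Combining the definitions above gives the standard relationship:
p(X = x_j, Y = y_i) = \frac{n_{ij}}{N} = \frac{n_{ij}}{c_j} \cdot \frac{c_j}{N} = p(Y = y_i \mid X = x_j)\, p(X = x_j)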
15
Sum and Product Rules
• In general, we’ll refer to a distribution over a
random variable as p(X) and a distribution
evaluated at a particular value as p(x).
Sum Rule
Product Rule
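In this notation:
Sum rule: p(X) = \sum_Y p(X, Y)
Product rule: p(X, Y) = p(Y \mid X)\, p(X)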
16
Bayes Rule
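From the product rule and the symmetry p(X, Y) = p(Y, X), Bayes rule follows:
p(Y \mid X) = \frac{p(X \mid Y)\, p(Y)}{p(X)}, \quad \text{where } p(X) = \sum_Y p(X \mid Y)\, p(Y)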
17
Interpretation of Bayes Rule
Posterior
Prior
Likelihood
• Prior: Information we have before
observation.
• Posterior: The distribution of Y after
observing X
• Likelihood: The likelihood of observing X
given Y
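Putting the labels on the formula:
p(Y \mid X) = \frac{p(X \mid Y)\, p(Y)}{p(X)} \quad \Rightarrow \quad \text{posterior} \propto \text{likelihood} \times \text{prior}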
18
Boxes and Balls with Bayes Rule
• Assume I’m inherently more likely to
select the red box (66.6%) than the blue
box (33.3%).
• If I selected an orange ball, what is the
likelihood that I selected the red box?
– The blue box?
19
Boxes and Balls
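A worked calculation, using the ball proportions from the joint table above (blue box: 1 orange, 3 green; red box: 6 orange, 2 green):
p(L = o \mid B = r) = 6/8 = 3/4, \quad p(L = o \mid B = b) = 1/4
p(L = o) = p(o \mid r)\,p(r) + p(o \mid b)\,p(b) = \tfrac{3}{4}\cdot\tfrac{2}{3} + \tfrac{1}{4}\cdot\tfrac{1}{3} = \tfrac{7}{12}
p(B = r \mid L = o) = \frac{p(o \mid r)\,p(r)}{p(o)} = \frac{1/2}{7/12} = \tfrac{6}{7} \approx 0.86, \qquad p(B = b \mid L = o) = \tfrac{1}{7} \approx 0.14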
20
Naïve Bayes Classification
• This is a simple case of a basic
classification approach.
• Here the Box is the class, and the colored
ball is a feature, or the observation.
• We can extend this Bayesian classification
approach to incorporate more
independent features.
21
Naïve Bayes Classification
• Some theory first.
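The core idea is to pick the class with the highest posterior probability; a standard statement is:
\hat{y} = \arg\max_y p(y \mid x) = \arg\max_y \frac{p(x \mid y)\, p(y)}{p(x)} = \arg\max_y p(x \mid y)\, p(y)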
22
Naïve Bayes Classification
• Assuming independent features simplifies
the math.
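With D features x = (x_1, ..., x_D) assumed conditionally independent given the class:
p(x_1, \ldots, x_D \mid y) = \prod_{d=1}^{D} p(x_d \mid y) \quad \Rightarrow \quad \hat{y} = \arg\max_y \; p(y) \prod_{d=1}^{D} p(x_d \mid y)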
23
Naïve Bayes Example Data
HOT    LIGHT   SOFT   RED
COLD   HEAVY   SOFT   RED
HOT    HEAVY   FIRM   RED
HOT    LIGHT   FIRM   RED
COLD   LIGHT   SOFT   BLUE
COLD   HEAVY   FIRM   BLUE
HOT    HEAVY   FIRM   BLUE
HOT    LIGHT   FIRM   BLUE
HOT    HEAVY   FIRM   ?????
24
Naïve Bayes Example Data
(Same data table as the previous slide.)
Prior:
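From the table, 4 of the 8 labeled examples are RED and 4 are BLUE, so the priors are:
p(RED) = 4/8 = 1/2, \qquad p(BLUE) = 4/8 = 1/2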
25
Naïve Bayes Example Data
(Same data table as the previous slides.)
26
Naïve Bayes Example Data
(Same data table as the previous slides.)
27
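A minimal sketch of naïve Bayes on this categorical data (the function and variable names are illustrative, not from the lecture; smoothing is omitted, so unseen feature values get zero probability):

from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Estimate the prior p(y) and per-feature likelihoods p(x_d | y) by counting."""
    priors = Counter(labels)                 # class -> count
    counts = defaultdict(int)                # (feature index, value, class) -> count
    for row, label in zip(rows, labels):
        for d, value in enumerate(row):
            counts[(d, value, label)] += 1
    return priors, counts, len(labels)

def classify(query, priors, counts, n):
    """Return argmax_y p(y) * prod_d p(x_d | y)."""
    best_label, best_score = None, -1.0
    for label, label_count in priors.items():
        score = label_count / n                                      # prior p(y)
        for d, value in enumerate(query):
            score *= counts.get((d, value, label), 0) / label_count  # likelihood p(x_d | y)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Training data from the example table above (three features, then the class).
rows = [("HOT", "LIGHT", "SOFT"), ("COLD", "HEAVY", "SOFT"),
        ("HOT", "HEAVY", "FIRM"), ("HOT", "LIGHT", "FIRM"),
        ("COLD", "LIGHT", "SOFT"), ("COLD", "HEAVY", "FIRM"),
        ("HOT", "HEAVY", "FIRM"), ("HOT", "LIGHT", "FIRM")]
labels = ["RED", "RED", "RED", "RED", "BLUE", "BLUE", "BLUE", "BLUE"]

priors, counts, n = train_naive_bayes(rows, labels)
print(classify(("HOT", "HEAVY", "FIRM"), priors, counts, n))  # most probable class under these counts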
Continuous Probabilities
• So far, X has been discrete where it can take
one of M values.
• What if X is continuous?
• Now p(x) is a continuous probability density
function.
• The probability that x will lie in an interval
(a,b) is:
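In symbols:
p(x \in (a, b)) = \int_a^b p(x)\, dx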
28
Continuous probability example
29
Properties of probability density
functions
Sum Rule
Product Rule
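For densities, the analogous properties are:
p(x) \ge 0, \qquad \int_{-\infty}^{\infty} p(x)\, dx = 1
Sum rule: p(x) = \int p(x, y)\, dy
Product rule: p(x, y) = p(y \mid x)\, p(x)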
30
Expected Values
• Given a random variable, with a distribution
p(X), what is the expected value of X?
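For a discrete variable the expectation is a weighted sum; for a continuous one it is an integral:
E[X] = \sum_x x\, p(x) \qquad E[X] = \int x\, p(x)\, dx
More generally, for a function f: E[f] = \sum_x f(x)\, p(x).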
31
Multinomial Distribution
• If a variable, x, can take 1-of-K states, we
represent the distribution of this variable
as a multinomial distribution.
• The probability of x being in state k is μk
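Representing x as a 1-of-K binary vector (x_k ∈ {0, 1}, \sum_k x_k = 1), the distribution can be written as:
p(x \mid \mu) = \prod_{k=1}^{K} \mu_k^{x_k}, \quad \text{with } \mu_k \ge 0 \text{ and } \sum_k \mu_k = 1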
32
Expected Value of a Multinomial
• The expected value is the vector of mean values: E[x | μ] = μ = (μ_1, ..., μ_K)^T.
33
Gaussian Distribution
• One Dimension
• D-Dimensions
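The densities are:
\mathcal{N}(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)
\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \Sigma) = \frac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})\right)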
34
Gaussians
35
How machine learning uses
statistical modeling
• Expectation
– The expected value of a function is the
hypothesis
• Variance
– The variance is the confidence in that
hypothesis
36
Variance
• The variance of a random variable describes
how much variability around the expected
value there is.
• Calculated as the expected squared error.
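In symbols:
\mathrm{var}[X] = E\big[(X - E[X])^2\big] = E[X^2] - E[X]^2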
37
Covariance
• The covariance of two random variables
expresses how they vary together.
• If two variables are independent, their
covariance equals zero.
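In symbols:
\mathrm{cov}[X, Y] = E\big[(X - E[X])(Y - E[Y])\big] = E[XY] - E[X]E[Y]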
38
Linear Algebra
• Vectors
– A one-dimensional array.
– If not specified, assume x is a column
vector.
• Matrices
– A two-dimensional array.
– Typically denoted with capital letters.
– n rows by m columns
39
Transposition
• Transposing a matrix swaps columns and
rows.
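Formally, the (i, j) entry of the transpose is the (j, i) entry of the original:
(A^T)_{ij} = A_{ji}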
40
Transposition
• Transposing a matrix swaps columns and
rows.
41
Addition
• Matrices can be added together iff
they have the same dimensions.
– A and B are both n-by-m matrices.
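Addition is element-wise:
(A + B)_{ij} = A_{ij} + B_{ij}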
42
Multiplication
• To multiply two matrices, the inner dimensions must
be the same.
– An n-by-m matrix can be multiplied by an m-by-k matrix
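The product is n-by-k, with each entry an inner product of a row of the first matrix and a column of the second:
(AB)_{ij} = \sum_{l=1}^{m} A_{il} B_{lj}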
43
Inversion
• The inverse of an n-by-n (square) matrix
A is denoted A^-1.
• I denotes the identity matrix: an n-by-n
matrix with ones along the diagonal.
– I_ij = 1 iff i = j, 0 otherwise
• A^-1 has the following property:
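A A^{-1} = A^{-1} A = I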
44
Identity Matrix
• Matrices are invariant under multiplication
by the identity matrix.
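That is, for an identity matrix of matching dimension:
I A = A, \qquad A I = A, \qquad I\mathbf{x} = \mathbf{x}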
45
Helpful matrix inversion properties
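Standard identities of this kind include:
(A^{-1})^{-1} = A, \qquad (AB)^{-1} = B^{-1} A^{-1}, \qquad (A^T)^{-1} = (A^{-1})^T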
46
Norm
• The norm of a vector, x, represents the
Euclidean length of the vector.
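\|x\| = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2} = \sqrt{x^T x}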
47
Positive Definite-ness
• Quadratic form
– Scalar
– Vector
• Positive Definite matrix M
• Positive Semi-definite
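The standard definitions: for a vector x, the quadratic form x^T M x is a scalar, and
M \text{ is positive definite} \iff x^T M x > 0 \text{ for all } x \ne 0
M \text{ is positive semi-definite} \iff x^T M x \ge 0 \text{ for all } x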
48
Calculus
• Derivatives and Integrals
• Optimization
49
Derivatives
• The derivative of a function gives the
slope of the function at a point x.
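Formally:
f'(x) = \frac{df}{dx} = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}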
50
Derivative Example
51
Integrals
• Integration is the inverse operation of
differentiation (up to an additive constant).
• Graphically, an integral can be considered
the area under the curve defined by f(x)
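In symbols:
\int f'(x)\, dx = f(x) + C \qquad \int_a^b f(x)\, dx = \text{area under } f \text{ between } a \text{ and } b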
52
Integration Example
53
Vector Calculus
• Differentiation with respect to a matrix or
vector
• Gradient
• Change of Variables with a Vector
54
Derivative w.r.t. a vector
• Given a vector x, and a function f(x), how
can we find f’(x)?
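For f : R^n → R, the derivative with respect to x collects the partial derivatives into the gradient vector:
\nabla_x f(x) = \left[ \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n} \right]^T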
55
Derivative w.r.t. a vector
• Given a vector x, and a function f(x), how
can we find f’(x)?
56
Example Derivation
57
Example Derivation
Also referred to as the gradient of a function.
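As an illustration: for f(x) = x^T A x,
\nabla_x f(x) = (A + A^T)\, x, \quad \text{which reduces to } 2Ax \text{ when } A \text{ is symmetric.}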
58
Useful Vector Calculus identities
• Scalar Multiplication
• Product Rule
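Standard forms of these identities, for scalar-valued f and g and constant a:
\nabla_x \big( a\, f(x) \big) = a\, \nabla_x f(x)
\nabla_x \big( f(x)\, g(x) \big) = f(x)\, \nabla_x g(x) + g(x)\, \nabla_x f(x)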
59
Useful Vector Calculus identities
• Derivative of an inverse
• Change of Variable
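Standard forms: for a matrix A(x) depending on a scalar x, and for an invertible change of variables y = g(x),
\frac{\partial A^{-1}}{\partial x} = -A^{-1} \frac{\partial A}{\partial x} A^{-1}
p_y(\mathbf{y}) = p_x\!\big(g^{-1}(\mathbf{y})\big)\, \left| \det \frac{\partial g^{-1}(\mathbf{y})}{\partial \mathbf{y}} \right|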
60
Optimization
• Have an objective function that we’d like to
maximize or minimize, f(x)
• Set the first derivative to zero.
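That is, we look for stationary points x* with
f'(x^*) = 0 \quad (\text{or } \nabla_x f(x^*) = 0 \text{ in higher dimensions}),
and use the second derivative (or Hessian) to distinguish maxima from minima.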
61
Optimization with constraints
• What if I want to constrain the parameters
of the model?
– E.g., the mean is less than 10.
• Find the best likelihood, subject to a
constraint.
• Two functions:
– An objective function to maximize
– An inequality that must be satisfied
62
Lagrange Multipliers
• Find maxima of
f(x,y) subject to a
constraint.
63
General form
• Maximizing:
• Subject to:
• Introduce a new variable, and find a
maximum.
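A standard statement: to maximize f(x) subject to g(x) = 0, introduce the multiplier λ and form the Lagrangian,
\Lambda(x, \lambda) = f(x) + \lambda\, g(x), \quad \text{then solve } \nabla_x \Lambda = 0 \text{ and } \frac{\partial \Lambda}{\partial \lambda} = g(x) = 0.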
64
Example
• Maximizing:
• Subject to:
• Introduce a new variable, and find a
maximum.
65
Example
Now have 3 equations with 3 unknowns.
66
Example
Eliminate Lambda
Substitute and Solve
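An illustrative worked example of the procedure: maximize f(x, y) = 1 − x² − y² subject to g(x, y) = x + y − 1 = 0.
\Lambda(x, y, \lambda) = 1 - x^2 - y^2 + \lambda (x + y - 1)
\partial\Lambda/\partial x = -2x + \lambda = 0, \quad \partial\Lambda/\partial y = -2y + \lambda = 0, \quad \partial\Lambda/\partial\lambda = x + y - 1 = 0
Eliminating λ gives x = y; substituting into the constraint gives x = y = 1/2 (with λ = 1).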
67
Why does Machine Learning need
these tools?
• Calculus
– We need to identify the maximum likelihood, or
minimum risk: optimization.
– Integration allows the marginalization of
continuous probability density functions.
• Linear Algebra
– Many features lead to high-dimensional spaces.
– Vectors and matrices allow us to compactly
describe and manipulate high-dimensional
feature spaces.
68
Why does Machine Learning need
these tools?
• Vector Calculus
– All of the optimization needs to be performed
in high-dimensional spaces.
– Optimization of multiple variables
simultaneously – Gradient Descent.
– We want to take marginals over high-dimensional
distributions like Gaussians.
69
Next Time
• Linear Regression and Regularization
• Read Sections 1.1, 3.1, and 3.3
70