CS4670/5670: Computer Vision
Kavita Bala
Lecture 27: Recognition Basics

Slides from Andrej Karpathy and Fei-Fei Li, http://vision.stanford.edu/teaching/cs231n/
Announcements
•  PA 3 artifact voting: vote by Tuesday night

Today
•  Image classification pipeline
•  Training, validation, testing
•  Score function and loss function
•  Building up to CNNs for learning
   –  5-6 lectures on deep learning

Image Classification
Image Classification: Problem

Data-driven approach
•  Collect a database of images with labels
•  Use ML to train an image classifier
•  Evaluate the classifier on test images
Train and Test
•  Split the dataset into training images and test images
•  Be careful about inflating results by evaluating on images seen during training

Classifiers
•  Nearest Neighbor
•  kNN
•  SVM
•  …

Nearest Neighbor Classifier
•  Train
   –  Remember all training images and their labels
•  Predict
   –  Find the closest (most similar) training image
   –  Predict its label as the true label
How do we find the most similar training image? What is the distance metric?
Choice of distance metric
•  Hyperparameter (e.g., L1 / Manhattan distance vs. L2 / Euclidean distance)
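To make the train/predict split concrete, here is a minimal NumPy sketch of the nearest-neighbor classifier (an illustration added here, not code from the lecture), with the distance metric exposed as the hyperparameter just discussed:

```python
import numpy as np

class NearestNeighbor:
    """Sketch of a nearest-neighbor image classifier.

    Assumes images are flattened into 1-D float vectors."""

    def train(self, X, y):
        # "Training" just memorizes the data.
        self.X_train = X
        self.y_train = y

    def predict(self, X, metric="l2"):
        preds = np.empty(X.shape[0], dtype=self.y_train.dtype)
        for i, x in enumerate(X):
            if metric == "l1":
                dists = np.sum(np.abs(self.X_train - x), axis=1)
            else:
                dists = np.sqrt(np.sum((self.X_train - x) ** 2, axis=1))
            preds[i] = self.y_train[np.argmin(dists)]  # label of the closest image
        return preds
```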
k-nearest neighbor
•  Find the k closest points from the training data
•  Labels of the k points "vote" to classify

How to pick hyperparameters?
•  Methodology
   –  Train and test
   –  Train, validate, test
•  Train the model itself on the training set
•  Validate to choose hyperparameters
•  Test to measure generalization

Validation
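A sketch of the k-NN vote and of picking k on a held-out validation split (illustrative only; the array names X_tr, y_tr, X_val, y_val are assumed splits, not defined in the slides):

```python
from collections import Counter
import numpy as np

def knn_predict(X_train, y_train, x, k=5):
    """Classify one test point by majority vote of its k nearest neighbors (L2)."""
    dists = np.sqrt(np.sum((X_train - x) ** 2, axis=1))
    nearest = np.argsort(dists)[:k]       # indices of the k closest training points
    votes = Counter(y_train[nearest])     # labels of the k points "vote"
    return votes.most_common(1)[0][0]

def accuracy(X, y, X_train, y_train, k):
    preds = np.array([knn_predict(X_train, y_train, x, k) for x in X])
    return np.mean(preds == y)

# Choose k on the validation split; touch the test set only once at the end.
best_k = max([1, 3, 5, 10, 20],
             key=lambda k: accuracy(X_val, y_val, X_tr, y_tr, k))
```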
Cross-validation
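A sketch of 5-fold cross-validation for choosing k, reusing knn_predict from the previous sketch (again an illustration, not lecture code):

```python
import numpy as np

def cross_validate_k(X, y, ks=(1, 3, 5, 10, 20), folds=5):
    """Pick the k with the best mean accuracy across folds."""
    X_folds = np.array_split(X, folds)
    y_folds = np.array_split(y, folds)
    mean_acc = {}
    for k in ks:
        accs = []
        for i in range(folds):
            X_val, y_val = X_folds[i], y_folds[i]   # fold i held out
            X_tr = np.concatenate(X_folds[:i] + X_folds[i + 1:])
            y_tr = np.concatenate(y_folds[:i] + y_folds[i + 1:])
            preds = np.array([knn_predict(X_tr, y_tr, x, k) for x in X_val])
            accs.append(np.mean(preds == y_val))
        mean_acc[k] = np.mean(accs)
    return max(mean_acc, key=mean_acc.get)
```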
CIFAR-10 and NN results
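For readers who want to reproduce such experiments, one convenient way to obtain CIFAR-10 is the Keras dataset loader; this is an assumption about tooling, not what the course used:

```python
# Assumption: TensorFlow/Keras is available; any CIFAR-10 loader works equally well.
from tensorflow.keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.reshape(50000, -1).astype("float32")  # flatten 32x32x3 images
x_test = x_test.reshape(10000, -1).astype("float32")
```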
Visualization: L2 distance

Complexity and Storage
•  N training images, M testing images
•  Training: O(1)
•  Testing: O(MN)
•  Hmm
   –  Normally we need the opposite
   –  Slow training (OK), fast testing (necessary)

Summary
•  Data-driven: train, validate, test
   –  Need labeled data
•  Classifier
   –  Nearest neighbor, kNN (approximate NN, ANN)

Score function
Linear Classifier

Computing scores

Geometric Interpretation

Interpretation: Template matching

Linear classifiers
•  Find a linear function (hyperplane) to separate positive and negative examples:
   x_i positive:   x_i · w + b ≥ 0
   x_i negative:   x_i · w + b < 0
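As an illustration of computing scores with a linear classifier (names and shapes chosen for CIFAR-10; not lecture code):

```python
import numpy as np

def scores(W, b, x):
    """Linear score function f(x) = W x + b.

    For CIFAR-10: x is a flattened 3072-vector, W is 10x3072
    (one row per class), and b is a 10-vector of biases."""
    return W.dot(x) + b

# The predicted class is the one with the highest score; each row of W
# acts as a template matched against x (the template-matching view above).
# predicted_class = np.argmax(scores(W, b, x))
```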
Which hyperplane is best?

Support vector machines
•  Find the hyperplane that maximizes the margin between the positive and negative examples:
   x_i positive (y_i = 1):    x_i · w + b ≥ 1
   x_i negative (y_i = −1):   x_i · w + b ≤ −1
•  For support vectors, x_i · w + b = ±1
•  Distance between a point and the hyperplane:  |x_i · w + b| / ||w||
•  Support vectors therefore lie at distance 1/||w|| on either side of the hyperplane, so the margin is 2/||w||

C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, 1998
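A small sketch of these formulas in code (w and b stand for a learned hyperplane; they are placeholders, not values from the lecture):

```python
import numpy as np

def distance_to_hyperplane(x, w, b):
    """Distance from point x to the hyperplane x·w + b = 0."""
    return abs(x.dot(w) + b) / np.linalg.norm(w)

def margin(w):
    """Support vectors satisfy |x·w + b| = 1, so the margin is 2/||w||."""
    return 2.0 / np.linalg.norm(w)
```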
Bias Trick
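The bias trick folds b into the weight matrix by appending a constant 1 to every input, so the score function becomes a single matrix multiply. A sketch (illustrative, consistent with the usual cs231n presentation):

```python
import numpy as np

def fold_bias(W, b):
    """Build W' = [W | b] by appending the bias as an extra column."""
    return np.hstack([W, b.reshape(-1, 1)])

def augment(x):
    """Build x' = [x; 1] by appending a constant 1 to the input."""
    return np.append(x, 1.0)

# Then fold_bias(W, b).dot(augment(x)) == W.dot(x) + b,
# so the score function f(x) = W x + b is a single matrix multiply.
```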