Survey
Deep Metric Learning

Improved Deep Metric Learning with Multi-class N-pair Loss Objective. Kihyuk Sohn. NIPS 2016.
Deep Relative Distance Learning: Tell the Difference Between Similar Vehicles. Liu H, Tian Y, Yang Y, et al. CVPR 2016.

Outline
Improved Deep Metric Learning with Multi-class N-pair Loss Objective
• Distance Metric Learning
• Deep Metric Learning with Multiple Negative Examples
• Experimental Results
Deep Relative Distance Learning: Tell the Difference Between Similar Vehicles
• Deep Relative Distance Learning
• Experimental Results

Distance Metric Learning
• The classical objectives are the contrastive loss and the triplet loss.
Hard negative data mining
• An evident way to improve the vanilla triplet loss is to select negative examples that violate the triplet constraint.
• However, hard negative data mining can be expensive when deep metric learning involves a large number of output classes.
• Moreover, during one update the triplet loss compares an example with only a single negative example, ignoring the negatives from all the remaining classes.

Learning to identify from multiple negative examples
• The (N+1)-tuplet loss identifies a positive example from N−1 negative examples drawn from N−1 different classes. When N = 2, it reduces to a smooth analogue of the triplet loss.
• When N > 2, the (L+1)-tuplet loss coupled with a single example per negative class approximates the partition function of the likelihood P(y = y+), where L is the number of output classes.

Deep Metric Learning with Multiple Negative Examples
N-pair loss for efficient deep metric learning
• With a naive batch of (N+1)-tuplets, the number of examples to evaluate per batch grows quadratically in the batch size M and the tuplet size N, so training does not scale to very deep convolutional networks.
• An efficient batch construction is therefore introduced to avoid this excessive computational burden.
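The (N+1)-tuplet loss discussed above appeared only as an equation image in the original slides. The following NumPy sketch (my own illustration, with a hypothetical function name, not code from the paper) reconstructs it from the paper's definition, log(1 + Σᵢ exp(fᵀfᵢ − fᵀf⁺)):

```python
import numpy as np

def tuplet_loss(anchor, positive, negatives):
    """(N+1)-tuplet loss for a single anchor embedding.

    Computes log(1 + sum_i exp(f^T f_i - f^T f+)), where f is the
    anchor embedding, f+ the positive, and f_i the N-1 negatives,
    each drawn from a different class. With one negative (N = 2)
    this is a smooth analogue of the margin-based triplet loss.
    """
    anchor = np.asarray(anchor, dtype=float)
    pos_sim = anchor @ np.asarray(positive, dtype=float)     # f^T f+
    neg_sims = np.asarray(negatives, dtype=float) @ anchor   # f^T f_i per negative
    return float(np.log1p(np.exp(neg_sims - pos_sim).sum()))
```

Adding more violating negatives strictly increases the loss, which is exactly the property that lets one tuplet stand in for many separate triplet comparisons.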
N-pair loss for efficient deep metric learning
• Batch construction: draw N pairs of examples (x_1, x_1+), …, (x_N, x_N+) from N different classes.
• For the anchor x_i, the positive example is x_i+; the negative examples are the positives of the other N−1 classes, {x_j+ : j ≠ i}.
• Multi-class N-pair loss (N-pair-mc): each anchor is discriminated against all N−1 negatives jointly.
• One-vs-one N-pair loss (N-pair-ovo): a triplet-style loss is applied to each of the N−1 (anchor, positive, negative) combinations separately.
• Ideally, the loss function would incorporate examples from every class at once, but this is usually unattainable for large-scale deep metric learning due to the memory bottleneck of the neural-network-based embedding.

Hard negative class mining
• 1. Evaluate embedding vectors: randomly choose a large number C of output classes; for each class, pass a few (one or two) randomly chosen examples through the network to extract their embedding vectors.
• 2. Select negative classes: select one class at random from the C classes of step 1; then greedily add the new class that violates the triplet constraint the most with respect to the classes selected so far, until N classes are reached. Ties are broken randomly.
• 3. Finalize the N-pair: draw two examples from each class selected in step 2.

Experimental Results
• Fine-grained visual object recognition and verification.
• Distance metric learning for unseen object recognition [21].
[21] Song H O, Xiang Y, Jegelka S, et al. Deep metric learning via lifted structured feature embedding. arXiv preprint arXiv:1511.06452, 2015.
• Face verification and identification: the networks are trained on the WebFace database, which consists of 494,414 images of 10,575 identities; the embedding networks trained with the different metric learning objectives are evaluated on the Labeled Faces in the Wild (LFW) database.
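The three-step hard negative class mining procedure described above can be sketched in NumPy as follows. This is my own illustration, not the paper's code: the violation score used here (dot-product similarity between mean class embeddings, higher meaning a harder negative) is an assumed proxy for "violates the triplet constraint the most", and the function name is hypothetical.

```python
import numpy as np

def mine_negative_classes(embeddings_by_class, n_classes, rng=None):
    """Greedy hard negative class selection (step 2 of the procedure).

    embeddings_by_class: dict mapping class id -> (k, d) array of the
    few embedding vectors extracted for that class in step 1.
    Returns n_classes class ids: a random seed class plus classes that
    are most similar (hardest) w.r.t. the classes chosen so far.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Represent each class by the mean of its few sampled embeddings.
    centers = {c: np.asarray(e, dtype=float).mean(axis=0)
               for c, e in embeddings_by_class.items()}
    # Start from one randomly selected class.
    selected = [rng.choice(list(centers))]
    while len(selected) < n_classes:
        best, best_score = None, -np.inf
        for c in centers:
            if c in selected:
                continue
            # Assumed violation proxy: max similarity to any selected class
            # (random tie-breaking from the slides is omitted for brevity).
            score = max(float(centers[c] @ centers[s]) for s in selected)
            if score > best_score:
                best, best_score = c, score
        selected.append(best)
    return selected
```

Step 3 would then draw two examples from each selected class to form the N-pair batch.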
Deep Relative Distance Learning: Tell the Difference Between Similar Vehicles (Liu H, Tian Y, Yang Y, et al. CVPR 2016)

Vehicle re-identification task
• Vehicle re-identification is the problem of identifying the same vehicle across different surveillance camera views.

Contributions
• A new vehicle re-identification dataset named "VehicleID", which includes over 200,000 images of about 26,000 vehicles.
• An end-to-end framework, DRDL, suited for both vehicle retrieval and vehicle re-identification tasks.

Framework
• Overall framework of the model for vehicle re-identification.

Triplet Loss
• There are special cases in which the triplet loss may judge falsely when processing randomly selected triplet units.

Coupled Clusters Loss
• Given the positive set X^p = {x_1^p, …, x_{N_p}^p} and the negative set X^n = {x_1^n, …, x_{N_n}^n}, it is assumed that samples belonging to the same identity should lie around a common center point in the d-dimensional Euclidean space.
• The center point is estimated as the mean of all positive samples, and the relative distance relationship requires every positive sample to lie closer to this center than the nearest negative sample does.
• In the coupled clusters loss, x_*^n denotes the negative sample nearest to the center point.
• Advantages of the coupled clusters loss: distances are measured between samples and a cluster center, which determines both the distances obtained and the direction in which samples are moved; all positive samples that are not close enough to the center are guaranteed to move closer; and selecting the nearest negative sample x_*^n prevents the relative distance relationship Eq.(5) from being satisfied too easily, compared with a randomly selected negative reference.
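The coupled clusters loss itself was an equation image in the original slides. The sketch below is reconstructed from the description above; the exact distance form (squared Euclidean), the margin value, and the function name are assumptions of this illustration, not taken from the paper:

```python
import numpy as np

def coupled_clusters_loss(positives, negatives, margin=1.0):
    """Coupled clusters loss over one positive set and one negative set.

    positives: (Np, d) embeddings of the same identity; negatives:
    (Nn, d) embeddings of other identities. Positives are pulled toward
    their mean center c_p, and the negative nearest to c_p (x_*^n)
    serves as the reference that must stay at least `margin` farther
    from c_p than each positive sample.
    """
    positives = np.asarray(positives, dtype=float)
    negatives = np.asarray(negatives, dtype=float)
    center = positives.mean(axis=0)                     # c_p: estimated cluster center
    pos_d2 = ((positives - center) ** 2).sum(axis=1)    # ||f(x_i^p) - c_p||^2
    nearest_neg_d2 = ((negatives - center) ** 2).sum(axis=1).min()  # to x_*^n
    # Hinge per positive: samples not close enough to the center move closer.
    return float(np.maximum(0.0, pos_d2 + margin - nearest_neg_d2).mean())
```

Using the cluster center rather than a single randomly chosen anchor is what gives every positive sample a consistent target to move toward.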
Mixed Difference Network Structure
• There is a small but quite important difference between identifying a specific vehicle and identifying a specific person: two vehicles running on the road may have the same visual appearance if they belong to the same vehicle model, with only some special markers available to distinguish them.
• The distance is therefore measured at two levels:
① whether the two images belong to the same vehicle model;
② whether they show the same vehicle.

VehicleID Dataset
• The "VehicleID" dataset contains 221,763 images of 26,267 vehicles in total.

Experiments
• Vehicle model verification
• Vehicle retrieval
• Vehicle re-identification

THANK YOU