Efficient Estimation of Word Representations in Vector Space
Topics
o Language Models in NLP
o Markov Models (n-gram model)
o Distributed Representation of Words
o Motivation for the word vector model of data
o Feedforward Neural Network Language Model (Feedforward NNLM)
o Recurrent Neural Network Language Model (Recurrent NNLM)
o Continuous Bag of Words Model
o Continuous Skip-gram Model
o Results
o References
n-gram model for NLP
o Traditional NLP models predict the next word given the previous n − 1 words; this is known as the n-gram model.
o An n-gram model defines the probability of a word w given the previous words x_1, x_2, …, x_{n−1}, using an (n − 1)-th order Markov assumption.
o Mathematically, the parameter is
  q(w | x_1, x_2, …, x_{n−1}) = count(w, x_1, x_2, …, x_{n−1}) / count(x_1, x_2, …, x_{n−1})
  where w, x_1, x_2, …, x_{n−1} ∈ V and V is a vocabulary of fixed size.
o The above model is a maximum likelihood estimate (a small counting sketch follows below).
o The probability of any sentence is obtained by multiplying the n-gram probabilities of its words.
o The estimates can be smoothed using linear interpolation or discounting methods.
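As an illustration (not from the slides), the maximum likelihood estimate above can be computed directly from corpus counts. A minimal Python sketch, assuming a hypothetical tokenized corpus and a trigram model (n = 3):

from collections import Counter

def train_ngram(tokens, n):
    """Collect counts of n-grams and of their (n-1)-word contexts."""
    ngrams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    contexts = Counter(tuple(tokens[i:i + n - 1]) for i in range(len(tokens) - n + 1))
    return ngrams, contexts

def q(w, context, ngrams, contexts):
    """q(w | x_1 ... x_{n-1}) = count(x_1 ... x_{n-1}, w) / count(x_1 ... x_{n-1})."""
    if contexts[context] == 0:
        return 0.0
    return ngrams[context + (w,)] / contexts[context]

# Toy usage on a hypothetical corpus
tokens = "the cat sat on the mat and the cat sat on the hat".split()
ngrams, contexts = train_ngram(tokens, 3)
print(q("on", ("cat", "sat"), ngrams, contexts))   # 1.0 in this toy corpus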
Drawbacks associated with n-gram models
o Curse of dimensionality: a large number of parameters has to be learned even with a small vocabulary.
o An n-gram model operates in a discrete space, so it is difficult to generalize its parameters; generalization is easier when the model operates in a continuous space.
o Simply scaling up n-gram models does not yield the expected performance improvement when only limited training data is available for the vocabulary.
o n-gram models do not perform well on word similarity tasks.
Distributed representation of words as vectors
o Associate with each word in the vocabulary a distributed word feature vector in R^m, e.g.
  genesis → [0.537, 0.299, 0.098, …, 0.624]   (an m-dimensional vector)
o A vocabulary V of size |V| will therefore have |V| × m free parameters, which need to be learned using some learning algorithm (a lookup-table sketch follows below).
o These distributed feature vectors can either be learned in an unsupervised fashion as part of a pre-training procedure, or in a supervised way.
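A minimal sketch of such a |V| × m lookup table, assuming a tiny hypothetical vocabulary and m = 4; the values are random placeholders standing in for learned parameters:

import numpy as np

vocab = ["genesis", "king", "queen", "man", "woman"]   # hypothetical vocabulary V
word_to_id = {w: i for i, w in enumerate(vocab)}
m = 4                                                  # feature vector dimension

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), m))          # |V| x m free parameters

def vector(word):
    """Look up the distributed feature vector of a word (one row of the |V| x m matrix)."""
    return embeddings[word_to_id[word]]

print(vector("genesis"))                               # an m-dimensional real vector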
Why a word vector model?
o The model is based on continuous real-valued variables, so the probability distributions learned by generative models are smooth functions.
o Therefore, unlike in n-gram models, a word sequence that never occurs in the training corpus is not a big issue; generalization is better with this approach.
o Multiple degrees of similarity: similarity between words goes beyond basic syntactic and semantic regularities. For example (see the sketch after this list):
  vector(King) − vector(Man) + vector(Woman) ≈ vector(Queen)
  vector(Paris) − vector(France) + vector(Italy) ≈ vector(Rome)
o Vector models are easier to train on unsupervised data.
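A sketch of the analogy arithmetic; it reuses the hypothetical vocab, vector and numpy setup from the previous snippet and picks the nearest word by cosine similarity. With trained word2vec vectors (rather than random placeholders) the answer would be "queen":

def nearest(target, exclude=()):
    """Vocabulary word whose vector has the highest cosine similarity to `target`."""
    best, best_sim = None, -1.0
    for w in vocab:
        if w in exclude:
            continue
        v = vector(w)
        sim = float(v @ target) / (np.linalg.norm(v) * np.linalg.norm(target) + 1e-12)
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# vector(King) - vector(Man) + vector(Woman) ≈ vector(Queen)
query = vector("king") - vector("man") + vector("woman")
print(nearest(query, exclude={"king", "man", "woman"}))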
Learning distributed word vector representations
o Feedforward Neural Network Language Model: the joint probability distribution of word sequences is learned together with the word feature vectors using a feedforward neural network.
o Recurrent Neural Network Language Model: an NNLM based on a recurrent neural network.
o Continuous Bag of Words: based on a log-linear classifier whose input is the average of the past and future word vectors; the goal is to predict the current word from its surrounding context.
o Continuous Skip-gram Model: also based on a log-linear classifier, but here the model tries to predict the past and future words surrounding a given word.
Feedforward Neural Network Language Model
o Initially proposed by Yoshua Bengio et al.
o It is related to the n-gram language model, as it aims to learn the probability function of word sequences of length n.
o The input is the concatenated feature vectors of the words w_{n−1}, w_{n−2}, …, w_2, w_1, and the training criterion is to predict the word w_n.
o The output of the model is the estimated probability of the given sequence of n words.
o The neural network architecture consists of a projection layer, a hidden layer of neurons, an output layer, and a softmax function to evaluate the joint probability distribution of words.
[Figure: Feedforward NNLM architecture. Words w_1, w_2, …, w_{n−1} from the training corpus are mapped through a lookup table of word vectors, concatenated into the input projection layer, passed through a hidden layer, and fed to an output layer whose softmax function yields P(w_n | w_{n−1}, …, w_2, w_1).]
Feedforward NNLM
o A fairly large model in terms of free parameters.
o The neural network weights amount to (n − 1) × m × H + H × |V| parameters (a forward-pass sketch with these shapes follows below).
o The training criterion is to predict the n-th word.
o Trained with forward propagation and backpropagation using mini-batch gradient descent.
o The number of outputs to evaluate can be reduced to about log_2 |V| using hierarchical softmax layers, which significantly reduces the training time of the model.
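A shape-level numpy sketch of one forward pass through the feedforward NNLM, assuming hypothetical sizes for |V|, m, the hidden layer H and the context length n − 1; training (backpropagation, mini-batch gradient descent, hierarchical softmax) is omitted:

import numpy as np

V, m, H, n = 10_000, 100, 500, 5                 # hypothetical |V|, m, H and n
rng = np.random.default_rng(0)

C   = rng.normal(scale=0.1, size=(V, m))            # lookup table of word vectors
W_h = rng.normal(scale=0.1, size=((n - 1) * m, H))  # projection -> hidden: (n-1) x m x H weights
W_o = rng.normal(scale=0.1, size=(H, V))            # hidden -> output: H x |V| weights

def forward(context_ids):
    """Return P(w_n | w_{n-1} ... w_1) over the whole vocabulary."""
    x = C[context_ids].reshape(-1)               # concatenated input vector of the n-1 words
    h = np.tanh(x @ W_h)                         # hidden layer
    scores = h @ W_o                             # output layer
    scores -= scores.max()                       # for numerical stability
    p = np.exp(scores)
    return p / p.sum()                           # softmax

probs = forward(np.array([12, 7, 3, 901]))       # hypothetical ids of w_1 ... w_{n-1}
print(probs.shape, round(probs.sum(), 6))        # (10000,) 1.0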
Recurrent Neural Network Language Model
o Initially implemented by Tomas Mikolov, probably inspired by Yoshua Bengio's seminal work on the NNLM.
o Uses a recurrent neural network, in which the input layer consists of the current word vector together with the hidden-layer values computed for the previous word.
o The training objective is to predict the current word.
o Contrary to the feedforward NNLM, it keeps building up a history of the previously seen words, so the context window of analysis is variable.
[Figure: Recurrent NNLM architecture. The input layer combines the vector of the current word w_t, taken from a lookup table of word vectors over the training corpus, with context(t), the hidden-layer state from the previous step; the hidden layer feeds an output layer with a softmax function.]
Recurrent NNLM
o Requires fewer hidden units than a feedforward NNLM, although their number may have to grow as the vocabulary grows.
o Stochastic gradient descent with backpropagation is used to train the model over several epochs (a single-step sketch follows below).
o The number of outputs to evaluate can be reduced to about log_2 |V| using hierarchical softmax layers.
o Recurrent NNLMs can reduce perplexity by as much as a factor of two compared with n-gram models.
o In practice, recurrent NNLM models are much faster to train than feedforward NNLM models.
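A single-step numpy sketch of the recurrent NNLM, with hypothetical sizes; the hidden state carries the variable-length history (context(t) in the figure above), and training with stochastic gradient descent/backpropagation is omitted:

import numpy as np

V, m, H = 10_000, 100, 200                      # hypothetical |V|, word vector size, hidden units
rng = np.random.default_rng(0)

C   = rng.normal(scale=0.1, size=(V, m))        # lookup table of word vectors
W_x = rng.normal(scale=0.1, size=(m, H))        # current word -> hidden
W_r = rng.normal(scale=0.1, size=(H, H))        # previous hidden state -> hidden (recurrence)
W_o = rng.normal(scale=0.1, size=(H, V))        # hidden -> output

def step(word_id, h_prev):
    """One recurrent step: new hidden state and softmax over the vocabulary."""
    h = np.tanh(C[word_id] @ W_x + h_prev @ W_r)
    scores = h @ W_o
    scores -= scores.max()
    p = np.exp(scores)
    return h, p / p.sum()

h = np.zeros(H)
for w in [12, 7, 3]:                            # hypothetical ids of a word sequence
    h, probs = step(w, h)
print(probs.shape)                              # (10000,)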
Continuous Bag of Words
o Similar to the feedforward NNLM but with no hidden layer: the model consists only of an input (projection) layer and an output layer.
o Words from the past and the future of a sequence are given as input, and the model is trained to predict the current word.
o Owing to its simplicity, this model can be trained on a huge amount of data in a short time compared with the other neural network models.
o In effect, the model estimates the current word given its context or sentence (see the sketch after the figure below).
[Figure: Continuous Bag of Words architecture. The surrounding words w_{n−4}, …, w_{n−1} and w_{n+1}, …, w_{n+4} from the training corpus are mapped through a shared lookup table of word vectors, their vectors are averaged in the input projection layer, and an output layer with a softmax function predicts the current word w_n.]
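A minimal numpy sketch of the CBOW forward pass shown in the figure, assuming hypothetical sizes and a context of four words on each side; the averaged context vectors feed a single log-linear (softmax) output layer:

import numpy as np

V, m = 10_000, 100                               # hypothetical |V| and word vector size
rng = np.random.default_rng(0)

C   = rng.normal(scale=0.1, size=(V, m))         # shared lookup table for the context words
W_o = rng.normal(scale=0.1, size=(m, V))         # projection -> output weights

def cbow_forward(context_ids):
    """Predict P(w_n | surrounding words) from the average of the context word vectors."""
    x = C[context_ids].mean(axis=0)              # average of past and future word vectors
    scores = x @ W_o
    scores -= scores.max()
    p = np.exp(scores)
    return p / p.sum()

# Hypothetical ids of w_{n-4} ... w_{n-1} and w_{n+1} ... w_{n+4}
probs = cbow_forward(np.array([5, 17, 42, 8, 23, 99, 3, 61]))
print(int(probs.argmax()))                       # id of the most probable current word w_n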
Continuous Skip-gram Model
o Similar to the continuous bag of words model, but with the roles of input and output reversed.
o Here the model attempts to predict the words around the current word (see the sketch after the figure below).
o The input layer consists of the word vector of a single word, while multiple output layers, one per surrounding position, are connected to the input layer.
[Figure: Continuous Skip-gram architecture. A single word w_n from the training corpus is mapped through the lookup table of word vectors to the input projection layer; several softmax output layers, one per surrounding position w_{n−4}, …, w_{n+1}, …, w_{n+4}, predict the context words.]
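A matching numpy sketch of the skip-gram forward pass, with hypothetical sizes; a single input word vector produces a softmax distribution that is scored against each surrounding context word during training:

import numpy as np

V, m = 10_000, 100                               # hypothetical |V| and word vector size
rng = np.random.default_rng(0)

C   = rng.normal(scale=0.1, size=(V, m))         # lookup table: input word vectors
W_o = rng.normal(scale=0.1, size=(m, V))         # projection -> output weights, shared across positions

def skipgram_forward(word_id):
    """Distribution over the vocabulary predicted from a single input word w_n."""
    x = C[word_id]                               # single word vector
    scores = x @ W_o
    scores -= scores.max()
    p = np.exp(scores)
    return p / p.sum()

probs = skipgram_forward(42)                     # hypothetical id of the current word w_n
print(probs.shape)                               # (10000,), scored against w_{n-4} ... w_{n+4} in training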
Analyzing language models
o Perplexity: a measure of how well a language model fits the underlying probability distribution of the data (see the sketch after this list).
o Word error rate: the percentage of words misrecognized by a system that uses the language model.
o Semantic analysis: deriving semantic analogies of word pairs, filling in a sentence with the most logical word choice, etc. These tests are especially used for measuring the performance of word vectors. For example: Berlin : Germany :: Toronto : Canada.
o Syntactic analysis: for a language model this might be the construction of a syntactically correct parse tree; for testing word vectors one might look at predicting syntactic analogies such as possibly : impossibly :: ethical : unethical.
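A small sketch of the perplexity computation, assuming a hypothetical model_prob(word, history) interface that returns the model's probability for a word given its history:

import math

def perplexity(test_tokens, model_prob):
    """exp of the average negative log-probability the model assigns to the test data."""
    log_prob = 0.0
    for i, w in enumerate(test_tokens):
        log_prob += math.log(model_prob(w, test_tokens[:i]))   # P(w | history)
    return math.exp(-log_prob / len(test_tokens))

# Toy check: a uniform model over a 10,000-word vocabulary has perplexity 10,000
print(perplexity(["the", "cat", "sat"], lambda w, history: 1 / 10_000))   # ≈ 10000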
Results
[Figures and tables not reproduced in this transcript:]
o Perplexity of different models tested on the Brown Corpus
o Perplexity comparison of different models on the Penn Treebank
o Sentence completion task: WSJ Kaldi rescoring
o Semantic-syntactic tests: different models with 640-dimensional word vectors
o Training time comparison of different models
o Microsoft Research Sentence Completion Challenge
o Complex learned relationships
References
o [1] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013.
o [2] Y. Bengio, R. Ducharme, and P. Vincent. A Neural Probabilistic Language Model. Journal of Machine Learning Research, 3:1137–1155, 2003.
o [3] T. Mikolov, J. Kopecký, L. Burget, O. Glembek, and J. Černocký. Neural Network Based Language Models for Highly Inflective Languages. In Proceedings of ICASSP, 2009.