Journal of Machine Learning Research 14 (2013) 867-897. Submitted 8/12; Revised 2/13; Published 3/13.

A Widely Applicable Bayesian Information Criterion

Sumio Watanabe (swatanab@dis.titech.ac.jp)
Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, Mailbox G5-19, 4259 Nagatsuta, Midori-ku, Yokohama, Japan 226-8502

Editor: Manfred Opper

Abstract

A statistical model or a learning machine is called regular if the map taking a parameter to a probability distribution is one-to-one and if its Fisher information matrix is always positive definite; otherwise, it is called singular. In regular statistical models, the Bayes free energy, defined as the minus logarithm of the Bayes marginal likelihood, can be asymptotically approximated by the Schwarz Bayesian information criterion (BIC), whereas in singular models this approximation does not hold. Recently, it was proved that the Bayes free energy of a singular model is asymptotically given by a generalized formula in which a birational invariant, the real log canonical threshold (RLCT), replaces half the number of parameters in BIC. Theoretical values of RLCTs for several statistical models are now being discovered by algebraic geometrical methods. However, it has been difficult to estimate the Bayes free energy from training samples alone, because an RLCT depends on the unknown true distribution. In the present paper, we define a widely applicable Bayesian information criterion (WBIC) as the average of the log likelihood function over the posterior distribution with inverse temperature 1/log n, where n is the number of training samples. We mathematically prove that WBIC has the same asymptotic expansion as the Bayes free energy, even if the statistical model is singular for or unrealizable by the true distribution. Since WBIC can be numerically calculated without any information about the true distribution, it is a generalization of BIC to singular statistical models.

Keywords: Bayes marginal likelihood, widely applicable Bayesian information criterion

©2013 Sumio Watanabe.

1. Introduction

A statistical model or a learning machine is called regular if the map taking a parameter to a probability distribution is one-to-one and if its Fisher information matrix is always positive definite; otherwise, it is called singular. Many statistical models and learning machines are singular rather than regular, for example artificial neural networks, normal mixtures, binomial mixtures, reduced rank regressions, Bayesian networks, and hidden Markov models. In general, if a statistical model contains hierarchical layers, hidden variables, or grammatical rules, then it is singular. In other words, if a statistical model is devised to extract hidden structure from a random phenomenon, then it naturally becomes singular. If a statistical model is singular, the likelihood function cannot be approximated by any normal distribution, so that neither AIC, BIC, nor MDL can be used for statistical model evaluation. Hence constructing singular learning theory is an important issue in both statistics and learning theory.

A statistical model or a learning machine is represented by a probability density function p(x|w) of x ∈ R^N for a parameter w ∈ W ⊂ R^d, where W is the set of all parameters. A prior probability density function on W is denoted by ϕ(w).
Assume that training samples X_1, X_2, ..., X_n are independently subject to a probability density function q(x), called the true distribution. The log loss function, or minus log likelihood function, is defined by

\[ L_n(w) = -\frac{1}{n}\sum_{i=1}^{n}\log p(X_i|w). \tag{1} \]

The Bayes free energy F is defined by

\[ F = -\log\int \prod_{i=1}^{n} p(X_i|w)\,\varphi(w)\,dw. \tag{2} \]

The value F is the minus logarithm of the marginal likelihood of a model and a prior, hence it plays an important role in statistical model evaluation. In fact, a model or a prior is often optimized by maximizing the Bayes marginal likelihood (Good, 1965), which is equivalent to minimizing the Bayes free energy. If a statistical model is regular, then the posterior distribution can be asymptotically approximated by a normal distribution, with the result that

\[ F = nL_n(\hat w) + \frac{d}{2}\log n + O_p(1), \tag{3} \]

where \hat w is the maximum likelihood estimator, d is the dimension of the parameter space, and n is the number of training samples. The right-hand side of Equation (3) is the well-known Schwarz Bayesian information criterion (BIC) (Schwarz, 1978). If a statistical model is singular, then the posterior distribution differs from any normal distribution, hence the Bayes free energy cannot in general be approximated by BIC. Recently, it was proved that, even if a statistical model is singular,

\[ F = nL_n(w_0) + \lambda\log n + O_p(\log\log n), \]

where w_0 is the parameter that minimizes the Kullback-Leibler distance from the true distribution to the statistical model, and λ > 0 is a rational number called the real log canonical threshold (RLCT) (Watanabe, 1999, 2001a, 2009, 2010a). The birational invariant RLCT, first found in research on singular Schwartz distributions (Gelfand and Shilov, 1964), plays an important role in algebraic geometry and algebraic analysis (Bernstein, 1972; Sato and Shintani, 1974; Kashiwara, 1976; Varchenko, 1976; Kollár, 1997; Saito, 2007). In algebraic geometry, it represents a relative property of the singularities of a pair of algebraic varieties. In statistical learning theory, it governs the asymptotic behavior of the Bayes free energy and the generalization loss, which are determined by the pair consisting of an optimal parameter set and the parameter set W. If a triple of a true distribution, a statistical model, and a prior distribution is fixed, then there is an algebraic geometrical procedure that enables us to find its RLCT (Hironaka, 1964). In fact, RLCTs for several statistical models and learning machines are being discovered. For example, RLCTs have been studied for artificial neural networks (Watanabe, 2001b; Aoyagi and Nagata, 2012), normal mixtures (Yamazaki and Watanabe, 2003), reduced rank regressions (Aoyagi and Watanabe, 2005), Bayesian networks with hidden variables (Rusakov and Geiger, 2005; Zwiernik, 2010, 2011), binomial mixtures, Boltzmann machines (Yamazaki and Watanabe, 2005), and hidden Markov models. To study singular statistical models, a new algebraic geometrical theory has been constructed (Watanabe, 2009; Drton et al., 2009; Lin, 2011; Király et al., 2012). Based on such research, the theoretical behavior of the Bayes free energy has been clarified. These results are very important because they show quantitatively how singular models differ from regular ones. However, in general an RLCT depends on the unknown true distribution. In practical applications we do not know the true distribution, hence we cannot directly apply the theoretical results to statistical model evaluation.
In the present paper, in order to estimate the Bayes free energy without any information about the true distribution, we propose a widely applicable Bayesian information criterion (WBIC) defined by

\[ \mathrm{WBIC} = \mathbb{E}^{\beta}_{w}[nL_n(w)], \qquad \beta = \frac{1}{\log n}, \tag{4} \]

where E^β_w[ ] denotes the expectation over the posterior distribution on W at inverse temperature β, defined for an arbitrary integrable function G(w) by

\[ \mathbb{E}^{\beta}_{w}[G(w)] = \frac{\int G(w)\prod_{i=1}^{n} p(X_i|w)^{\beta}\,\varphi(w)\,dw}{\int \prod_{i=1}^{n} p(X_i|w)^{\beta}\,\varphi(w)\,dw}. \tag{5} \]

In this definition, β > 0 is called the inverse temperature. The main purpose of this paper is to show

\[ F = \mathrm{WBIC} + O_p(\sqrt{\log n}). \]

To establish mathematical support for WBIC, we prove three theorems. Firstly, in Theorem 3 we show that there exists a unique inverse temperature β* which satisfies

\[ F = \mathbb{E}^{\beta^{*}}_{w}[nL_n(w)]. \]

The optimal inverse temperature β* is a random variable which satisfies the convergence in probability β* log n → 1 as n → ∞. Secondly, in Theorem 4 we prove that, even if a statistical model is singular,

\[ \mathrm{WBIC} = nL_n(w_0) + \lambda\log n + O_p(\sqrt{\log n}). \]

In other words, WBIC has the same asymptotic behavior as the Bayes free energy even if the statistical model is singular. Lastly, in Theorem 5 we prove that, if a statistical model is regular, then

\[ \mathrm{WBIC} = nL_n(\hat w) + \frac{d}{2}\log n + O_p(1), \]

which shows that WBIC coincides with BIC for regular statistical models. Moreover, the computational cost of numerically calculating WBIC is expected to be far smaller than that of the Bayes free energy.

Variable | Name | Equation Number
F | Bayes free energy | Equation (2)
G | Generalization loss | Equation (29)
WBIC | WBIC | Equation (4)
WAIC | WAIC | Equation (30)
E^β_w[ ] | posterior average | Equation (5)
β* | optimal inverse temperature | Equation (18)
L(w) | log loss function | Equation (6)
L_n(w) | empirical loss | Equation (1)
K(w) | average log likelihood ratio | Equation (7)
K_n(w) | empirical log likelihood ratio | Equation (8)
λ | real log canonical threshold | Equation (15)
m | multiplicity | Equation (16)
Q(K(w), ϕ(w)) | parity of model | Equation (17)
(M, g(u), a(x,u), b(u)) | resolution quartet | Theorem 1

Table 1: Variable, Name, and Equation Number

These results show that WBIC is a generalization of BIC to singular statistical models and that RLCTs can be estimated even if the true distribution is unknown. This paper consists of eight sections. In Section 2, we summarize notation. In Section 3, singular learning theory and the standard representation theorem are introduced. The main theorems and corollaries are stated in Section 4 and proved in Section 5; since the purpose of the present paper is to establish the mathematical support of WBIC, Sections 4 and 5 are the main sections. In Section 6, a method for using WBIC in statistical model evaluation is illustrated with an experimental result. Sections 7 and 8 contain the discussion and conclusion.

2. Statistical Models and Notations

In this section, we summarize notation. Table 1 lists the variables, their names, and the equation numbers used in this paper. The average log loss function L(w) and the entropy S of the true distribution are respectively defined by

\[ L(w) = -\int q(x)\log p(x|w)\,dx, \tag{6} \]
\[ S = -\int q(x)\log q(x)\,dx. \]

Then L(w) = S + D(q‖p_w), where D(q‖p_w) is the Kullback-Leibler distance defined by

\[ D(q\|p_w) = \int q(x)\log\frac{q(x)}{p(x|w)}\,dx. \]

Since D(q‖p_w) ≥ 0, we have L(w) ≥ S, and L(w) = S if and only if p(x|w) = q(x).
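Numerically, Equation (4) is just the posterior average of nL_n(w), computed from MCMC draws of the tempered posterior at β = 1/log n. The following is a minimal sketch of that estimator, assuming such draws are already available; the function name and the loglik_matrix layout are illustrative, not from the paper.

```python
import numpy as np

def wbic_from_samples(loglik_matrix):
    """Estimate WBIC = E_w^beta[n L_n(w)] from MCMC draws.

    loglik_matrix: array of shape (R, n); entry [r, i] is log p(X_i | w_r)
    for draws w_1, ..., w_R taken from the *tempered* posterior at
    beta = 1/log(n), i.e. with the likelihood raised to the power beta.
    """
    R, n = loglik_matrix.shape
    # n L_n(w_r) = -sum_i log p(X_i | w_r); then average over draws.
    n_Ln = -loglik_matrix.sum(axis=1)
    return n_Ln.mean()
```

Note that the draws must come from the posterior in which the likelihood is raised to the power β; sampling at β = 1 and importance-reweighting down to β = 1/log n is, in practice, numerically unstable because the weights span many orders of magnitude.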
In this paper, we assume that there exists a parameter w_0 in the interior of W that minimizes L(w),

\[ L(w_0) = \min_{w\in W} L(w), \]

where the interior of a set S is defined as the largest open set contained in S. Note that such a w_0 is not unique in general, because in singular statistical models the map w ↦ p(x|w) is not one-to-one in general. We also assume that, for every w satisfying L(w) = L(w_0), p(x|w) is the same probability density function; let p_0(x) denote this unique density. In general, the set

\[ W_0 = \{w\in W ;\; p(x|w) = p_0(x)\} \]

is not a single point but an analytic or algebraic set with singularities. Let us define the log density ratio function

\[ f(x,w) = \log\frac{p_0(x)}{p(x|w)}, \]

which is equivalent to p(x|w) = p_0(x) exp(−f(x,w)). Two functions K(w) and K_n(w) are respectively defined by

\[ K(w) = \int q(x)\,f(x,w)\,dx, \tag{7} \]
\[ K_n(w) = \frac{1}{n}\sum_{i=1}^{n} f(X_i,w). \tag{8} \]

It immediately follows that L(w) = L(w_0) + K(w) and L_n(w) = L_n(w_0) + K_n(w). The expectation over all sets of training samples X_1, X_2, ..., X_n is denoted by E[ ]; for example, E[L_n(w)] = L(w) and E[K_n(w)] = K(w). The problem of statistical learning is characterized by the log density ratio function f(x,w). In fact,

\[ \mathbb{E}^{\beta}_{w}[nL_n(w)] = nL_n(w_0) + \mathbb{E}^{\beta}_{w}[nK_n(w)], \tag{9} \]
\[ \mathbb{E}^{\beta}_{w}[nK_n(w)] = \frac{\int nK_n(w)\exp(-n\beta K_n(w))\,\varphi(w)\,dw}{\int \exp(-n\beta K_n(w))\,\varphi(w)\,dw}. \tag{10} \]

The main purpose of the present paper is to prove

\[ F = nL_n(w_0) + \mathbb{E}^{\beta}_{w}[nK_n(w)] + O_p(\sqrt{\log n}) \]

for β = 1/log n.

Definition. (1) If q(x) = p_0(x), then q(x) is said to be realizable by p(x|w); otherwise it is said to be unrealizable. (2) If the set W_0 consists of a single element w_0 and if the Hessian matrix

\[ J_{ij}(w) = \frac{\partial^2 L}{\partial w_i\,\partial w_j}(w) \tag{11} \]

at w = w_0 is strictly positive definite, then q(x) is said to be regular for p(x|w); otherwise it is said to be singular for p(x|w). Note that the matrix J(w) equals the Hessian matrix of K(w), and that J(w_0) equals the Fisher information matrix if the true distribution is realizable by the statistical model. Also note that, if q(x) is realizable by p(x|w), then K(w) is the Kullback-Leibler divergence of q(x) and p(x|w); if q(x) is not realizable by p(x|w), it is not.

3. Singular Learning Theory

In this section we summarize singular learning theory. Throughout the present paper, we assume the following conditions.

Fundamental Conditions.
(1) The set of parameters W is a compact set in R^d whose interior is not empty. Its boundary is defined by finitely many analytic functions π_1(w), π_2(w), ..., π_k(w); in other words,

\[ W = \{w\in\mathbb{R}^d ;\; \pi_1(w)\ge 0,\ \pi_2(w)\ge 0,\ \ldots,\ \pi_k(w)\ge 0\}. \]

(2) The prior distribution satisfies ϕ(w) = ϕ_1(w)ϕ_2(w), where ϕ_1(w) ≥ 0 is an analytic function and ϕ_2(w) > 0 is a C^∞-class function.
(3) Let s ≥ 6 and let

\[ L^s(q) = \Bigl\{ f(x) ;\; \|f\|_s \equiv \Bigl(\int |f(x)|^s\,q(x)\,dx\Bigr)^{1/s} < \infty \Bigr\} \]

be a Banach space. There exists an open set W′ ⊃ W such that the map W′ ∋ w ↦ f(x,w) is an L^s(q)-valued analytic function.
(4) The set W_ε is defined by W_ε = {w ∈ W ; K(w) ≤ ε}. It is assumed that there exist constants ε, c > 0 such that

\[ (\forall w\in W_\varepsilon)\qquad \mathbb{E}_X[f(X,w)] \ge c\,\mathbb{E}_X[f(X,w)^2]. \tag{12} \]

Remark. (1) These conditions allow the set of optimal parameters

\[ W_0 = \{w\in W ;\; p(x|w)=p(x|w_0)\} = \{w\in W ;\; K(w)=0\} \]

to contain singularities, and the Hessian matrix J(w) at w ∈ W_0 need not be positive definite. Therefore K(w) cannot in general be approximated by any quadratic form.
(2) The condition in Equation (12) is satisfied if the true distribution is realizable by, or regular for, the statistical model (Watanabe, 2010a). If the true distribution is unrealizable by and singular for the statistical model, there are examples that do not satisfy this condition. In the present paper, we study the case when Equation (12) is satisfied.

Lemma 1 Assume Fundamental Conditions (1)-(4). Let

\[ \beta = \frac{\beta_0}{\log n}, \]

where β_0 > 0 is a constant, and let 0 ≤ r < 1/2. Then, as n → ∞,

\[ \int_{K(w)\ge 1/n^r} \exp(-n\beta K_n(w))\,\varphi(w)\,dw = o_p(\exp(-\sqrt{n})), \tag{13} \]
\[ \int_{K(w)\ge 1/n^r} nK_n(w)\exp(-n\beta K_n(w))\,\varphi(w)\,dw = o_p(\exp(-\sqrt{n})). \tag{14} \]

The proof of Lemma 1 is given in Section 5. Let ε > 0 be a sufficiently small constant. Lemma 1 shows that integrals outside the region W_ε do not affect the expectation value E^β_w[nK_n(w)] asymptotically, because, as we prove in the following theorems, the integrals over W_ε have larger orders than the integrals outside W_ε. To study the integrals over W_ε, we need algebraic geometrical methods, because the set {w ; K(w) = 0} contains singularities in general. There are many kinds of singularities; however, the following theorem brings all of them into a single standard form.

Theorem 1 (Standard Representation) Assume Fundamental Conditions (1)-(4). Let ε > 0 be a sufficiently small constant. Then there exists a quartet (M, g(u), a(x,u), b(u)) where
(1) M is a d-dimensional real analytic manifold,
(2) g : M → W′_ε is a proper analytic function, where W′_ε is the largest open set contained in W_ε, and

\[ g : \{u\in\mathcal{M} ;\; K(g(u))\ne 0\} \to \{w\in W'_\varepsilon ;\; K(w)\ne 0\} \]

is a bijective map,
(3) a(x,u) is an L^s(q)-valued analytic function,
(4) and b(u) is an infinitely differentiable function satisfying b(u) > 0,
such that the following equations hold in each local coordinate of M:

\[ K(g(u)) = u^{2k}, \]
\[ f(x,g(u)) = u^{k}\,a(x,u), \]
\[ \varphi(w)\,dw = \varphi(g(u))\,|g'(u)|\,du = b(u)\,|u^{h}|\,du, \]

where k = (k_1, k_2, ..., k_d) and h = (h_1, h_2, ..., h_d) are multi-indices of nonnegative integers; at least one k_j is nonzero.

Remark. (1) In this theorem, for u = (u_1, u_2, ..., u_d) ∈ R^d, the notations u^{2k} and |u^h| mean

\[ u^{2k} = u_1^{2k_1}u_2^{2k_2}\cdots u_d^{2k_d}, \qquad |u^{h}| = |u_1^{h_1}u_2^{h_2}\cdots u_d^{h_d}|. \]

The singularity u = 0 of u^{2k} = 0 is said to be normal crossing. Theorem 1 shows that any singularities can be made normal crossing by an analytic map w = g(u).
(2) A map w = g(u) is said to be proper if, for an arbitrary compact set C, g^{-1}(C) is also compact.
(3) The proof of Theorem 1 is given as Theorem 6.1 of Watanabe (2009, 2010a). To prove it, we need the Hironaka resolution of singularities (Hironaka, 1964; Atiyah, 1970), a fundamental theorem of algebraic geometry. The function w = g(u) is often referred to as a resolution map.
(4) In this theorem, the quartet (k, h, a(x,u), b(u)) depends on the local coordinate in general. For a given function K(w), there is a recursive algebraic algorithm that finds a resolution map w = g(u). However, even for a fixed K(w), a resolution map is not unique, so the quartet (M, g(u), a(x,u), b(u)) is not unique either.

Definition. (Real Log Canonical Threshold) Let {U_α ; α ∈ A} be a system of local coordinates of a manifold M,

\[ \mathcal{M} = \bigcup_{\alpha\in\mathcal{A}} U_\alpha. \]

The real log canonical threshold (RLCT) is defined by

\[ \lambda = \min_{\alpha\in\mathcal{A}}\ \min_{j=1,\ldots,d}\ \frac{h_j+1}{2k_j}, \tag{15} \]

where we set 1/k_j = ∞ if k_j = 0.
The multiplicity m is defined by

\[ m = \max_{\alpha\in\mathcal{A}}\ \#\Bigl\{ j ;\; \frac{h_j+1}{2k_j} = \lambda \Bigr\}, \tag{16} \]

where #S denotes the number of elements of a set S. The concept of the RLCT is well known in algebraic geometry and statistical learning theory. In the following definition we introduce the parity of a statistical model.

Definition. (Parity of a Statistical Model) The support of ϕ(g(u)) is defined by

\[ \mathrm{supp}\,\varphi(g(u)) = \overline{\{u\in\mathcal{M} ;\; g(u)\in W_\varepsilon,\ \varphi(g(u)) > 0\}}, \]

where the bar denotes the closure of a set. A local coordinate U_α is said to be an essential local coordinate if both equations

\[ \lambda = \min_{j=1,\ldots,d}\frac{h_j+1}{2k_j}, \qquad m = \#\{j ;\; (h_j+1)/(2k_j) = \lambda\} \]

hold in that coordinate. The set of all essential local coordinates is denoted by {U_α ; α ∈ A*}. If, for every essential local coordinate, there exist δ > 0 and a natural number j in the set {j ; (h_j+1)/(2k_j) = λ} such that
(1) k_j is an odd number, and
(2) {(0, ..., 0, u_j, 0, ..., 0) ; |u_j| < δ} ⊂ supp ϕ(g(u)),
then we define Q(K(g(u)), ϕ(g(u))) = 1; otherwise Q(K(g(u)), ϕ(g(u))) = 0. If there exists a resolution map w = g(u) such that Q(K(g(u)), ϕ(g(u))) = 1, then we define

\[ Q(K(w),\varphi(w)) = 1; \tag{17} \]

otherwise Q(K(w), ϕ(w)) = 0. If Q(K(w), ϕ(w)) = 1, the parity of the statistical model is said to be odd; otherwise it is even.

It was proved in Theorem 2.4 of Watanabe (2009) that, for a given set (q, p, ϕ), λ and m are independent of the choice of resolution map. Such a value is called a birational invariant; the RLCT is a birational invariant.

Lemma 2 If the true distribution q(x) is realizable by the statistical model p(x|w), then the value Q(K(g(u)), ϕ(g(u))) is independent of the choice of resolution map w = g(u).

The proof of this lemma is given in Section 5. Lemma 2 shows that, if the true distribution is realizable by the statistical model, then Q(K(g(u)), ϕ(g(u))) is a birational invariant. The present paper conjectures that Q(K(g(u)), ϕ(g(u))) is a birational invariant in general. By Lemma 2, this conjecture would follow from the proposition that, for an arbitrary nonnegative analytic function K(w), there exist q(x) and p(x|w) such that K(w) is the Kullback-Leibler distance from q(x) to p(x|w).

Example. Let w = (a, b, c) ∈ R^3 and

\[ K(w) = (ab+c)^2 + a^2b^4, \]

which is the Kullback-Leibler distance of a neural network model in Example 1.6 of Watanabe (2009), where the true distribution is realizable by the statistical model. The prior ϕ(w) is some strictly positive function on a sufficiently large compact set. In a singular statistical model, compactness of the parameter set is necessary in general, because, if the parameter set is not compact, the integration on a neighborhood of infinity is not easy to treat mathematically. Let a system of local coordinates be U_i = {(a_i, b_i, c_i) ∈ R^3} (i = 1, 2, 3, 4). A resolution map g : U_1 ∪ U_2 ∪ U_3 ∪ U_4 → R^3 is defined in each local coordinate by

a = a_1 c_1,  b = b_1,      c = c_1,
a = a_2,      b = b_2 c_2,  c = a_2(1 − b_2)c_2,
a = a_3,      b = b_3,      c = a_3 b_3 (b_3 c_3 − 1),
a = a_4,      b = b_4 c_4,  c = a_4 b_4 c_4 (c_4 − 1).

This map g is made of recursive blowing-ups whose centers are smooth manifolds, hence it is one-to-one as a map g : {u ; K(g(u)) > 0} → {w ; K(w) > 0}. Then

\[ K(a,b,c) = c_1^2\{(a_1b_1+1)^2 + a_1^2b_1^4\} = a_2^2c_2^2(1+b_2^2c_2^2) = a_3^2b_3^4(c_3^2+1) = a_4^2b_4^2c_4^4(1+b_4^2). \]

Therefore integration over W can be computed as integration over the manifold. The Jacobian determinant |g′(u)| is

\[ |g'(u)| = |c_1| = |a_2c_2| = |a_3b_3^2| = |a_4b_4c_4|^2. \]
In other words, in each local coordinate,

(k_1, k_2, k_3) = (0, 0, 1), (1, 0, 1), (1, 2, 0), (1, 1, 2),
(h_1, h_2, h_3) = (0, 0, 1), (1, 0, 1), (1, 2, 0), (2, 2, 2).

Therefore

\[ \Bigl(\frac{h_1+1}{2k_1},\ \frac{h_2+1}{2k_2},\ \frac{h_3+1}{2k_3}\Bigr) = (\infty,\infty,1),\ (1,\infty,1),\ \Bigl(1,\tfrac{3}{4},\infty\Bigr),\ \Bigl(\tfrac{3}{2},\tfrac{3}{2},\tfrac{3}{4}\Bigr). \]

The smallest value among them is 3/4, which equals λ, and the multiplicity is m = 1. The essential local coordinates are U_3 and U_4. In U_3 and U_4, the sets {u_j ; (h_j+1)/(2k_j) = 3/4} are respectively {u_2} and {u_3}, and 2k_j = 4 in both cases. Consequently both k_j are even, so the parity is Q(K(w), ϕ(w)) = 0.

Lemma 3 Assume that Fundamental Conditions (1)-(4) are satisfied and that the true distribution q(x) is regular for the statistical model p(x|w). If w_0 is contained in the interior of W and if ϕ(w_0) > 0, then

\[ \lambda = \frac{d}{2}, \qquad m = 1, \]

and Q(K(w), ϕ(w)) = 1. The proof of this lemma is given in Section 5.

Theorem 2 Assume that Fundamental Conditions (1)-(4) are satisfied. Then

\[ F = nL_n(w_0) + \lambda\log n - (m-1)\log\log n + R_n, \]

where λ is the real log canonical threshold, m is its multiplicity, and {R_n} is a sequence of random variables which converges in law to a random variable as n → ∞.

Theorem 2 was proved in previous papers. In the case when q(x) is realizable by and singular for p(x|w), the expectation value of F was derived by algebraic analysis (Watanabe, 2001a). The asymptotic behavior of F as a random variable was shown in Watanabe (2009). These results were generalized to the case that q(x) is unrealizable (Watanabe, 2010a).

Remark. In practical applications, we do not know the true distribution, hence λ and m are unknown and we cannot directly apply Theorem 2 to such cases. The main purpose of the present paper is to provide a new method for estimating F even when the true distribution is unknown.

4. Main Results

In this section, we introduce the main results of the present paper.

Theorem 3 (Unique Existence of the Optimal Inverse Temperature) Assume that L_n(w) is not a constant function of w. Then the following hold.
(1) The value E^β_w[nL_n(w)] is a decreasing function of β.
(2) There exists a unique β* (0 < β* < 1) which satisfies

\[ F = \mathbb{E}^{\beta^{*}}_{w}[nL_n(w)]. \tag{18} \]

Note that in an ordinary statistical model the function L_n(w) is not constant with probability one. The proof of Theorem 3 is given in Section 5. Based on this theorem, we define the optimal inverse temperature.

Definition. The unique parameter β* that satisfies Equation (18) is called the optimal inverse temperature.

In general, the optimal inverse temperature β* depends on the true distribution q(x), the statistical model p(x|w), the prior ϕ(w), and the training samples; hence β* is a random variable. In the present paper, we study its probabilistic behavior. Theorem 4 is the mathematical basis for this purpose.

Theorem 4 (Main Theorem) Assume Fundamental Conditions (1)-(4) and that

\[ \beta = \frac{\beta_0}{\log n}, \]

where β_0 is a constant. Then there exists a random variable U_n such that

\[ \mathbb{E}^{\beta}_{w}[nL_n(w)] = nL_n(w_0) + \frac{\lambda\log n}{\beta_0} + U_n\sqrt{\frac{\lambda\log n}{2\beta_0}} + O_p(1), \]

where λ is the real log canonical threshold and {U_n} is a sequence of random variables with E[U_n] = 0 which converges in law to a Gaussian random variable as n → ∞. Moreover, if the true distribution q(x) is realizable by the statistical model p(x|w), then E[(U_n)^2] < 1. The proof of Theorem 4 is given in Section 5.
Theorem 4 with β_0 = 1 shows that

\[ \mathrm{WBIC} = nL_n(w_0) + \lambda\log n + U_n\sqrt{\frac{\lambda\log n}{2}} + O_p(1), \]

whose first two main terms are equal to those of F in Theorem 2. From Theorem 4 and its proof, three important corollaries are derived.

Corollary 1 If the parity of the statistical model is odd, Q(K(w), ϕ(w)) = 1, then U_n = 0.

Corollary 2 Let β* be the optimal inverse temperature. Then

\[ \beta^{*} = \frac{1}{\log n}\Bigl(1 + \frac{U_n}{\sqrt{2\lambda\log n}}\Bigr) + o_p\Bigl(\frac{1}{\sqrt{\log n}\,\log n}\Bigr). \]

Corollary 3 Let β_1 = β_{01}/log n and β_2 = β_{02}/log n, where β_{01} and β_{02} are positive constants. Then the convergence in probability

\[ \frac{\mathbb{E}^{\beta_1}_{w}[nL_n(w)] - \mathbb{E}^{\beta_2}_{w}[nL_n(w)]}{1/\beta_1 - 1/\beta_2} \to \lambda \tag{19} \]

holds as n → ∞, where λ is the real log canonical threshold.

Proofs of these corollaries are given in Section 5. Note that, if the expectation E^{β_1}_w[ ] is calculated by some numerical method, then E^{β_2}_w[ ] can be estimated from E^{β_1}_w[ ] by

\[ \mathbb{E}^{\beta_2}_{w}[nL_n(w)] = \frac{\mathbb{E}^{\beta_1}_{w}[nL_n(w)\exp(-(\beta_2-\beta_1)nL_n(w))]}{\mathbb{E}^{\beta_1}_{w}[\exp(-(\beta_2-\beta_1)nL_n(w))]}. \tag{20} \]

Therefore the RLCT can be estimated at the same computational cost as WBIC. In Bayes estimation, the posterior distribution is often approximated by some numerical method. If theoretical values of RLCTs are known, we can check the approximated posterior distribution by comparing the theoretical values with the estimated ones.

The well-known Schwarz BIC is defined by

\[ \mathrm{BIC} = nL_n(\hat w) + \frac{d}{2}\log n, \]

where \hat w is the maximum likelihood estimator. WBIC can be understood as a generalization of BIC to singular statistical models, because it satisfies the following theorem.

Theorem 5 If the true distribution q(x) is regular for the statistical model p(x|w), then

\[ \mathrm{WBIC} = nL_n(\hat w) + \frac{d}{2}\log n + o_p(1). \]

The proof of Theorem 5 is given in Section 5. This theorem shows that the difference between WBIC and BIC is smaller than a constant-order term if the true distribution is regular for the statistical model. The theorem holds even if the true distribution q(x) is unrealizable by p(x|w).

Remark. Since the set of parameters W is assumed to be compact, it is proved in Main Theorem 6.4 of Watanabe (2009) that nL_n(w_0) − nL_n(\hat w) is in general a constant-order random variable. If the true distribution is regular for and realizable by the statistical model, its average is asymptotically equal to d/2, where d is the dimension of the parameter. If the true distribution is singular for the statistical model, it is sometimes much larger than d/2, because it is asymptotically equal to the maximum value of a Gaussian process. Whether replacing nL_n(w_0) by nL_n(\hat w) is appropriate or not depends on the statistical model and its singularities (Drton, 2009).

5. Proofs of Main Results

In this section, we prove the main theorems and corollaries.

5.1 Proof of Lemma 1

Let us define an empirical process

\[ \eta_n(w) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bigl(K(w) - f(X_i,w)\bigr). \]

It was proved in Theorems 5.9 and 5.10 of Watanabe (2009) that η_n(w) converges in law to a random process and that

\[ \|\eta_n\| \equiv \sup_{w\in W}|\eta_n(w)| \]

also converges in law to a random variable. If K(w) ≥ 1/n^r, then

\[ nK_n(w) = nK(w) - \sqrt{n}\,\eta_n(w) \ge n^{1-r} - \sqrt{n}\,\|\eta_n\|. \]

Since 1 − r > 1/2 and β = β_0/log n,

\[ \exp(\sqrt{n})\int_{K(w)\ge 1/n^r}\exp(-n\beta K_n(w))\,\varphi(w)\,dw \le \exp(-n^{1-r}\beta + \sqrt{n} + \sqrt{n}\,\beta\|\eta_n\|), \]

which converges to zero in probability; this shows Equation (13). Next, let us prove Equation (14). Since the parameter set W is compact, ‖K‖ ≡ sup_w K(w) < ∞. Therefore

\[ |nK_n(w)| \le n\|K\| + \sqrt{n}\,\|\eta_n\| = n\,(\|K\| + \|\eta_n\|/\sqrt{n}). \]
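Equations (19) and (20) suggest a simple numerical recipe for the RLCT: sample once at β_1, reweight the same draws to β_2, and take the ratio in Equation (19). A minimal sketch under those two equations follows; the function name and the argument layout are illustrative, not from the paper.

```python
import numpy as np

def rlct_estimate(nLn_draws, beta1, beta2):
    """Estimate the RLCT via Corollary 3, reusing draws taken at beta1.

    nLn_draws: values n*L_n(w_r) for draws w_r from the tempered posterior
    at inverse temperature beta1 (e.g., beta1 = 1/log n, beta2 = 1.5/log n).
    """
    nLn = np.asarray(nLn_draws, dtype=float)
    e1 = nLn.mean()                      # E^{beta1}[n L_n(w)]
    logw = -(beta2 - beta1) * nLn        # importance log-weights, Eq. (20)
    logw -= logw.max()                   # stabilize before exponentiating
    w = np.exp(logw)
    e2 = (nLn * w).sum() / w.sum()       # E^{beta2}[n L_n(w)] via Eq. (20)
    return (e1 - e2) / (1.0 / beta1 - 1.0 / beta2)
```

Because β_2 − β_1 is of order 1/log n, the reweighting in Equation (20) is mild and, unlike reweighting all the way from β = 1, remains numerically stable.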
Hence

\[ \exp(\sqrt{n})\int_{K(w)\ge 1/n^r} |nK_n(w)|\exp(-n\beta K_n(w))\,\varphi(w)\,dw \le (\|K\| + \|\eta_n\|/\sqrt{n})\,\exp(-n^{1-r}\beta + \sqrt{n} + \sqrt{n}\,\beta\|\eta_n\| + \log n), \]

which converges to zero in probability. (Q.E.D.)

5.2 Proof of Lemma 3

Without loss of generality we can assume w_0 = 0. Since q(x) is regular for p(x|w), there exists w* such that

\[ K(w) = \frac{1}{2}\,w\cdot J(w^{*})\,w, \]

where J(w) is given in Equation (11). Since J(w_0) is a strictly positive definite matrix, there exists ε > 0 such that J(w*) is positive definite whenever K(w) ≤ ε. Let ℓ_1 and ℓ_2 be respectively the minimum and maximum eigenvalues of {J(w*) ; K(w) ≤ ε}. Then

\[ \frac{\ell_1}{4}\sum_{j=1}^{d} w_j^2 \le \frac{1}{2}\,w\cdot J(w^{*})\,w \le \ell_2\sum_{j=1}^{d} w_j^2. \]

By using a blow-up g : U_1 ∪ ⋯ ∪ U_d → W represented in each local coordinate U_i = (u_{i1}, u_{i2}, ..., u_{id}) by

\[ w_i = u_{ii}, \qquad w_j = u_{ii}u_{ij}\ (j\ne i), \]

it follows that

\[ \frac{\ell_1}{4}\,u_{ii}^2\Bigl(1+\sum_{j\ne i}u_{ij}^2\Bigr) \le \frac{u_{ii}^2}{2}\,(\hat u,\,J(w^{*})\hat u) \le \ell_2\,u_{ii}^2\Bigl(1+\sum_{j\ne i}u_{ij}^2\Bigr), \]

where û_{ij} = u_{ij} (j ≠ i) and û_{ii} = 1. These inequalities show that k_i = 1 in U_i, therefore Q(K(w), ϕ(w)) = 1. The Jacobian determinant of the blow-up is |g′(u)| = |u_{ii}|^{d−1}, hence λ = d/2 and m = 1. (Q.E.D.)

5.3 Proof of Theorem 3

Let us define a function F_n(β) of β > 0 by

\[ F_n(\beta) = -\log\int \prod_{i=1}^{n} p(X_i|w)^{\beta}\,\varphi(w)\,dw. \]

Then, by definition, F = F_n(1) and

\[ F_n'(\beta) = \mathbb{E}^{\beta}_{w}[nL_n(w)], \qquad F_n''(\beta) = -\mathbb{E}^{\beta}_{w}[(nL_n(w))^2] + \mathbb{E}^{\beta}_{w}[nL_n(w)]^2. \]

By the Cauchy-Schwarz inequality and the assumption that L_n(w) is not a constant function, F_n″(β) < 0, which shows (1). Since F_n(0) = 0,

\[ F = F_n(1) = \int_0^1 F_n'(\beta)\,d\beta. \]

By the mean value theorem, there exists β* (0 < β* < 1) such that

\[ F = F_n'(\beta^{*}) = \mathbb{E}^{\beta^{*}}_{w}[nL_n(w)]. \]

Since F_n′(β) is a decreasing function, β* is unique, which completes Theorem 3. (Q.E.D.)

5.4 First Preparation for Proof of Theorem 4

In this subsection we prepare the proof of Theorem 4. By Equations (9) and (10), the proof of Theorem 4 reduces to evaluating E^β_w[nK_n(w)]. By Lemma 1,

\[ \mathbb{E}^{\beta}_{w}[nK_n(w)] = \frac{B_n + o_p(\exp(-\sqrt{n}))}{A_n + o_p(\exp(-\sqrt{n}))}, \tag{21} \]

where A_n and B_n are respectively defined by

\[ A_n = \int_{K(w)<\varepsilon} \exp(-n\beta K_n(w))\,\varphi(w)\,dw, \tag{22} \]
\[ B_n = \int_{K(w)<\varepsilon} nK_n(w)\exp(-n\beta K_n(w))\,\varphi(w)\,dw. \tag{23} \]

By Theorem 1, an integral over {w ∈ W ; K(w) < ε} equals the corresponding integral over M. For a given set of local coordinates {U_α} of M, there exists a set of C^∞-class functions {ϕ_α(g(u))} such that, for every u ∈ M,

\[ \sum_{\alpha\in\mathcal{A}} \varphi_\alpha(g(u)) = \varphi(g(u)). \]

By this fact, for an arbitrary integrable function G(w),

\[ \int_{K(w)<\varepsilon} G(w)\,\varphi(w)\,dw = \sum_{\alpha\in\mathcal{A}}\int_{U_\alpha} G(g(u))\,\varphi_\alpha(g(u))\,|g'(u)|\,du. \]

Without loss of generality, we can assume that U_α ∩ supp ϕ(g(u)) is isomorphic to [−1,1]^d. Moreover, by Theorem 1, there exists a function b_α(u) > 0 such that

\[ \varphi_\alpha(g(u))\,|g'(u)| = |u^{h}|\,b_\alpha(u) \]

in each local coordinate. Consequently,

\[ \int_{K(w)<\varepsilon} G(w)\,\varphi(w)\,dw = \sum_{\alpha\in\mathcal{A}}\int_{[-1,1]^d} du\; G(g(u))\,|u^{h}|\,b_\alpha(u). \]

In each local coordinate, K(g(u)) = u^{2k}. We define a function ξ_n(u) by

\[ \xi_n(u) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\{u^{k} - a(X_i,u)\}. \]

Then

\[ K_n(g(u)) = u^{2k} - \frac{1}{\sqrt{n}}\,u^{k}\,\xi_n(u). \]

Note that

\[ u^{k} = \int a(x,u)\,q(x)\,dx \]

holds, because

\[ u^{2k} = K(g(u)) = \int f(x,g(u))\,q(x)\,dx = u^{k}\int a(x,u)\,q(x)\,dx. \]

Therefore E[ξ_n(u)] = 0 for every u. The function ξ_n(u) can be understood as a random process on M. Under Fundamental Conditions (1)-(4), it is proved in Theorems 6.1, 6.2, and 6.3 of Watanabe (2009) that
(1) ξ_n(u) converges in law to a Gaussian random process ξ(u), and E[sup_u ξ_n(u)²] → E[sup_u ξ(u)²];
(2) if q(x) is realizable by p(x|w) and if u^{2k} = 0, then

\[ \mathbb{E}[\xi_n(u)^2] = \mathbb{E}_X[a(X,u)^2] = 2. \tag{24} \]
By using the random process ξ_n(u), the two random variables A_n and B_n can be represented as integrals over M:

\[ A_n = \sum_{\alpha\in\mathcal{A}}\int_{[-1,1]^d} du\; \exp(-n\beta u^{2k} + \sqrt{n}\,\beta u^{k}\xi_n(u))\,|u^{h}|\,b_\alpha(u), \]
\[ B_n = \sum_{\alpha\in\mathcal{A}}\int_{[-1,1]^d} du\; (nu^{2k} - \sqrt{n}\,u^{k}\xi_n(u))\,\exp(-n\beta u^{2k} + \sqrt{n}\,\beta u^{k}\xi_n(u))\,|u^{h}|\,b_\alpha(u). \]

To prove Theorem 4, we study the asymptotics of these two random variables.

5.5 Second Preparation for Proof of Theorem 4

To evaluate the two integrals A_n and B_n as n → ∞, we must study the asymptotic behavior of the Schwartz distribution

\[ \delta(t - u^{2k})\,|u^{h}| \]

as t → 0. Without loss of generality, we can assume that, in each essential local coordinate,

\[ \lambda = \frac{h_1+1}{2k_1} = \frac{h_2+1}{2k_2} = \cdots = \frac{h_m+1}{2k_m} < \frac{h_j+1}{2k_j} \]

for every j with m < j ≤ d. A variable u ∈ R^d is written u = (u_a, u_b) ∈ R^m × R^{d−m}. We define a measure du* by

\[ du^{*} = \frac{\Bigl(\prod_{j=1}^{m}\delta(u_j)\Bigr)\Bigl(\prod_{j=m+1}^{d}(u_j)^{\mu_j}\Bigr)\,du}{2^{m}\,(m-1)!\,\prod_{j=1}^{m}k_j}, \tag{25} \]

where δ( ) is the Dirac delta function and μ = (μ_{m+1}, ..., μ_d) is the multi-index defined by

\[ \mu_j = -2\lambda k_j + h_j \qquad (m+1\le j\le d). \]

Then μ_j > −1, hence Equation (25) defines a measure on M. The support of du* is {u = (u_a, u_b) ; u_a = 0}.

Definition. Let σ be a d-dimensional vector of signs, σ = (σ_1, σ_2, ..., σ_d) with σ_j = ±1. The set of all such vectors is denoted by

\[ S(d) = \{\sigma ;\; \sigma_j = \pm 1\ (1\le j\le d)\}. \]

Also we write σu = (σ_1u_1, σ_2u_2, ..., σ_du_d) ∈ R^d. Then (σu)^k = σ^k u^k and (σu)^{2k} = u^{2k}. With this notation, we can derive the asymptotic behavior of δ(t − u^{2k})|u^h| as t → 0.

Lemma 4 Let G(u^{2k}, u^{k}, u) be a real-valued C^1-class function of (u^{2k}, u^{k}, u) (u ∈ R^d). The following asymptotic expansion holds as t → +0:

\[ \int_{[-1,1]^d} du\; \delta(t-u^{2k})\,|u^{h}|\,G(u^{2k},u^{k},u) = t^{\lambda-1}(-\log t)^{m-1}\sum_{\sigma\in S(d)}\int_{[0,1]^d} du^{*}\; G(t,\sigma^{k}\sqrt{t},u) + O\bigl(t^{\lambda-1}(-\log t)^{m-2}\bigr), \tag{26} \]

where du* is the measure defined by Equation (25).

(Proof of Lemma 4) Let Y(t) be the left-hand side of Equation (26). Then

\[ Y(t) = \sum_{\sigma\in S(d)}\int_{[0,1]^d} \delta(t-(\sigma u)^{2k})\,|(\sigma u)^{h}|\,G((\sigma u)^{2k},(\sigma u)^{k},\sigma u)\,d(\sigma u) = \sum_{\sigma\in S(d)}\int_{[0,1]^d}\delta(t-u^{2k})\,|u^{h}|\,G(t,\sigma^{k}\sqrt{t},u)\,du. \]

By Theorem 4.6 of Watanabe (2009), if u ∈ [0,1]^d, then

\[ \delta(t-u^{2k})\,|u^{h}|\,du = t^{\lambda-1}(-\log t)^{m-1}\,du^{*} + O\bigl(t^{\lambda-1}(-\log t)^{m-2}\bigr). \]

Applying this relation to Y(t), we obtain Lemma 4. (Q.E.D.)

5.6 Proof of Lemma 2

Let Φ(w) > 0 be an arbitrary C^∞-class function on W_ε. Let Y(t, Φ) (t > 0) be the function defined by

\[ Y(t,\Phi) \equiv \int_{K(w)<\varepsilon} \delta(t-K(w))\,f(x,w)\,\Phi(w)\,\varphi(w)\,dw, \]

whose value is independent of the choice of resolution map. By using a resolution map w = g(u),

\[ Y(t,\Phi) = \sum_{\alpha\in\mathcal{A}}\int_{[-1,1]^d} du\; \delta(t-u^{2k})\,u^{k}\,|u^{h}|\,a(x,u)\,\Phi(g(u))\,b_\alpha(u). \]

By Lemma 4, writing σ = (σ_a, σ_b),

\[ Y(t,\Phi) = t^{\lambda-1/2}(-\log t)^{m-1}\sum_{\alpha\in\mathcal{A}^{*}}\sum_{\sigma_a\in S(m)}(\sigma_a)^{k}\sum_{\sigma_b\in S(d-m)}(\sigma_b)^{k}\int_{[0,1]^d} du^{*}\; a(x,\sigma u)\,\Phi(g(\sigma u))\,b_\alpha(\sigma u) + O\bigl(t^{\lambda-1/2}(-\log t)^{m-2}\bigr). \]

By the assumption that the true distribution is realizable by the statistical model, Equation (24) shows that there exists x such that a(x,u) ≠ 0 for u^{2k} = 0. On the support of du*, σu = (σ_a u_a, σ_b u_b) = (0, σ_b u_b); consequently the main order term of Y(t, Φ) is determined by Φ(0, u_b). In order to prove the lemma, it suffices to show that Q(K(g(u)), ϕ(g(u))) = 1 is equivalent to the proposition that, for every Φ, the main term of Y(t, Φ) equals zero. First, assume Q(K(g(u)), ϕ(g(u))) = 1. Then at least one k_j (1 ≤ j ≤ m) is odd, so σ_a^k takes both values ±1, hence

\[ \sum_{\sigma_a\in S(m)} \sigma_a^{k} = 0, \]

which shows that the coefficient of the main order term of Y(t, Φ) (t → +0) is zero for an arbitrary Φ(w).
Second, assume Q(K(g(u)), ϕ(g(u))) = 0. Then all k_j (1 ≤ j ≤ m) are even, hence

\[ \sum_{\sigma_a\in S(m)}\sigma_a^{k} = \sum_{\sigma_a\in S(m)} 1 \ne 0. \]

Then there exists a function Φ(w) such that the main order term is not zero. Therefore Q(K(g(u)), ϕ(g(u))) does not depend on the resolution map. (Q.E.D.)

5.7 Proof of Theorem 4

In this subsection we prove Theorem 4 using the foregoing preparations. We need to study A_n and B_n of Equations (22) and (23). Firstly, we study A_n:

\[ A_n = \sum_{\alpha\in\mathcal{A}}\int_{[-1,1]^d} du\; \exp(-n\beta u^{2k} + \beta\sqrt{n}\,u^{k}\xi_n(u))\,|u^{h}|\,b_\alpha(u) = \sum_{\alpha\in\mathcal{A}}\int_{[-1,1]^d} du\int_0^{\infty} dt\; \delta(t-u^{2k})\,|u^{h}|\,b_\alpha(u)\,\exp(-n\beta u^{2k} + \beta\sqrt{n}\,u^{k}\xi_n(u)). \]

By the substitution t := t/(nβ), dt := dt/(nβ),

\[ A_n = \sum_{\alpha\in\mathcal{A}}\int_{[-1,1]^d} b_\alpha(u)\,du \int_0^{\infty}\frac{dt}{n\beta}\;\delta\Bigl(\frac{t}{n\beta}-u^{2k}\Bigr)\,|u^{h}|\,\exp(-n\beta u^{2k} + \beta\sqrt{n}\,u^{k}\xi_n(u)). \]

For simplicity we write

\[ \int_{\mathcal{M}} du^{*} \equiv \sum_{\alpha\in\mathcal{A}^{*}}\sum_{\sigma\in S(d)}\int_{[0,1]^d} b_\alpha(u)\,du^{*}, \qquad \xi_n^{*}(u) \equiv \sigma^{k}\xi_n(u), \]

where {U_α ; α ∈ A*} is the set of all essential local coordinates. Then, by Lemma 4, δ(t/(nβ) − u^{2k}) can be asymptotically expanded because t/(nβ) → 0, hence

\[ A_n = \int_0^{\infty}\frac{dt}{n\beta}\Bigl(\frac{t}{n\beta}\Bigr)^{\lambda-1}\Bigl(-\log\frac{t}{n\beta}\Bigr)^{m-1}\int_{\mathcal{M}} du^{*}\;\exp(-t+\sqrt{\beta t}\,\xi_n^{*}(u)) + O_p\Bigl(\frac{(\log(n\beta))^{m-2}}{(n\beta)^{\lambda}}\Bigr) \]
\[ = \frac{(\log(n\beta))^{m-1}}{(n\beta)^{\lambda}}\int_{\mathcal{M}} du^{*}\int_0^{\infty} dt\; t^{\lambda-1}e^{-t}\exp(\sqrt{\beta t}\,\xi_n^{*}(u)) + O_p\Bigl(\frac{(\log(n\beta))^{m-2}}{(n\beta)^{\lambda}}\Bigr). \]

Since β = β_0/log n → 0,

\[ \exp(\sqrt{\beta t}\,\xi_n^{*}(u)) = 1 + \sqrt{\beta t}\,\xi_n^{*}(u) + O_p(\beta). \]

By using the gamma function

\[ \Gamma(\lambda) = \int_0^{\infty} t^{\lambda-1}e^{-t}\,dt, \]

it follows that

\[ A_n = \frac{(\log(n\beta))^{m-1}}{(n\beta)^{\lambda}}\Bigl\{\Gamma(\lambda)\int_{\mathcal{M}} du^{*} + \sqrt{\beta}\,\Gamma\Bigl(\lambda+\frac{1}{2}\Bigr)\int_{\mathcal{M}} du^{*}\,\xi_n^{*}(u)\Bigr\} + O_p\Bigl(\frac{(\log(n\beta))^{m-2}}{(n\beta)^{\lambda}}\Bigr). \]

Secondly, B_n can be calculated in the same way:

\[ B_n = \sum_{\alpha\in\mathcal{A}}\int_{[-1,1]^d} du\int_0^{\infty} dt\;\delta(t-u^{2k})\,|u^{h}|\,b_\alpha(u)\,(nu^{2k}-\sqrt{n}\,u^{k}\xi_n(u))\,\exp(-n\beta u^{2k}+\beta\sqrt{n}\,u^{k}\xi_n(u)). \]

By the substitution t := t/(nβ), dt := dt/(nβ), and Lemma 4,

\[ B_n = \frac{(\log(n\beta))^{m-1}}{\beta(n\beta)^{\lambda}}\int_{\mathcal{M}} du^{*}\int_0^{\infty} dt\; t^{\lambda-1}\bigl(t-\sqrt{\beta t}\,\xi_n^{*}(u)\bigr)e^{-t}\exp(\sqrt{\beta t}\,\xi_n^{*}(u)) + O_p\Bigl(\frac{(\log(n\beta))^{m-2}}{\beta(n\beta)^{\lambda}}\Bigr). \]

Therefore,

\[ B_n = \frac{(\log(n\beta))^{m-1}}{\beta(n\beta)^{\lambda}}\Bigl\{\Gamma(\lambda+1)\int_{\mathcal{M}} du^{*} + \sqrt{\beta}\,\Gamma\Bigl(\lambda+\frac{3}{2}\Bigr)\int_{\mathcal{M}} du^{*}\,\xi_n^{*}(u) - \sqrt{\beta}\,\Gamma\Bigl(\lambda+\frac{1}{2}\Bigr)\int_{\mathcal{M}} du^{*}\,\xi_n^{*}(u)\Bigr\} + O_p\Bigl(\frac{(\log(n\beta))^{m-2}}{\beta(n\beta)^{\lambda}}\Bigr). \]

Let us define a random variable Θ by

\[ \Theta = \frac{\int_{\mathcal{M}} du^{*}\,\xi_n^{*}(u)}{\int_{\mathcal{M}} du^{*}}. \]

Applying the results for A_n and B_n to Equation (21),

\[ \mathbb{E}^{\beta}_{w}[nK_n(w)] = \frac{1}{\beta}\cdot\frac{\Gamma(\lambda+1) + \sqrt{\beta}\,\Theta\,\{\Gamma(\lambda+3/2)-\Gamma(\lambda+1/2)\}}{\Gamma(\lambda) + \sqrt{\beta}\,\Theta\,\Gamma(\lambda+1/2)} + O_p(1). \]

Note that, if a, b, c, d are constants and β → 0,

\[ \frac{c+\sqrt{\beta}\,d}{a+\sqrt{\beta}\,b} = \frac{c}{a} + \sqrt{\beta}\,\frac{ad-bc}{a^2} + O(\beta). \]

Then, by the identity

\[ \frac{\Gamma(\lambda)\bigl(\Gamma(\lambda+3/2)-\Gamma(\lambda+1/2)\bigr) - \Gamma(\lambda+1)\Gamma(\lambda+1/2)}{\Gamma(\lambda)^2} = -\frac{\Gamma(\lambda+1/2)}{2\Gamma(\lambda)}, \]

we obtain

\[ \mathbb{E}^{\beta}_{w}[nK_n(w)] = \frac{1}{\beta}\,\frac{\Gamma(\lambda+1)}{\Gamma(\lambda)} - \frac{\Theta}{\sqrt{\beta}}\,\frac{\Gamma(\lambda+1/2)}{2\Gamma(\lambda)} + O_p(1). \tag{27} \]

A random variable U_n is defined by

\[ U_n = -\frac{\Theta\,\Gamma(\lambda+1/2)}{\sqrt{2\lambda}\,\Gamma(\lambda)}. \tag{28} \]

Then it follows that

\[ \mathbb{E}^{\beta}_{w}[nK_n(w)] = \frac{\lambda}{\beta} + U_n\sqrt{\frac{\lambda}{2\beta}} + O_p(1). \]

By the definition of ξ_n(u), E[Θ] = 0, hence E[U_n] = 0. By the Cauchy-Schwarz inequality,

\[ \Theta^2 \le \frac{\int_{\mathcal{M}} du^{*}\,\xi_n^{*}(u)^2}{\int_{\mathcal{M}} du^{*}}. \]

Lastly, let us study the case that q(x) is realizable by p(x|w). The support of du* is contained in u^{2k} = 0, hence we can apply Equation (24) to Θ:

\[ \mathbb{E}[\Theta^2] \le \frac{\int_{\mathcal{M}} du^{*}\,\mathbb{E}[\xi_n^{*}(u)^2]}{\int_{\mathcal{M}} du^{*}} = 2. \]

The gamma function satisfies

\[ \frac{\Gamma(\lambda+1/2)}{\Gamma(\lambda)} < \sqrt{\lambda} \qquad (\lambda > 0). \]

Hence we obtain

\[ \mathbb{E}[(U_n)^2] \le \frac{\mathbb{E}[\Theta^2]}{2\lambda}\Bigl(\frac{\Gamma(\lambda+1/2)}{\Gamma(\lambda)}\Bigr)^2 < 1, \]

which completes Theorem 4. (Q.E.D.)

5.8 Proof of Corollary 1

By Equations (27) and (28), it suffices to prove Θ = 0, where

\[ \Theta = \frac{\sum_{\alpha\in\mathcal{A}^{*}}\sum_{\sigma\in S(d)}\int_{[0,1]^d} b_\alpha(u)\,du^{*}\,\sigma^{k}\xi_n(u)}{\sum_{\alpha\in\mathcal{A}^{*}}\sum_{\sigma\in S(d)}\int_{[0,1]^d} b_\alpha(u)\,du^{*}}. \]

The support of the measure du* is contained in the set {u = (0, u_b)}.
We use the notation σ = (σ_a, σ_b) ∈ R^m × R^{d−m}. If Q(q, p, ϕ) = 1, then there exists a resolution map w = g(u) such that σ_a^k takes both values +1 and −1 in every essential local coordinate, hence

\[ \sum_{\sigma_a\in S(m)}\sigma_a^{k} = 0. \]

It follows that

\[ \sum_{\sigma\in S(d)}\sigma^{k}\,\xi_n(0,u_b) = \sum_{\sigma_b\in S(d-m)}\sigma_b^{k}\,\xi_n(0,u_b)\sum_{\sigma_a\in S(m)}\sigma_a^{k} = 0; \]

therefore Θ = 0, which completes Corollary 1. (Q.E.D.)

5.9 Proof of Corollary 2

By using the optimal inverse temperature β*, we define T = 1/(β* log n). By definition, F = E^{β*}_w[nL_n(w)]. By Theorems 2 and 4,

\[ \lambda\log n = T\lambda\log n + U_n\sqrt{T\lambda\log n/2} + V_n, \]

where V_n = O_p(log log n). It follows that

\[ T - 1 + \frac{U_n\sqrt{T}}{\sqrt{2\lambda\log n}} + \frac{V_n}{\lambda\log n} = 0. \]

Therefore,

\[ \sqrt{T} = -\frac{U_n}{\sqrt{8\lambda\log n}} + \sqrt{1 + \frac{(U_n)^2}{8\lambda\log n} - \frac{V_n}{\lambda\log n}}. \]

Since U_n = O_p(1), this results in

\[ T = 1 - \frac{U_n}{\sqrt{2\lambda\log n}} + o_p\Bigl(\frac{1}{\sqrt{\lambda\log n}}\Bigr), \]

so that

\[ \beta^{*}\log n = 1 + \frac{U_n}{\sqrt{2\lambda\log n}} + o_p\Bigl(\frac{1}{\sqrt{\lambda\log n}}\Bigr), \]

which completes Corollary 2. (Q.E.D.)

5.10 Proof of Corollary 3

By Theorem 4,

\[ \mathbb{E}^{\beta_1}_{w}[nL_n(w)] = nL_n(w_0) + \frac{\lambda}{\beta_1} + O_p(\sqrt{\log n}), \]
\[ \mathbb{E}^{\beta_2}_{w}[nL_n(w)] = nL_n(w_0) + \frac{\lambda}{\beta_2} + O_p(\sqrt{\log n}). \]

Since 1/β_1 − 1/β_2 = O_p(log n),

\[ \lambda = \frac{\mathbb{E}^{\beta_1}_{w}[nL_n(w)] - \mathbb{E}^{\beta_2}_{w}[nL_n(w)]}{1/\beta_1 - 1/\beta_2} + O_p(1/\sqrt{\log n}), \]

which shows Corollary 3. (Q.E.D.)

5.11 Proof of Theorem 5

By Equations (9) and (10),

\[ \mathbb{E}^{\beta}_{w}[nL_n(w)] = nL_n(w_0) + \mathbb{E}^{\beta}_{w}[nK_n(w)], \]

so the proof of Theorem 5 reduces to evaluating E^β_w[nK_n(w)]. By Lemma 1 with r = 1/4,

\[ \mathbb{E}^{\beta}_{w}[nK_n(w)] = \frac{D_n + o_p(\exp(-\sqrt{n}))}{C_n + o_p(\exp(-\sqrt{n}))}, \]

where C_n and D_n are respectively defined by

\[ C_n = \int_{K<1/n^{1/4}} \exp(-n\beta K_n(w))\,\varphi(w)\,dw, \]
\[ D_n = \int_{K<1/n^{1/4}} nK_n(w)\exp(-n\beta K_n(w))\,\varphi(w)\,dw. \]

If a statistical model is regular, the maximum likelihood estimator ŵ converges to w_0 in probability. Let J_n(w) be the d × d matrix defined by

\[ (J_n)_{ij}(w) = \frac{\partial^2 K_n}{\partial w_i\,\partial w_j}(w). \]

There exists a parameter w* such that

\[ K_n(w) = K_n(\hat w) + \frac{1}{2}(w-\hat w)\cdot J_n(w^{*})(w-\hat w). \]

Since ŵ → w_0 in probability and K(w) < 1/n^{1/4}, w* → w_0 in probability. Then

\[ \|J_n(w^{*})-J(w_0)\| \le \|J_n(w^{*})-J_n(w_0)\| + \|J_n(w_0)-J(w_0)\| \le \|w^{*}-w_0\|\sup_{K(w)<1/n^{1/4}}\Bigl\|\frac{\partial J_n(w)}{\partial w}\Bigr\| + \|J_n(w_0)-J(w_0)\|, \]

which converges to zero in probability as n → ∞. Therefore J_n(w*) = J(w_0) + o_p(1). Since the model is regular, J(w_0) is a positive definite matrix. Now

\[ C_n = \exp(-n\beta K_n(\hat w))\int_{K(w)<1/n^{1/4}} \exp\Bigl(-\frac{n\beta}{2}(w-\hat w)\cdot(J(w_0)+o_p(1))(w-\hat w)\Bigr)\varphi(w)\,dw. \]

By substituting u = √(nβ)(w − ŵ), it follows that

\[ C_n = \exp(-n\beta K_n(\hat w))\,(n\beta)^{-d/2}\int \exp\Bigl(-\frac{1}{2}u\cdot(J(w_0)+o_p(1))u\Bigr)\varphi\Bigl(\hat w+\frac{u}{\sqrt{n\beta}}\Bigr)du = \frac{(2\pi)^{d/2}\exp(-n\beta K_n(\hat w))\,(\varphi(\hat w)+o_p(1))}{(n\beta)^{d/2}\det(J(w_0)+o_p(1))^{1/2}}. \]

In the same way,

\[ D_n = \exp(-n\beta K_n(\hat w))\int_{K(w)<1/n^{1/4}}\Bigl\{nK_n(\hat w)+\frac{n}{2}(w-\hat w)\cdot(J(w_0)+o_p(1))(w-\hat w)\Bigr\}\exp\Bigl(-\frac{n\beta}{2}(w-\hat w)\cdot(J(w_0)+o_p(1))(w-\hat w)\Bigr)\varphi(w)\,dw \]
\[ = \frac{(2\pi)^{d/2}\exp(-n\beta K_n(\hat w))\,(\varphi(\hat w)+o_p(1))}{(n\beta)^{d/2}\det(J(w_0)+o_p(1))^{1/2}}\Bigl(nK_n(\hat w)+\frac{d}{2\beta}+o_p(1)\Bigr). \]

Here nK_n(ŵ) = O_p(1), because the true distribution is regular for the statistical model. Therefore,

\[ \mathbb{E}^{\beta}_{w}[nL_n(w)] = nL_n(w_0) + nK_n(\hat w) + \frac{d}{2\beta} + o_p(1), \]

which completes Theorem 5. (Q.E.D.)

6. How to Use WBIC

In this section we show how to use WBIC in statistical model evaluation. The main theorems have already been mathematically proved, hence WBIC has theoretical support.
The following experiment was conducted not to prove the theorems but to illustrate how to use them.

6.1 Statistical Model Selection

Firstly, we study model selection using WBIC. Let x ∈ R^M, y ∈ R^N, and w = (A, B), where A is an H × M matrix and B is an N × H matrix. A reduced rank regression model is defined by

\[ p(x,y|w) = \frac{r(x)}{(2\pi\sigma^2)^{N/2}}\exp\Bigl(-\frac{1}{2\sigma^2}\|y-BAx\|^2\Bigr), \]

where r(x) is a probability density function of x and σ² is the variance of the output. Let N_M(0, Σ) denote the M-dimensional normal distribution with mean zero and covariance matrix Σ. In the experiment we set σ = 0.1, r(x) = N_M(0, 3²I), where I is the identity matrix, and ϕ(w) = N_d(0, 10²I). The true distribution was fixed as p(x,y|w_0), where w_0 = (A_0, B_0) was chosen so that A_0 and B_0 were respectively H_0 × M and N × H_0 matrices. Note that, in reduced rank regression models, RLCTs and multiplicities were clarified by Aoyagi and Watanabe (2005), and Q(K(w), ϕ(w)) = 1 for arbitrary q(x), p(x|w), and ϕ(w). In the experiment, M = N = 6 and the true rank was set to H_0 = 3. Each element of A_0 and B_0 was drawn from N_1(0, 0.2²) and fixed. From the true distribution p(x,y|w_0), 100 sets of n = 500 training samples were generated. The Metropolis method was employed for sampling from the posterior distribution,

\[ p(w|X_1,X_2,\ldots,X_n) \propto \exp(-\beta nL_n(w) + \log\varphi(w)), \]

where β = 1/log n. Each Metropolis proposal was drawn from the normal distribution N_d(0, (0.0012)²I), for which the acceptance probability was between 0.1 and 0.9. The first 50000 Metropolis trials were discarded; after that, R = 2000 parameters {w_r ; r = 1, 2, ..., R} were collected, one every 100 Metropolis steps. The expectation of a function G(w) over the posterior distribution was approximated by

\[ \mathbb{E}^{\beta}_{w}[G(w)] = \frac{1}{R}\sum_{r=1}^{R} G(w_r). \]

The six statistical models H = 1, 2, 3, 4, 5, 6 were compared by the criterion

\[ \mathrm{WBIC} = \mathbb{E}^{\beta}_{w}[nL_n(w)], \qquad \beta = 1/\log n. \]

To compare these values among the models, we show WBIC1, WBIC2, and BIC in Table 2.

H | WBIC1 Ave. | WBIC1 Std. | WBIC2 Ave. | WBIC2 Std. | BIC Ave. | BIC Std.
1 | 17899.82 | 1081.30 | 17899.77 | 1081.30 | 17899.77 | 1081.30
2 | 3088.90 | 226.94 | 3089.03 | 226.97 | 3089.03 | 226.97
3 | 71.11 | 3.67 | 71.18 | 3.54 | 71.18 | 3.54
4 | 78.21 | 3.78 | 75.43 | 3.89 | 83.47 | 3.89
5 | 83.23 | 3.97 | 82.54 | 4.03 | 91.86 | 4.03
6 | 87.58 | 4.09 | 86.83 | 4.08 | 94.87 | 4.08

Table 2: WBIC and BIC in Model Selection

In the table, the average and standard deviation over 100 independent sets of training samples of

\[ \mathrm{WBIC1} = \mathrm{WBIC} - nS_n \]

are shown, where the empirical entropy of the true distribution,

\[ S_n = -\frac{1}{n}\sum_{i=1}^{n}\log q(X_i), \]

does not depend on the statistical model. Although nS_n does not affect model selection, its standard deviation is proportional to √n; in order to estimate the standard deviation of the essential part of WBIC, the effect of nS_n was removed. Over the 100 independent sets of training samples, the true model H = 3 was chosen 100 times in this experiment, which demonstrates a typical application of WBIC. WBIC2 in Table 2 shows the average and standard deviation of

\[ \mathrm{WBIC2} = nL_n(\hat w) + \lambda\log n - (m-1)\log\log n - nS_n, \]

where ŵ is the maximum likelihood estimator, λ is the real log canonical threshold, and m is the multiplicity. The maximum likelihood estimator ŵ = (Â, B̂) in reduced rank regression is given by (Anderson, 1951)

\[ \hat B\hat A = VV^{t}\,YX^{t}(XX^{t})^{-1}, \]

where ^t denotes the transpose and X, Y, and V are matrices defined as follows.
(1) X_{ij} is the i-th coefficient of x_j.
(2) Y_{ij} is the i-th coefficient of y_j.
(3) V is made of the eigenvectors associated with the largest H eigenvalues of the matrix

\[ YX^{t}(XX^{t})^{-1}XY^{t}. \]
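As an illustration of the sampling scheme described above, the following is a self-contained toy sketch of a random-walk Metropolis sampler for the tempered posterior, using a one-dimensional normal-mean model rather than the paper's reduced rank regression; all concrete choices (model, proposal scale, burn-in, thinning) are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: model p(x|w) = N(x; w, 1), true mean 0.
n = 500
X = rng.normal(0.0, 1.0, size=n)

def n_Ln(w):
    # n * L_n(w) = -sum_i log p(X_i | w) for the N(w, 1) model
    return 0.5 * np.sum((X - w) ** 2) + 0.5 * n * np.log(2 * np.pi)

def log_prior(w):
    # N(0, 10^2) prior, mirroring the paper's choice of a broad normal prior
    return -0.5 * (w / 10.0) ** 2 - np.log(10.0 * np.sqrt(2 * np.pi))

beta = 1.0 / np.log(n)  # inverse temperature of Equation (4)

def log_target(w):
    # tempered posterior: -beta * n L_n(w) + log phi(w), up to a constant
    return -beta * n_Ln(w) + log_prior(w)

# Random-walk Metropolis on the tempered posterior.
w, lp = 0.0, log_target(0.0)
burn_in, thin, R = 5000, 10, 2000
draws = []
for step in range(burn_in + thin * R):
    prop = w + rng.normal(0.0, 0.1)   # proposal scale is problem dependent
    lp_prop = log_target(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        w, lp = prop, lp_prop
    if step >= burn_in and (step - burn_in) % thin == 0:
        draws.append(w)

wbic = np.mean([n_Ln(w_r) for w_r in draws])
print("WBIC =", wbic)
```

The only change relative to an ordinary posterior sampler is the factor β multiplying nL_n(w) in the log target; the WBIC estimate itself is then the plain average of nL_n(w) over the retained draws.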
If we know which model equals the true distribution, and if we have the theoretical real log canonical threshold and multiplicity, then we can calculate WBIC2. The value BIC in Table 2 shows the average and standard deviation of the Schwarz BIC,

\[ \mathrm{BIC} = nL_n(\hat w) + \frac{d}{2}\log n - nS_n, \]

where d is the essential dimension of the parameter space of the reduced rank regression,

\[ d = H(M+N-H). \]

We used this dimension because the number of parameters in reduced rank regression is H(M+N) and the model has H² free dimensions. These results show that WBIC is a better approximation of the Bayes free energy than BIC.

6.2 Estimating the RLCT

Secondly, we study how to estimate an RLCT. Using the same experiment as in the foregoing subsection, we estimated the RLCTs of reduced rank regression models by means of Corollary 3. Based on Equation (19), the estimated RLCT is given by

\[ \hat\lambda = \frac{\mathbb{E}^{\beta_1}_{w}[nL_n(w)] - \mathbb{E}^{\beta_2}_{w}[nL_n(w)]}{1/\beta_1 - 1/\beta_2}, \]

where β_1 = 1/log n and β_2 = 1.5/log n, and Equation (20) was used in the calculation of E^{β_2}_w[ ].

H | 1 | 2 | 3 | 4 | 5 | 6
Theory λ | 5.5 | 10 | 13.5 | 15 | 16 | 17
Theory m | 1 | 1 | 1 | 2 | 1 | 2
Average λ̂ | 5.51 | 9.95 | 13.49 | 14.80 | 15.72 | 16.55
Std. Dev. λ̂ | 0.17 | 0.31 | 0.52 | 0.65 | 0.66 | 0.72

Table 3: RLCTs for the case H_0 = 3

"Theory λ" in Table 3 shows the theoretical values of the RLCTs of reduced rank regression. For the cases when the true distribution is unrealizable by the statistical model, the RLCT is given by half the dimension of the parameter space,

\[ \lambda = H(M+N-H)/2. \]

The averages and standard deviations of λ̂ in Table 3 show the estimated RLCTs; the theoretical RLCTs were estimated well. The difference between theory and experiment was caused by terms of smaller order than log n. In the cases with multiplicity m = 2, the log log n term also affected the results.

7. Discussion

In this section, we discuss the widely applicable Bayesian information criterion from three points of view.

7.1 WAIC and WBIC

Firstly, let us study the difference between the free energy and the generalization error. In the present paper, we studied the Bayes free energy F as the statistical model selection criterion. Its expectation value is given by

\[ \mathbb{E}[F] = nS + \int q(x^n)\log\frac{q(x^n)}{p(x^n)}\,dx^n, \]

where S is the entropy of the true distribution,

\[ q(x^n) = \prod_{i=1}^{n} q(x_i), \qquad p(x^n) = \int\prod_{i=1}^{n} p(x_i|w)\,\varphi(w)\,dw, \]

and dx^n = dx_1 dx_2 ⋯ dx_n. Hence minimizing E[F] is equivalent to minimizing the Kullback-Leibler distance from q(x^n) to p(x^n). A different model evaluation criterion is the generalization loss, defined by

\[ G = -\int q(x)\log p^{*}(x)\,dx, \tag{29} \]

where p*(x) is the Bayes predictive distribution p*(x) = E^β_w[p(x|w)] with β = 1. The expectation of G satisfies

\[ \mathbb{E}[G] = S + \mathbb{E}\Bigl[\int q(x)\log\frac{q(x)}{p^{*}(x)}\,dx\Bigr]. \]

Hence minimizing E[G] is equivalent to minimizing the Kullback-Leibler distance from q(x) to p*(x). Both F and G are important in statistics and learning theory, but they are different criteria. The well-known model selection criteria AIC and BIC are respectively defined by

\[ \mathrm{AIC} = L_n(\hat w) + \frac{d}{n}, \qquad \mathrm{BIC} = nL_n(\hat w) + \frac{d}{2}\log n. \]

If the true distribution is realizable by and regular for the statistical model, then

\[ \mathbb{E}[\mathrm{AIC}] = \mathbb{E}[G] + o(1/n), \qquad \mathbb{E}[\mathrm{BIC}] = \mathbb{E}[F] + O(1). \]

These relations can be generalized to singular statistical models. We define WAIC and WBIC by

\[ \mathrm{WAIC} = T_n + V_n/n, \tag{30} \]
\[ \mathrm{WBIC} = \mathbb{E}^{\beta}_{w}[nL_n(w)], \qquad \beta = 1/\log n, \]

where
\[ T_n = -\frac{1}{n}\sum_{i=1}^{n}\log p^{*}(X_i), \qquad V_n = \sum_{i=1}^{n}\Bigl\{\mathbb{E}_w[(\log p(X_i|w))^2] - \mathbb{E}_w[\log p(X_i|w)]^2\Bigr\}. \]

Then, even if the true distribution is unrealizable by and singular for the statistical model,

\[ \mathbb{E}[\mathrm{WAIC}] = \mathbb{E}[G] + O\Bigl(\frac{1}{n^2}\Bigr), \tag{31} \]
\[ \mathbb{E}[\mathrm{WBIC}] = \mathbb{E}[F] + O(\log\log n), \tag{32} \]

where Equation (31) was proved in Watanabe (2009, 2010b), and Equation (32) has been proved in the present paper. In fact, the difference between the average leave-one-out cross validation and the average generalization error is proportional to 1/n², and the difference between the leave-one-out cross validation and WAIC is also proportional to 1/n². The difference between E[WBIC] and E[F] is caused by the multiplicity m. If the true distribution is realizable by and regular for the statistical model, WAIC and WBIC respectively coincide with AIC and BIC:

\[ \mathrm{WAIC} = \mathrm{AIC} + o_p(1/n), \qquad \mathrm{WBIC} = \mathrm{BIC} + o_p(1). \]

A theoretical comparison of WAIC and WBIC in singular model selection is an important problem for future study.

7.2 Other Methods for Evaluating the Free Energy

Secondly, we discuss several other methods for numerically evaluating the Bayes free energy. There are three methods other than WBIC. Firstly, let {β_j ; j = 0, 1, 2, ..., J} be a sequence which satisfies 0 = β_0 < β_1 < ⋯ < β_J = 1. Then the Bayes free energy satisfies

\[ F = -\sum_{j=1}^{J}\log \mathbb{E}^{\beta_{j-1}}_{w}\bigl[\exp(-n(\beta_j-\beta_{j-1})L_n(w))\bigr]. \]

This method can be used without asymptotic theory. We can estimate F if J is sufficiently large and if all the expectations over the posterior distributions {E^{β_{j−1}}_w[ ]} are computed precisely. The disadvantage of this method is its huge computational cost for accurate calculation. In the present paper, this method is referred to as the "all temperatures method".

Secondly, the importance sampling method is often used. Let H(w) be a function that approximates nL_n(w). For an arbitrary function G(w), define the expectation Ê_w[ ] by

\[ \hat{\mathbb{E}}_w[G(w)] = \frac{\int G(w)\exp(-H(w))\,\varphi(w)\,dw}{\int \exp(-H(w))\,\varphi(w)\,dw}. \]

Then

\[ F = -\log \hat{\mathbb{E}}_w[\exp(-nL_n(w)+H(w))] - \log\int\exp(-H(w))\,\varphi(w)\,dw, \]

where the last term is the free energy of H(w). Hence, if we can find an H(w) whose free energy can be calculated analytically and from whose distribution Ê_w[ ] it is easy to generate random samples, then F can be evaluated numerically. The accuracy of this method depends strongly on the choice of H(w).

Thirdly, a two-step method was proposed by Drton (2010). Assume that theoretical values of the RLCTs are available for all combinations of true distributions and statistical models. In the first step, a null-hypothesis model is chosen using BIC. In the second step, the optimal model is chosen using the RLCTs under the assumption that the null-hypothesis model is the true distribution. If the selected model differs from the null-hypothesis model, the same procedure is applied recursively until the null-hypothesis model becomes the optimal model. In this method, asymptotic theory is necessary, but the RLCTs contain no fluctuation because they are theoretical values.

Compared with these methods, WBIC needs asymptotic theory but does not need theoretical results about RLCTs. A theoretical comparison of these methods is summarized in Table 4.

Method | Asymptotics | RLCT | Comput. Cost
All Temperatures | Not used | Not used | Huge
Importance Sampling | Not used | Not used | Small
Two-Step | Used | Used | Small
WBIC | Used | Not used | Small

Table 4: Comparison of Several Methods
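For comparison with WBIC, the following is a minimal sketch of the "all temperatures method" identity above, assuming posterior draws of nL_n(w) are already available at each inverse temperature; the function name and the array layout are illustrative, not from the paper.

```python
import numpy as np

def free_energy_all_temperatures(nLn_draws_per_beta, betas):
    """Estimate F = -log(marginal likelihood) over a temperature ladder.

    betas: increasing sequence 0 = beta_0 < ... < beta_J = 1.
    nLn_draws_per_beta[j]: array of n*L_n(w_r) values for draws w_r from the
    posterior at inverse temperature betas[j]  (j = 0, ..., J-1 are used).
    Implements F = -sum_j log E^{beta_{j-1}}[exp(-n(beta_j - beta_{j-1}) L_n(w))].
    """
    F = 0.0
    for j in range(1, len(betas)):
        d_beta = betas[j] - betas[j - 1]
        nLn = np.asarray(nLn_draws_per_beta[j - 1])   # draws at beta_{j-1}
        a = -d_beta * nLn
        # log-mean-exp for numerical stability
        F -= np.max(a) + np.log(np.mean(np.exp(a - np.max(a))))
    return F
```

Each term of the sum requires its own MCMC run at one rung of the ladder, which is where the "huge" computational cost in Table 4 comes from; WBIC replaces the whole ladder by a single run at β = 1/log n.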
The effectiveness of a model selection method depends strongly on the statistical condition determined by the true distribution, the statistical model, the prior distribution, and the set of training samples. Under some conditions one method may be more effective, while under others another may be. The proposed method, WBIC, gives a new approach to the numerical calculation of the Bayes free energy, which is most useful in cooperation with the conventional methods. It is a subject for future study to clarify which method is recommended under which statistical conditions.

Remark. One of the most important problems in Bayes statistics is how to construct an accurate Markov chain Monte Carlo (MCMC) process. There are several MCMC methods, for example the Metropolis method, the Gibbs sampler, the hybrid Monte Carlo method, and the exchange Monte Carlo method. Numerical calculation of WBIC depends on the accuracy of the MCMC process.

7.3 Algebraic Geometry and Statistics

Lastly, let us discuss the relation between algebraic geometry and statistics. In the present paper, we defined the parity of a statistical model Q(K(w), ϕ(w)) and proved that it affects the asymptotic behavior of WBIC. In this subsection we show three mathematical properties of the parity of a statistical model.

Firstly, the parity is related to the analytic continuation of K(w)^{1/2}. For example, by using the blow-up (a, b) = (a_1, a_1b_1) = (a_2b_2, b_2), the analytic continuation of (a² + b²)^{1/2} is given by

\[ (a^2+b^2)^{1/2} = a_1\sqrt{1+b_1^2} = b_2\sqrt{a_2^2+1}, \]

which takes both positive and negative values. On the other hand, (a⁴ + b⁴)^{1/2} takes only nonnegative values. The parity captures this difference.

Secondly, the parity is related to statistical models with restricted parameter sets. For example, the statistical model

\[ p(x|a) = \frac{1}{\sqrt{2\pi}}\exp\Bigl(-\frac{(x-a)^2}{2}\Bigr) \]

with parameter set {a ≥ 0} is equivalent to the statistical model p(x|b²) with {b ∈ R}. In other words, a statistical model with a restricted parameter set is statistically equivalent to an even model with an unrestricted parameter set. We conjecture that an even statistical model is related in this way to a model with a restricted parameter set.

And lastly, the parity is related to the difference between K(w) and K_n(w). As is proved in Watanabe (2001a), the relation

\[ -\log\int\exp(-nK_n(w))\,\varphi(w)\,dw = -\log\int\exp(-nK(w))\,\varphi(w)\,dw + O_p(1) \]

holds independent of the parity of a statistical model. On the other hand, if β = 1/log n, then

\[ \mathbb{E}^{\beta}_{w}[nK_n(w)] = \frac{\int nK(w)\exp(-n\beta K(w))\,\varphi(w)\,dw}{\int\exp(-n\beta K(w))\,\varphi(w)\,dw} + U_n\sqrt{\log n} + O_p(1). \]

If the parity is odd, then U_n = 0; otherwise U_n is not equal to zero in general. This fact shows that the parity reflects a difference in the fluctuation of the likelihood function.

8. Conclusion

We proposed a widely applicable Bayesian information criterion (WBIC), which can be used even if the true distribution is unrealizable by and singular for the statistical model, and proved that WBIC has the same asymptotic expansion as the Bayes free energy. We also developed a method for estimating real log canonical thresholds even when the true distribution is unknown.

Acknowledgments

This research was partially supported by the Ministry of Education, Science, Sports and Culture in Japan, Grant-in-Aid for Scientific Research 23500172.

References

T. W. Anderson. Estimating linear restrictions on regression coefficients for multivariate normal distributions. Annals of Mathematical Statistics, 22:327-351, 1951.

M. Aoyagi and K. Nagata. Learning coefficient of generalization error in Bayesian estimation and Vandermonde matrix-type singularity. Neural Computation, 24(6):1569-1610, 2012.

M. Aoyagi and S. Watanabe.
Stochastic complexities of reduced rank regression in Bayesian estimation. Neural Networks, 18(7):924-933, 2005.

M. F. Atiyah. Resolution of singularities and division of distributions. Communications on Pure and Applied Mathematics, 23:145-150, 1970.

I. N. Bernstein. Analytic continuation of distributions with respect to a parameter. Functional Analysis and its Applications, 6(4):26-40, 1972.

M. Drton. Likelihood ratio tests and singularities. The Annals of Statistics, 37:979-1012, 2009.

M. Drton. Reduced rank regression. In Workshop on Singular Learning Theory. American Institute of Mathematics, 2010.

M. Drton, B. Sturmfels, and S. Sullivant. Lectures on Algebraic Statistics. Birkhäuser, Berlin, 2009.

I. M. Gelfand and G. E. Shilov. Generalized Functions. Volume I: Properties and Operations. Academic Press, San Diego, 1964.

I. J. Good. The Estimation of Probabilities: An Essay on Modern Bayesian Methods. MIT Press, Cambridge, 1965.

H. Hironaka. Resolution of singularities of an algebraic variety over a field of characteristic zero. Annals of Mathematics, 79:109-326, 1964.

M. Kashiwara. B-functions and holonomic systems. Inventiones Mathematicae, 38:33-53, 1976.

F. J. Király, P. von Bünau, F. C. Meinecke, D. A. J. Blythe, and K.-R. Müller. Algebraic geometric comparison of probability distributions. Journal of Machine Learning Research, 13:855-903, 2012.

J. Kollár. Singularities of pairs. In Algebraic Geometry, Santa Cruz 1995, Proceedings of Symposia in Pure Mathematics, volume 62, pages 221-286. American Mathematical Society, 1997.

S. Lin. Algebraic Methods for Evaluating Integrals in Bayesian Statistics. PhD thesis, University of California, Berkeley, 2011.

D. Rusakov and D. Geiger. Asymptotic model selection for naive Bayesian networks. Journal of Machine Learning Research, 6:1-35, 2005.

M. Saito. On real log canonical thresholds. arXiv:0707.2308v1, 2007.

M. Sato and T. Shintani. On zeta functions associated with prehomogeneous vector spaces. Annals of Mathematics, 100:131-170, 1974.

G. Schwarz. Estimating the dimension of a model. The Annals of Statistics, 6(2):461-464, 1978.

A. Varchenko. Newton polyhedra and estimates of oscillatory integrals. Functional Analysis and its Applications, 10(3):13-38, 1976.

S. Watanabe. Algebraic analysis for singular statistical estimation. Lecture Notes in Computer Science, 1720:39-50, 1999.

S. Watanabe. Algebraic analysis for nonidentifiable learning machines. Neural Computation, 13(4):899-933, 2001a.

S. Watanabe. Algebraic geometrical methods for hierarchical learning machines. Neural Networks, 14(8):1049-1060, 2001b.

S. Watanabe. Algebraic Geometry and Statistical Learning Theory. Cambridge University Press, Cambridge, UK, 2009.

S. Watanabe. Asymptotic learning curve and renormalizable condition in statistical learning theory. Journal of Physics: Conference Series, 233(1):012014, 2010a. doi:10.1088/1742-6596/233/1/012014.

S. Watanabe. Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research, 11:3571-3591, 2010b.

K. Yamazaki and S. Watanabe. Singularities in mixture models and upper bounds of stochastic complexity. Neural Networks, 16(7):1029-1038, 2003.

K. Yamazaki and S. Watanabe. Singularities in complete bipartite graph-type Boltzmann machines and upper bounds of stochastic complexities. IEEE Transactions on Neural Networks, 16(2):312-324, 2005.

P. Zwiernik.
Asymptotic model selection and identifiability of directed tree models with hidden variables. CRiSM report, 2010.

P. Zwiernik. An asymptotic behaviour of the marginal likelihood for general Markov models. Journal of Machine Learning Research, 12:3283-3310, 2011.