
Expectation–maximization algorithm

In statistics, an expectation–maximization (EM) algorithm is an iterative method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between an expectation (E) step, which constructs the expected log-likelihood evaluated using the current estimate of the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found in the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.