
Expectation–maximization algorithm

In statistics, an expectation–maximization (EM) algorithm is an iterative method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate of the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found in the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.
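The E/M alternation described above can be sketched for a concrete case: a two-component one-dimensional Gaussian mixture, where the latent variable is each point's component membership. This is a minimal illustration, assuming unit variances for both components so that only the means and the mixing weight are estimated; the function name and initialization scheme are illustrative choices, not a standard API.

```python
import math
import random

def em_gaussian_mixture(data, iters=50):
    """Estimate means (mu1, mu2) and mixing weight pi of a 1-D
    two-component Gaussian mixture, assuming unit variances."""
    mu1, mu2, pi = min(data), max(data), 0.5  # crude initialization
    for _ in range(iters):
        # E step: posterior responsibility of component 1 for each point,
        # evaluated at the current parameter estimates.
        resp = []
        for x in data:
            p1 = pi * math.exp(-0.5 * (x - mu1) ** 2)
            p2 = (1 - pi) * math.exp(-0.5 * (x - mu2) ** 2)
            resp.append(p1 / (p1 + p2))
        # M step: the maximizers of the expected log-likelihood are
        # responsibility-weighted averages of the data.
        total = sum(resp)
        mu1 = sum(r * x for r, x in zip(resp, data)) / total
        mu2 = sum((1 - r) * x for r, x in zip(resp, data)) / (len(data) - total)
        pi = total / len(data)
    return mu1, mu2, pi

# Usage: data drawn from two well-separated components.
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]
mu1, mu2, pi = em_gaussian_mixture(data)
```

Each iteration re-estimates the latent membership probabilities (E step) and then the parameters (M step), which is exactly the alternation the definition describes; on this data the estimated means converge near the true values 0 and 5.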