
... The smoothing search space method reconstructs the search space by filling in local minimum points, thereby reducing their influence. In this paper, we first design two smoothing operators to reconstruct the search space by filling the minimum ‘traps’ (points) based on the relationship between d ...
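The truncated excerpt does not define the two operators. As a generic illustration of the idea of filling minimum ‘traps’ (a hedged sketch in the style of classic Gu–Huang search-space smoothing, not the paper's operators):

```python
import numpy as np

def smooth_landscape(d, alpha):
    """Generic search-space smoothing: pull all values toward the mean so
    shallow local minima ('traps') are filled. alpha > 1 smooths more;
    alpha = 1 recovers the original landscape (a typical schedule starts
    with large alpha and anneals toward 1)."""
    d = np.asarray(d, dtype=float)
    d_bar = d.mean()
    scale = np.abs(d - d_bar).max() or 1.0
    dev = (d - d_bar) / scale            # deviations normalized to [-1, 1]
    return d_bar + scale * np.sign(dev) * np.abs(dev) ** alpha

rough = [3.0, 1.0, 4.0, 0.5, 5.0]
print(smooth_landscape(rough, alpha=3.0))  # deviations shrink, minima fill in
```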
OutRank: A Graph-Based Outlier Detection Framework
... This normalization ensures that the elements of each row of the transition matrix sum to 1, which is an essential property of a stochastic matrix. It is also assumed that the transition probabilities in S do not change over time. In general, the transition matrix S computed from data might not be ir ...
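A minimal sketch of the row normalization described here, plus the damping term commonly used to restore irreducibility (the damping step is an assumption, in the style of PageRank, not a quote from the paper):

```python
import numpy as np

def to_stochastic(w):
    """Row-normalize a nonnegative similarity matrix so each row sums to 1,
    yielding the transition matrix of a random walk on the graph."""
    row_sums = w.sum(axis=1, keepdims=True)
    return w / row_sums  # assumes no all-zero rows

def damped(s, alpha=0.85):
    """Mix the walk with uniform teleportation so the chain is irreducible
    and a unique stationary distribution exists."""
    n = s.shape[0]
    return alpha * s + (1 - alpha) * np.ones((n, n)) / n
```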
GMove: Group-Level Mobility Modeling Using Geo-Tagged Social Media
... idea of group-level mobility modeling. The key is to group the users that share similar moving behaviors, e.g., the students studying at the same university. By aggregating the movements of like-behaved users, GMove can largely alleviate data sparsity without compromising the within-group data consi ...
Efficient Discovery of Error-Tolerant Frequent Itemsets in High Dimensions
... there exist at least r·n transactions in which at least a fraction 1−ε of the items from E are present. Problem Statement: Given a sparse binary database D of n transactions (rows) and d items (columns), error tolerance ε > 0, and minimum support r in [0,1], determine all error-tolerant itemsets ( ...
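With the symbols as reconstructed above, the ETI condition can be checked directly; `db`, `itemset`, `eps`, and `r` are illustrative names, not the paper's:

```python
def is_error_tolerant(db, itemset, eps, r):
    """db: list of transactions, each a set of items.
    Itemset E is error-tolerant if at least r*len(db) transactions
    contain at least a fraction (1 - eps) of E's items."""
    need = (1 - eps) * len(itemset)
    hits = sum(1 for t in db if len(t & itemset) >= need)
    return hits >= r * len(db)

db = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a", "c", "d"}]
# True: every row contains at least 2 of the 3 items, and 4 >= 0.5 * 4
print(is_error_tolerant(db, {"a", "b", "c"}, eps=1/3, r=0.5))
```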
IOSR Journal of Computer Engineering (IOSR-JCE)
... unless the profit margin is high. Furthermore, within the set of transactions that contain item A, we want to know how often they contain product B as well; this is the role of the rule’s confidence. If we introduce the term frequent for an itemset X that meets the criterion that its support is greater ...
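A small sketch of support and confidence as described; the toy database and function names are this example's assumptions:

```python
def support(db, itemset):
    """Fraction of transactions containing every item in itemset."""
    return sum(1 for t in db if itemset <= t) / len(db)

def confidence(db, antecedent, consequent):
    """Of the transactions containing the antecedent, the fraction that
    also contain the consequent: supp(A | B together) / supp(A)."""
    return support(db, antecedent | consequent) / support(db, antecedent)

db = [{"A", "B"}, {"A", "B", "C"}, {"A"}, {"B", "C"}]
print(confidence(db, {"A"}, {"B"}))  # 2/3: two of the three A-transactions contain B
```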
A Probabilistic Framework for Semi-Supervised Clustering
... We propose a principled probabilistic framework based on Hidden Markov Random Fields (HMRFs) for semi-supervised clustering that combines the constraint-based and distance-based approaches in a unified model. We motivate an objective function for semi-supervised clustering derived from the posterior ...
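The excerpt truncates before the objective itself; HMRF-based semi-supervised clustering objectives of this kind typically combine a distortion term with penalties for violated pairwise constraints, along the lines of the following hedged reconstruction (not a quotation from the paper):

```latex
\mathcal{J} \;=\; \sum_{x_i \in X} D(x_i, \mu_{l_i})
\;+\; \sum_{(x_i, x_j) \in M,\; l_i \neq l_j} w_{ij}
\;+\; \sum_{(x_i, x_j) \in C,\; l_i = l_j} \bar{w}_{ij}
```

Here D is the (possibly learned) distortion measure, the l_i are cluster assignments, and w_ij and \bar{w}_ij are the costs of violating must-link (M) and cannot-link (C) constraints.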
High Dimensional Similarity Joins: Algorithms and Performance
... 2 to 3, to finish the processing of the 1 to 2 range, and so on. Corresponding ranges in both files can be processed via the plane sweep algorithm. Figure 2b illustrates the two-dimensional version of the algorithm. Generalizing this approach to d-dimensional spaces for data sets involving O(n) mul ...
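A minimal sketch of the plane-sweep building block the excerpt invokes, over two sorted one-dimensional projections; the names and eps threshold are illustrative, and in a full similarity join each reported pair would still be verified on the remaining dimensions:

```python
def plane_sweep_pairs(xs, ys, eps):
    """Report index pairs (i, j) with |xs[i] - ys[j]| <= eps by sweeping
    two sorted lists in tandem, in O(n + output) time."""
    pairs, start = [], 0
    for i, x in enumerate(xs):
        # advance the window's left edge past points that fell behind
        while start < len(ys) and ys[start] < x - eps:
            start += 1
        j = start
        while j < len(ys) and ys[j] <= x + eps:
            pairs.append((i, j))
            j += 1
    return pairs

print(plane_sweep_pairs([0.1, 0.5, 2.0], [0.2, 0.6, 1.9], eps=0.15))
```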
Expectation–maximization algorithm

In statistics, an expectation–maximization (EM) algorithm is an iterative method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables. The EM iteration alternates between performing an expectation (E) step, which creates a function for the expectation of the log-likelihood evaluated using the current estimate for the parameters, and a maximization (M) step, which computes parameters maximizing the expected log-likelihood found on the E step. These parameter estimates are then used to determine the distribution of the latent variables in the next E step.
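As a concrete illustration, here is a minimal EM sketch for a two-component one-dimensional Gaussian mixture; the model choice, initialization, and variable names are this sketch's assumptions, not part of the article:

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """EM for a 2-component 1-D Gaussian mixture: latent variables are
    the unknown component memberships of each point."""
    rng = np.random.default_rng(0)
    # initial guesses for mixture weights, means, and variances
    pi, mu, var = np.array([0.5, 0.5]), rng.choice(x, 2), np.array([x.var()] * 2)
    for _ in range(n_iter):
        # E step: posterior responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: re-estimate parameters to maximize the expected log-likelihood
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

x = np.concatenate([np.random.default_rng(1).normal(-2, 1.0, 500),
                    np.random.default_rng(2).normal(3, 0.5, 500)])
print(em_gmm_1d(x))
```

Each iteration never decreases the observed-data log-likelihood, which is why EM converges, though only to a local optimum that depends on the initialization.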