Recognizing and Discovering Activities of Daily Living in Smart ...

... of labeled data, recognition methods can be categorized as either supervised or unsupervised. Designing a comprehensive activity recognition system that works in a real-world setting is extremely challenging because of the difficulty computers have in processing the complex nature of human behaviors ...

... This thesis considers computer-assisted troubleshooting of heavy vehicles such as trucks and buses. In this setting, the person that is troubleshooting a vehicle problem is assisted by a computer that is capable of listing possible faults that can explain the problem and gives recommendations of whi ...

Aalborg Universitet Learning Bayesian Networks with Mixed Variables Bøttcher, Susanne Gammelgaard

... Paper I addresses these issues for Bayesian networks with mixed variables. In this paper, the focus is on learning Bayesian networks, where the joint probability distribution is conditional Gaussian. For an introductory text on learning Bayesian networks, see Heckerman (1999). To learn the parameter ...

Brief Survey on Computational Solutions for Bayesian Inference

... different types of junction trees were evaluated: linear junction trees, balanced trees, and random junction trees. Xia et al. [45], [42], [43], [46], [41], [44] followed up in the study of parallel implementations for exact inference in junction trees deployed in CPUs until 2011. In 2010, Jeon, Xia e ...

The Nonparametric Kernel Bayes Smoother

... norm, which is useful for various machine learning algorithms [12, 11, 9, 20, 30]; and (ii) estimation is relatively easy compared to nonparametric density estimation, which can be problematic when the domain is high-dimensional or structured. In particular, the kernel mean map [24], defined as an e ...

Context-Dependent Incremental Intention Recognition through Bayesian Network Model Construction

... In this work, we use Bayesian Networks (BN) as the intention recognition model. The flexibility of BNs for representing probabilistic dependencies and the efficiency of inference methods for BN have made them an extremely powerful and natural tool for problem solving under uncertainty [16, 17]. We p ...

Probability Elicitation Methods for Avoiding Biases: An Exposition

... a long-term program, it naturally involves multiple decisions, and to keep the analysis cost to a minimum, only a critical decision (and a subset of crucial decisions) was processed. The set of critical and crucial decisions encompasses the building timeframe and the nature of the major test facilitie ...

Probabilistic Topic Models - UCI Cognitive Science Experiments

... ways: as a generative model and as a problem of statistical inference. On the left, the generative process is illustrated with two topics. Topics 1 and 2 are thematically related to money and rivers and are illustrated as bags containing different distributions over words. Different documents can be ...

Nonsymmetric Bootstrap Confidence Intervals Using SAS/IML® Software on a PC

... sampling distribution of θ̂ directly: The actual study would be replicated a large number of times, say 200. With each replication, a new θ̂i would be estimated and stored in a SAS data set. The distribution of θ̂ could be visually inspected using PROC CHART with the HBAR command, which produces a hor ...

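The replicate-and-inspect workflow described above translates directly into a percentile bootstrap. The sketch below is a minimal Python analogue of the SAS/IML procedure; the data values and the choice of the mean as the statistic are illustrative assumptions, not from the paper. Because the interval endpoints are read off the empirical percentiles, the interval need not be symmetric about the point estimate.

```python
import random
import statistics

def bootstrap_percentile_ci(data, stat=statistics.mean, n_rep=200, alpha=0.05, seed=0):
    """Resample `data` with replacement n_rep times, compute `stat` on each
    replicate, and return the (alpha/2, 1 - alpha/2) percentile interval."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)]) for _ in range(n_rep))
    lo = reps[int((alpha / 2) * n_rep)]
    hi = reps[int((1 - alpha / 2) * n_rep) - 1]
    return lo, hi

data = [2.1, 3.4, 1.8, 5.0, 2.7, 3.9, 4.2, 2.2, 3.1, 4.8]  # illustrative sample
lo, hi = bootstrap_percentile_ci(data)
```

The sorted replicate statistics play the role of the SAS data set inspected with PROC CHART: the interval is simply the middle 95 percent of that empirical distribution.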

A Game-theoretic Machine Learning Approach for Revenue

... To overcome the above drawbacks, we propose a novel approach, which can naturally combine game theory and machine learning using a bilevel optimization framework, so as to simultaneously avoid the strong assumptions and handle the second-order effect. For ease of reference, we call the approach a g ...

Bayesian Reasoning - Bayesian Intelligence

... {Ai} turns out to be the case, we can equivalently talk about which state xi the random variable X takes, which we write X = xi. The set of states a variable X can take forms its state space, written ΩX, and its size (or arity) is |ΩX|. The discussion thus far has been implicitly of discrete vari ...

Pachinko Allocation: DAG-Structured Mixture Models of Topic

... give high likelihood to the training data, and the highest probability words in each topic provide keywords that briefly summarize the themes in the text collection. In addition to textual data (including news articles, research papers and email), topic models have also been applied to images, biolo ...

Statistical Causal Inference

... causal graph is also assumed to be complete in the sense that all of the causal relations among the specified variables are included in the graph. For example, the graph in Fig. 4 has no edge from Y to S, so it is only accurate if the level of nicotine stains does not in any way cause smoking behav ...

discrete variational autoencoders

... The smoothing distribution r(ζ|z) transforms the model into a continuous function of the distribution over z, and allows us to use Equations 2 and 3 directly to obtain low-variance stochastic approximations to the gradient. Given this expansion, we can simplify Equations 3 and 4 by dropping the depe ...

Representing and Querying Correlated Tuples in Probabilistic

... and intuitive querying semantics, and (c) they are easier to store and operate on. However, these models often make simplistic and highly restrictive assumptions about the data (e.g., complete independence among base tuples [19, 12]). In particular, they cannot easily model or handle dependencies/co ...

The MADP Toolbox 0.3

... and includes several example applications using the provided functionality. For instance, applications that use JESP or brute-force search to solve problems (specified as .dpomdp files) for a particular planning horizon. In this way, Dec-POMDPs can be solved directly from the command line. Furthermo ...

Mixed-effects modeling with crossed random effects for subjects and

... modeling problem, based on statistical theory and computational methods available at the time (e.g., Winer, 1971). This solution involved computing a quasi-F statistic which, in the simplest-to-use form, could be approximated by the use of a combined minimum-F statistic derived from separate partici ...

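The combined minimum-F statistic mentioned here is conventionally computed as min F′ = F1·F2 / (F1 + F2) from the by-participants (F1) and by-items (F2) analyses. A one-line sketch, with made-up F values purely for illustration:

```python
def min_f_prime(f1, f2):
    """Conservative lower bound on the quasi-F statistic, combining the
    by-participants (f1) and by-items (f2) F ratios."""
    return (f1 * f2) / (f1 + f2)

mf = min_f_prime(6.0, 3.0)  # illustrative F1 and F2 values
```

Note that min F′ is always smaller than either input F, which is what makes it a conservative criterion.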

Ordered Stick-Breaking Prior for Sequential MCMC Inference of

... for some 0.5 < µ < 1: νj = (1 − µ)νj−1 with ν1 = γ (γ > 0 as in Eq. (8)), so νj = (1 − µ)^(j−1) γ. We use γj = µνj, and hence γj = µ(1 − µ)^(j−1) γ. Thus we have only two hyper-parameters, µ and γ. Equivalence with DPMM. Note that formulation Eq. (9) trivially becomes equivalent to DPMM (Eq. (8)) in batch ...

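The hyper-parameter scheme above can be checked numerically: with νj = (1 − µ)^(j−1) γ and γj = µνj, the γj form a geometric series whose total mass is exactly γ. A small sketch, with µ and γ chosen arbitrarily within the stated ranges:

```python
mu, gamma = 0.7, 1.5  # arbitrary choices with 0.5 < mu < 1 and gamma > 0

def nu(j):
    # nu_j = (1 - mu)^(j-1) * gamma, unrolled from nu_j = (1 - mu) * nu_{j-1}, nu_1 = gamma
    return (1 - mu) ** (j - 1) * gamma

def gamma_j(j):
    # gamma_j = mu * nu_j = mu * (1 - mu)^(j-1) * gamma
    return mu * nu(j)

# the recurrence and the closed form agree
assert abs(nu(5) - (1 - mu) * nu(4)) < 1e-12

# geometric series: sum_j gamma_j = mu * gamma / (1 - (1 - mu)) = gamma
total = sum(gamma_j(j) for j in range(1, 200))
```

So the two hyper-parameters µ and γ fully determine the weight sequence, and γ retains its role as the total concentration mass.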

Relational Dynamic Bayesian Networks

... of such a Bayesian network would be infeasible. We use Relational Bayesian Networks (RBNs) to compactly represent the uncertainty in the system. The relational Bayesian network specifies the dependency between the predicates at the first-order level by using first-order expressions which include e ...

Getting Started with PROC LOGISTIC

... has on increasing or decreasing the probability that the dependent variable will achieve the value of one in the population from which the data are assumed to have been randomly sampled. The Odds Ratio: Exponentiation of the parameter estimate(s) for the independent variable(s) in the model by the nu ...

Dynamic Restart Policies - Association for the Advancement of

... independent from one another; in the analyses, no information is considered to be carried over from one run to the next.3 However, a careful analysis of informational relationships among multiple runs reveals that runs may be dependent in some scenarios: observing run i influences the probability di ...

Reinforcement Learning in the Presence of Rare Events

... I would like to gratefully acknowledge everyone who inspired and guided me throughout my academic career. Profs. Binay Bhattacharya and Bob Hadley encouraged me to pursue graduate studies, and I thank them for that. I thank Prof. Doina Precup, whose mind and door were always open. I could not imagin ...

Getting Started with PROC LOGISTIC

... LOGISTIC to the Hosmer and Lemeshow data set yielded a parameter estimate for the variable AGE of 0.0275; exponentiation of that estimate gives an odds ratio of 1.028. In this example, a one-unit (that is, one-year) increase in a patient’s age increases by 2.8 percent the odds that they will die (i.e. ...

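The arithmetic in this example is easy to reproduce: exponentiating the AGE coefficient of 0.0275 gives the odds ratio, and the percentage change in the odds per one-year increase follows directly. A sketch in Python rather than SAS:

```python
import math

beta_age = 0.0275                    # parameter estimate for AGE, as quoted above
odds_ratio = math.exp(beta_age)      # about 1.028
pct_change = (odds_ratio - 1) * 100  # about a 2.8 percent increase in the odds per year
```

For small coefficients, exp(β) − 1 ≈ β, which is why the 0.0275 estimate and the 2.8 percent figure are so close.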

Getting Started with PROC LOGISTIC

... intuitive and easily understood way to capture the relationship between the independent and dependent variables. This quantity is automatically portrayed in PROC LOGISTIC starting in Release 6.07; users with earlier SAS System releases can easily compute this quantity by hand. The odds ratio gives t ...

A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process with unobserved (hidden) states. An HMM can be represented as the simplest dynamic Bayesian network. The mathematics behind the HMM was developed by L. E. Baum and coworkers. It is closely related to earlier work on the optimal nonlinear filtering problem by Ruslan L. Stratonovich, who was the first to describe the forward-backward procedure.

In simpler Markov models (like a Markov chain), the state is directly visible to the observer, and therefore the state transition probabilities are the only parameters. In a hidden Markov model, the state is not directly visible, but the output, which depends on the state, is visible. Each state has a probability distribution over the possible output tokens; therefore the sequence of tokens generated by an HMM gives some information about the sequence of states. Note that the adjective 'hidden' refers to the state sequence through which the model passes, not to the parameters of the model; the model is still referred to as a 'hidden' Markov model even if these parameters are known exactly.

Hidden Markov models are especially known for their applications in temporal pattern recognition such as speech, handwriting, and gesture recognition, part-of-speech tagging, musical score following, partial discharges, and bioinformatics.

A hidden Markov model can be considered a generalization of a mixture model in which the hidden (or latent) variables that control which mixture component is selected for each observation are related through a Markov process rather than being independent of each other. Recently, hidden Markov models have been generalized to pairwise Markov models and triplet Markov models, which allow consideration of more complex data structures and the modelling of nonstationary data.
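As a concrete illustration of how a hidden state sequence leaves its trace in the observed tokens, here is a minimal forward-algorithm sketch for a two-state HMM; the transition, emission, and initial probabilities are arbitrary illustrative numbers, not taken from any particular model.

```python
# Two hidden states, two output tokens; all probabilities are illustrative.
A = [[0.7, 0.3],   # A[s][t]: probability of moving from hidden state s to state t
     [0.4, 0.6]]
B = [[0.9, 0.1],   # B[s][o]: probability that state s emits token o
     [0.2, 0.8]]
pi = [0.5, 0.5]    # initial distribution over hidden states

def forward(obs):
    """Forward algorithm: total probability of the observed token sequence,
    summed over every possible hidden-state path."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(2)) * B[t][o]
                 for t in range(2)]
    return sum(alpha)

p = forward([0, 1, 0])  # probability of observing tokens 0, 1, 0
```

Even though the states themselves are never observed, `p` aggregates over all 2³ hidden paths, which is exactly the sense in which the token sequence carries information about the state sequence.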