
Contributions to Deep Learning Models - RiuNet
... The goal of this thesis is to present some contributions to the Deep Learning framework, particularly focused on computer vision problems dealing with images. These contributions can be summarized in two novel methods proposed: a new regularization technique for Restricted Boltzmann Machines called ...
Financial Time Series Forecasting Using Improved Wavelet Neural
... [85] proposes a GRANN-ARIMA hybrid model which combines a non-linear Grey Relational Artificial Neural Network with a linear ARIMA model for time series forecasting. The experimental results indicate that the hybrid method outperforms ARIMA, Multiple Regression, GRANN, MARMA, MR ANN, ARIMA and traditiona ...
How We're Predicting AI—or Failing To
... they are diverse. Starting with Turing's initial estimation of a 30% pass rate on the Turing test by the year 2000 (Turing 1950), computer scientists, philosophers and journalists have never been shy to offer their own definite prognostics, claiming AI to be impossible (Jacquette 1987) or just around the ...
Wrappers for feature subset selection
... maximal. An optimal feature subset need not be unique because it may be possible to achieve the same accuracy using different sets of features (e.g., when two features are perfectly correlated, one can be replaced by the other). By definition, to get the highest possible accuracy, the best subset th ...
link - Worcester Polytechnic Institute
... relevance. At their best they have been shown to achieve the same educational gain as one-on-one human tutoring (Koedinger et al., 1997). They have also received the attention of the White House, which mentioned a tutoring platform named ASSISTments in its National Educational Technology Plan (Departmen ...
Handling the Class Imbalance Problem in Binary Classification
... Natural processes often generate some observations more frequently than others. These processes result in unbalanced distributions, which cause classifiers to be biased toward the majority class, especially because most classifiers assume a normal distribution. The quantity and the diversity of imba ...
Consensus group stable feature selection
... We study SVM-RFE [9], an algorithm well known for its excellent generalization performance on high-dimensional small-sample data. The main process of SVM-RFE is to recursively eliminate features based on SVM, using the coefficients of the optimal decision boundary to measure the relevance of each fe ...
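The recursive elimination loop that this snippet describes can be sketched as follows. A plain logistic-regression classifier trained by gradient descent stands in for the linear SVM of SVM-RFE; at each round the feature whose decision-boundary coefficient has the smallest magnitude is dropped. The data, learning rate, and epoch count are illustrative assumptions, not details from the paper.

```python
import numpy as np

def rfe(X, y, n_keep=1, epochs=200, lr=0.1):
    """Recursive feature elimination with a linear stand-in model.

    Trains a simple logistic-regression classifier (standing in for the
    linear SVM of SVM-RFE) and repeatedly drops the feature whose
    coefficient has the smallest magnitude.
    """
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        Xs = X[:, remaining]
        w = np.zeros(Xs.shape[1])
        b = 0.0
        for _ in range(epochs):                      # plain gradient descent
            p = 1.0 / (1.0 + np.exp(-(Xs @ w + b)))  # sigmoid predictions
            g = p - y                                # gradient of the log-loss
            w -= lr * Xs.T @ g / len(y)
            b -= lr * g.mean()
        drop = int(np.argmin(np.abs(w)))             # least relevant feature
        remaining.pop(drop)
    return remaining

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 2] > 0).astype(float)   # only feature 2 carries signal
print(rfe(X, y))                  # expected to keep only the informative feature
```

Using the coefficients of the trained boundary as a relevance measure is the key idea; a real SVM-RFE run would retrain an actual SVM at every elimination step.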
Optimal Ensemble Construction via Meta-Evolutionary
... that trains each classifier on a randomly drawn training set. Each classifier’s training set consists of the same number of examples randomly drawn from the original training set, with the probability of drawing any given example being equal. Samples are drawn with replacement, so that some examples ...
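The sampling scheme described here (equal-probability draws with replacement, so some examples repeat and others are left out) is easy to demonstrate; the dataset size and seed below are arbitrary. A well-known consequence is that each bootstrap replicate contains roughly 63.2% of the distinct original examples.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
train_idx = np.arange(n)

# One bootstrap replicate: n draws, uniform probability, with replacement,
# so some examples appear several times and others not at all.
sample = rng.choice(train_idx, size=n, replace=True)

unique_frac = np.unique(sample).size / n
print(f"unique examples in replicate: {unique_frac:.3f}")
# As n grows this fraction approaches 1 - 1/e, about 0.632
```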
Online Full Text
... Various techniques exist for filtering spam. These methods can be generally categorized into techniques that have been influenced by artificial intelligence and machine learning, and other techniques. These other techniques tend to be older and less robust. For example, use of white lists, black lis ...
Noise Tolerant Data Mining
... changed data entries make the succeeding data mining algorithms unable to discover the genuine knowledge models. For many content-sensitive domains, such as medical, financial, or security databases, this kind of method is simply not a good option. Second, most noise handling methods take th ...
A Bayes Optimal Approach for Partitioning the Values of Categorical
... algorithm first sorts the categories according to the probability of the first class value, and then searches for the best split in this sorted list. This algorithm has a time complexity of O(I log(I)), where I is the number of categories. Based on the ideas presented in (Lechevallier, 1990; Fulton ...
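The sort-then-scan procedure can be sketched as follows for a binary target: count each category's class distribution, sort the categories by the probability of the first class, and evaluate only the I-1 cut points along that order. Gini impurity is used as the split score here as an illustrative choice (the snippet does not state the cited algorithm's exact criterion), and the toy data is invented.

```python
from collections import Counter

def gini(n_total, n_class1):
    """Gini impurity of a partition with the given class counts."""
    p = n_class1 / n_total
    return 1 - p**2 - (1 - p)**2

def best_binary_split(cats, labels):
    """Best two-way grouping of categorical values for a binary target.

    Sorts categories on P(class 1), then scans the I-1 cuts in that
    order (O(I log I) after counting), scoring each split with the
    size-weighted Gini impurity.
    """
    n, n1 = Counter(), Counter()
    for c, y in zip(cats, labels):
        n[c] += 1
        n1[c] += y
    order = sorted(n, key=lambda c: n1[c] / n[c])   # sort by P(class 1)

    total, total1 = sum(n.values()), sum(n1.values())
    best_score, best_cut = None, None
    left_n = left_1 = 0
    for i in range(len(order) - 1):                 # scan the I-1 cut points
        left_n += n[order[i]]
        left_1 += n1[order[i]]
        right_n, right_1 = total - left_n, total1 - left_1
        score = (left_n * gini(left_n, left_1)
                 + right_n * gini(right_n, right_1)) / total
        if best_score is None or score < best_score:
            best_score, best_cut = score, set(order[:i + 1])
    return best_cut, best_score

cats   = ["a", "a", "b", "b", "b", "c", "c", "d", "d", "d"]
labels = [ 0,   0,   1,   1,   0,   1,   1,   0,   0,   0 ]
print(best_binary_split(cats, labels))   # groups {a, d} against {b, c}
```

The sort is what brings the complexity down: for a two-class target, the optimal binary partition is always a prefix of this sorted order, so only I-1 splits need scoring instead of all 2^(I-1) groupings.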
Title An Evolutionary Approach to Automatic Kernel Construction
... This particular kernel tree was generated from experiments on the Ionosphere dataset. The diagram shows that the kernel tree is split into two parts, the vector and the scalar tree. The inputs to the vector tree are the two samples, x and z, for which the kernel is being evaluated. These inputs are ...
Ensemble Learning Techniques for Structured
... classification models such as decision trees, artificial neural networks, Naïve Bayes, as well as many other classifiers (Kim, 2009). Ensemble learning, based on aggregating the results from multiple models, is a more sophisticated approach for increasing model accuracy as compared to the traditiona ...
no - CENG464
... This attribute minimizes the information needed to classify the tuples in the resulting partitions and reflects the least randomness or impurity in these partitions ...
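The quantity being minimized, i.e. the expected information still needed to classify the tuples after partitioning on an attribute, can be computed directly; the tiny weather-style dataset below is invented for illustration.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def expected_info(rows, labels, attr):
    """Info_attr(D): size-weighted entropy of the partitions induced by
    `attr`. The attribute minimizing this value maximizes information gain
    and yields the least impure partitions."""
    n = len(rows)
    parts = {}
    for r, y in zip(rows, labels):
        parts.setdefault(r[attr], []).append(y)
    return sum(len(p) / n * entropy(p) for p in parts.values())

rows = [{"outlook": "sun",  "windy": True},
        {"outlook": "sun",  "windy": False},
        {"outlook": "rain", "windy": True},
        {"outlook": "rain", "windy": False}]
labels = ["no", "no", "yes", "yes"]

# "outlook" partitions the labels perfectly: 0 bits of information remain,
# so it is the attribute a decision-tree inducer would select here.
print(expected_info(rows, labels, "outlook"))  # 0.0
print(expected_info(rows, labels, "windy"))    # 1.0
```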
Matching Conflicts: Functional Validation of Agents
... may only be able to transform input vectors of certain dimensions. The actual numerical computations carried out vary from algorithm to algorithm, so that different round-off errors are accumulated, leading to slightly different answers. Moreover, different numerical implementations of some basi ...
Combining Clustering with Classification for Spam Detection in
... clustering to reduce the training time of a classifier when dealing with large data sets. In particular, while SVM classifiers (see [3] for a tutorial) have proved to be a great success in many areas, their training time is at least O(N^2) for training data of size N, which makes them non-favourab ...
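The reduction idea, replacing the N training points with a much smaller set of cluster centroids before training the O(N^2) classifier, can be sketched as follows. The minimal k-means implementation, the blob data, and the choice of k are illustrative assumptions; the actual SVM training step on the reduced set is omitted.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means; a library implementation would normally be used."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        assign = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.array([X[assign == j].mean(0) if (assign == j).any()
                            else centers[j] for j in range(k)])
    return centers

rng = np.random.default_rng(1)
# Two well-separated classes, 200 points each. Instead of training on all
# 400 points (at least O(N^2) for an SVM), train on per-class centroids.
X_pos = rng.normal(loc=+3.0, size=(200, 2))
X_neg = rng.normal(loc=-3.0, size=(200, 2))

k = 5
reduced_X = np.vstack([kmeans(X_pos, k), kmeans(X_neg, k)])
reduced_y = np.array([+1] * k + [-1] * k)
print(reduced_X.shape)   # the classifier now sees 10 points instead of 400
```

Clustering each class separately, as here, keeps the reduced set label-consistent; the trade-off is that fine structure near the decision boundary may be lost when k is too small.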
A Stochastic Algorithm for Feature Selection in Pattern Recognition
... The second approach (wrapper methods) is computationally demanding, but often more accurate. A wrapper algorithm explores the space of feature subsets to optimize the induction algorithm that uses the subset for classification. These penalization-based methods face a combinatorial challenge w ...
Inductive Intrusion Detection in Flow-Based
... Sequential forward selection, which resulted in the selection of three features, namely BAD. After each iteration, the feature yielding the best intermediate error rate is added to the list of features. Sequential backward elimination, which resu ...
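The iteration described in the first caption (start empty and, at each step, add the feature yielding the best intermediate error rate) can be sketched with a leave-one-out 1-nearest-neighbour error as the wrapper's scoring function. The scorer, the synthetic data, and the feature count are assumptions for illustration, not the thesis's actual setup.

```python
import numpy as np

def nn_error(X, y):
    """Leave-one-out 1-nearest-neighbour error rate (the wrapper's scorer)."""
    D = ((X[:, None] - X[None]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)          # a point may not be its own neighbour
    return float((y[D.argmin(1)] != y).mean())

def forward_select(X, y, n_feats):
    """Greedy sequential forward selection: at each iteration, add the
    candidate feature giving the best intermediate error rate."""
    chosen = []
    while len(chosen) < n_feats:
        scores = {f: nn_error(X[:, chosen + [f]], y)
                  for f in range(X.shape[1]) if f not in chosen}
        chosen.append(min(scores, key=scores.get))
    return chosen

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 6))
y = (X[:, 1] + X[:, 4] > 0).astype(int)   # only features 1 and 4 matter
print(sorted(forward_select(X, y, 2)))    # the two informative features
```

Sequential backward elimination, mentioned in the second caption, is the mirror image: start from the full set and repeatedly remove the feature whose removal hurts the error rate least.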
sociallocker - Projectsgoal
... Thereafter, we propose a dual training (DT) algorithm and a dual prediction (DP) algorithm to make use of the original and reversed samples in pairs for training a statistical classifier and making predictions. In DT, the classifier is learnt by maximizing a combination of likelihoods of ...
Baseball Prediction Using Ensemble Learning by Arlo Lyle (Under
... In this thesis, I explored the use of machine learning techniques for baseball predictions as opposed to the purely statistical methods that currently dominate the landscape. While the idea of using machine learning to predict players’ statistics is not a new one, as seen by the Vladimir system, the ...
Big Data Analytics Using Neural networks
... Sharma, Chetan, "Big Data Analytics Using Neural networks" (2014). Master's Projects. Paper 368. ...
Technical Note Naive Bayes for Regression
... Why does naive Bayes perform well even when the independence assumption is seriously violated? Most likely it owes its good performance to the zero-one loss function used in classification (Domingos & Pazzani, 1997). This function defines the error as the number of incorrect predictions. Unlike othe ...
Medical Diagnosis with C4.5 Rule Preceded by Artificial
... artificial neural network ensemble, it is still impressive that Table 1 indicates the generalization ability of C4.5 Rule-PANE is about 23% (((.2726-.2306)/.2726 + (.1581-.1034)/.1581 + (.0567-.0460)/.0567) / 3 = .2296) better than that of the well-established method C4.5 Rule on these three case stu ...
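The parenthesized arithmetic, the mean relative error reduction over the three case studies, can be checked directly:

```python
# Each pair is (C4.5 Rule error, C4.5 Rule-PANE error) for one case study,
# taken from the figures quoted in the snippet above.
pairs = [(0.2726, 0.2306), (0.1581, 0.1034), (0.0567, 0.0460)]
reductions = [(base - ours) / base for base, ours in pairs]
mean_reduction = sum(reductions) / len(reductions)
print(round(mean_reduction, 4))   # 0.2296, i.e. about 23%
```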
A Comparative Analysis of Classification with Unlabelled Data using
... reality, data is always in short supply. On the one hand, we would like to use as much of the data as possible for training. But on the other hand, we want to use as much of it as possible for testing. There already exist some techniques to deal with this issue, and it is still controversial till ...
portable document (.pdf) format
... Technical Details and an Applied Example Many researchers have experienced nonconvergence errors where, for one reason or another, a maximum likelihood solution cannot be calculated or does not exist. Conditions for nonconvergence include sparseness of data, multiple maxima, unspecified boundary c ...