Evolutionary Optimization of Radial Basis Function Classifiers for
... vector of the network is “similar” (depending on the value of the radius) to the center of its basis function. The center of a basis function can, therefore, be regarded as a prototype of a hyperspherical cluster in the input space of the network. The radius of the cluster is given by the value of t ...
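The hyperspherical-cluster view in the excerpt above can be sketched with a Gaussian basis function; a minimal illustration in which the function name and the radius parameterization are assumptions, not taken from the paper:

```python
import math

def rbf_activation(x, center, radius):
    """Gaussian radial basis function: responds strongly when the
    input x lies within `radius` of the prototype `center`."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2 * radius ** 2))

center = [0.0, 0.0]
print(rbf_activation([0.1, 0.1], center, 1.0))  # near 1: inside the cluster
print(rbf_activation([5.0, 5.0], center, 1.0))  # near 0: far from the prototype
```

The center thus acts as the cluster prototype, and the radius controls how far an input may lie from it while still producing a significant activation.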
ÇUKUROVA UNIVERSITY INSTITUTE OF NATURAL AND APPLIED SCIENCES
... method they had performed a fine classification when a pair consisting of the spatial coordinate of the observation data in the observation space and its corresponding feature vector in the feature space is provided (Kubota et al., 2008). Anbeek et al. had developed a method that uses a KNN classification techni ...
Yarn tenacity modeling using artificial neural networks and
... the first term in the fitness function would be zero. The second term stands for the production cost function. As the cost increases, the fitness function increases, too. Regarding the value of K in Eq. 7, since the main objective is to reach the desired tenacity, K has to be determined in such a w ...
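A hypothetical reconstruction of the two-term fitness function described above; Eq. 7 itself is not shown in this excerpt, so the exact form, the cost model, and the default value of K are assumptions:

```python
def fitness(tenacity, desired, cost, K=0.01):
    """Hypothetical two-term fitness: the first term vanishes when
    the desired tenacity is reached; the second grows with production
    cost.  K weights the cost so matching tenacity stays the primary
    objective."""
    return abs(tenacity - desired) + K * cost

print(round(fitness(20.0, 20.0, cost=5.0), 3))  # 0.05: only the cost term remains
print(round(fitness(18.0, 20.0, cost=5.0), 3))  # 2.05: tenacity error dominates
```

With K small, any solution that misses the desired tenacity is penalized far more than one that merely costs more, which is the balance the excerpt describes.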
Reliable prediction of T-cell epitopes using neural networks with novel sequence representations
... The details of this encoding are described later in section 3.3. The sparse and the Blosum sequence-encoding schemes constitute two different approaches to representing sequence information to the neural network. In the sparse encoding the neural network is given very precise information about the s ...
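The sparse encoding described above amounts to one-hot vectors, one per residue; a minimal sketch in which the alphabet ordering and the function name are illustrative:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues

def sparse_encode(peptide):
    """One-hot ('sparse') encoding: each residue becomes a 20-dim
    binary vector with a single 1 at the residue's index."""
    encoding = []
    for residue in peptide:
        vec = [0] * len(AMINO_ACIDS)
        vec[AMINO_ACIDS.index(residue)] = 1
        encoding.extend(vec)
    return encoding

x = sparse_encode("ACD")
print(len(x))   # 60 = 3 residues x 20
print(x[:20])   # first residue 'A': a 1 in position 0, zeros elsewhere
```

A Blosum encoding would instead replace each one-hot vector with the residue's row of substitution scores, trading exact identity for information about which substitutions are common.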
Classification with Incomplete Data Using Dirichlet Process Priors
... p(xm |xo ) is learned, various imputation methods may be performed. As a Monte Carlo approach, Bayesian multiple imputation (MI) (Rubin, 1987) is widely used, where multiple (M > 1) samples from p(xm |xo ) are imputed to form M “complete” data sets, with the complete-data algorithm applied on each, ...
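The multiple-imputation procedure can be sketched as follows; the sampler standing in for a learned p(x_m | x_o), and all names, are illustrative:

```python
import random
import statistics

def multiple_imputation(observed, n_missing, sample_missing, analyze, M=5):
    """Multiple-imputation sketch: draw M samples of the missing
    values from p(x_m | x_o), run the complete-data analysis on each
    'completed' data set, then pool the M results."""
    results = []
    for _ in range(M):
        completed = observed + [sample_missing() for _ in range(n_missing)]
        results.append(analyze(completed))
    return statistics.mean(results)

random.seed(0)
observed = [1.0, 2.0, 3.0]
# stand-in for a learned p(x_m | x_o): here just a fixed normal draw
pooled = multiple_imputation(observed, n_missing=2,
                             sample_missing=lambda: random.gauss(2.0, 0.5),
                             analyze=statistics.mean, M=10)
print(round(pooled, 2))  # pooled estimate of the mean
```

Pooling the M analyses (rather than analyzing a single imputed data set) is what lets MI propagate the uncertainty about the missing values into the final estimate.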
A Closest Fit Approach to Missing Attribute Values in Preterm Birth
... given case with a missing attribute value, we may look for the closest fitting cases within the same concept as the case with the missing attribute value, or in all concepts, i.e., among all cases. The former algorithm is called concept closest fit; the latter is called global closest fit. S ...
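A minimal sketch of global closest fit as described above; the distance measure and the names are assumptions, and concept closest fit would simply restrict `cases` to cases of the same concept:

```python
def closest_fit(incomplete, cases, missing_idx):
    """Global closest fit: find the complete case most similar to the
    case with a missing value (comparing only the known attributes)
    and copy that case's value for the missing attribute."""
    def distance(a, b):
        return sum((x - y) ** 2
                   for i, (x, y) in enumerate(zip(a, b))
                   if i != missing_idx)
    best = min(cases, key=lambda c: distance(incomplete, c))
    return best[missing_idx]

cases = [(1.0, 2.0, 3.0), (9.0, 8.0, 7.0)]
# attribute 2 is missing; attributes 0 and 1 resemble the first case
print(closest_fit((1.1, 2.1, None), cases, missing_idx=2))  # 3.0
```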
ANN - Loughborough University Institutional Repository
... pursued depending upon its accuracy. In the MATLAB toolbox the initial weights of nodes are assigned randomly, so repeated training may result in different ANN performance. In this work, each ANN was trained multiple times. The number of hidden neurons was varied gradually, since large neuron numbers wi ...
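The selection loop described above, repeated training per architecture because of random initial weights while gradually varying the hidden-neuron count, can be sketched as follows, with a stand-in scoring function in place of actual ANN training:

```python
import random

def select_ann(hidden_sizes, restarts, train_and_score):
    """Model-selection sketch: since initial weights are random, each
    hidden-layer size is trained several times and the best score per
    size is kept."""
    best = {}
    for h in hidden_sizes:
        best[h] = max(train_and_score(h) for _ in range(restarts))
    return best

random.seed(0)
# stand-in for training an ANN: a noisy score that peaks at 8 neurons
score = lambda h: 1.0 - abs(h - 8) / 10 + random.gauss(0, 0.02)
results = select_ann([2, 4, 8, 16], restarts=5, train_and_score=score)
print(max(results, key=results.get))  # the best-scoring hidden size
```

Taking the best of several restarts per size separates the effect of architecture from the luck of a particular random initialization.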
A Review of Class Imbalance Problem
... imbalanced classes. Section III explains the various evaluation metrics used for imbalanced classes. In Section IV, we explain various solutions introduced for dealing with the class imbalance problem.
II. Feature Selection in Imbalance Problems
Feature selection is another critical issue in machine learnin ...
sv-lncs
... predefined stopping criterion is reached. The algorithm outputs the last current best subset Sbest as the final result. Since the filter model applies independent evaluation criteria without involving any learning algorithm, it does not inherit any bias of a learning algorithm and it is also compu ...
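A sketch of the filter model's sequential search with an independent evaluation criterion; the greedy forward strategy and the size-based stopping criterion here are assumptions for illustration:

```python
def filter_forward_selection(features, relevance, stop_after=2):
    """Filter-model sketch: greedily grow S_best using an independent
    evaluation criterion (no learning algorithm involved), stopping
    when a predefined criterion is reached (here: subset size)."""
    s_best = []
    remaining = list(features)
    while remaining and len(s_best) < stop_after:
        best = max(remaining, key=lambda f: relevance(s_best + [f]))
        if relevance(s_best + [best]) <= relevance(s_best):
            break  # no candidate improves the current best subset
        s_best.append(best)
        remaining.remove(best)
    return s_best

# toy criterion: each feature has a fixed, independent relevance score
scores = {"f1": 0.9, "f2": 0.1, "f3": 0.6}
crit = lambda subset: sum(scores[f] for f in subset)
print(filter_forward_selection(scores, crit))  # ['f1', 'f3']
```

Because the criterion never consults a classifier, the selected subset inherits no learner bias and the search stays cheap, which is exactly the trade-off the excerpt highlights.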
On the effect of data set size on bias and variance in classification
... applied for the variance outcomes. As no predictions were made with respect to bias, two-tailed tests are applied for the bias outcomes. Results were considered significant if the outcome of the binomial test is less than 0.05. As can be seen in Table 1, variance is shown to have a statisticall ...
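The binomial (sign) test behind this significance check can be computed directly; the win counts below are made up for illustration and do not come from the paper:

```python
from math import comb

def binomial_p_one_tailed(successes, n, p=0.5):
    """One-tailed binomial test: probability of observing at least
    `successes` wins out of n trials under the null hypothesis p = 0.5."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# illustrative: variance reduced on 14 of 16 hypothetical data sets
p_one = binomial_p_one_tailed(14, 16)
p_two = min(1.0, 2 * p_one)  # two-tailed version, as used for bias
print(p_one < 0.05)  # True: significant at the 0.05 level
```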
Relational Topographic Maps - Institut für Informatik, TU Clausthal
... of a fixed lattice structure as for SOM is necessary and the risk of topographic errors is minimized. For NG, an optimum (nonregular) data topology is induced such that browsing in a neighborhood becomes directly possible [24]. In recent years, a variety of extensions of these methods has been pro ...
Multi-objective optimization of support vector machines
... set, the first one is used for building the SVM and the second for assessing the performance of the classifier. In L-fold cross-validation (CV) the available data is partitioned into L disjoint sets D1 , . . . , DL of (approximately) equal size. For given hyperparameters, the SVM is trained L times. ...
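The L-fold cross-validation described can be sketched generically; here a toy mean-predictor stands in for SVM training, and all names are illustrative:

```python
def l_fold_indices(n, L):
    """Partition indices 0..n-1 into L disjoint folds of
    (approximately) equal size."""
    folds = [[] for _ in range(L)]
    for i in range(n):
        folds[i % L].append(i)
    return folds

def cross_validate(data, L, train, error):
    """Train L times: each fold D_l serves once as the validation set
    while the remaining L-1 folds form the training set."""
    folds = l_fold_indices(len(data), L)
    errs = []
    for l in range(L):
        val = [data[i] for i in folds[l]]
        trn = [data[i] for f in folds if f is not folds[l] for i in f]
        model = train(trn)
        errs.append(error(model, val))
    return sum(errs) / L

# toy model: predict the training mean; error = mean squared deviation
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
cv_err = cross_validate(data, L=3,
                        train=lambda t: sum(t) / len(t),
                        error=lambda m, v: sum((x - m) ** 2 for x in v) / len(v))
print(round(cv_err, 3))  # 3.75
```

Averaging the L validation errors gives one performance estimate per hyperparameter setting, which is what the multi-objective search then optimizes.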
IOSR Journal of Computer Engineering (IOSRJCE)
...     c := DominatingClass(topkClasses)
    test_targets(i) := c
end ...
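A runnable Python version of the k-NN fragment above; the Euclidean distance metric and the tie handling are assumptions:

```python
from collections import Counter

def knn_classify(test_points, train_points, train_labels, k=3):
    """For each test point, take the k nearest training points and
    assign the dominating class among them."""
    test_targets = []
    for x in test_points:
        nearest = sorted(range(len(train_points)),
                         key=lambda j: sum((a - b) ** 2
                                           for a, b in zip(x, train_points[j])))
        topk_classes = [train_labels[j] for j in nearest[:k]]
        c = Counter(topk_classes).most_common(1)[0][0]  # DominatingClass
        test_targets.append(c)
    return test_targets

train = [(0, 0), (0, 1), (5, 5), (6, 5)]
labels = ["a", "a", "b", "b"]
print(knn_classify([(0.2, 0.3), (5.5, 5.0)], train, labels, k=3))
```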
Fulltext - Brunel University Research Archive
... Despite the encouraging results achieved by related studies, and their ability to handle classification problems, academics and practitioners have pushed for more developed models with improved accuracy. Accordingly, they have been developing scoring models based on new advanced techni ...
One-class to multi-class model update using the class
... AI Researcher Symposium (STAIRS). The papers from PAIS are included in this volume, while the papers from STAIRS are published in a separate volume. ECAI 2016 also featured a special topic on Artificial Intelligence for Human Values, with a dedicated track and a public event in the Peace Palace in T ...
A New Approach to Classification with the Least Number of Features
... dimensions x1 , . . . , xk were drawn normally distributed as xi ∼ N (µ · y, 1). The remaining features xk+1 , . . . , xd were noise drawn as xi ∼ N (0, 1). Training and test sets were sampled according to the above procedure, each containing n data points. Figure 1 shows the mean results after 100 ...
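The synthetic data generation described can be sketched as follows; the ±1 label coding is an assumption consistent with x_i ∼ N(µ · y, 1):

```python
import random

def sample_dataset(n, d, k, mu):
    """Synthetic data as described: labels y in {-1, +1}; the first k
    features are informative, x_i ~ N(mu * y, 1); the remaining d - k
    features are pure noise, x_i ~ N(0, 1)."""
    data = []
    for _ in range(n):
        y = random.choice([-1, 1])
        x = [random.gauss(mu * y, 1) for _ in range(k)] + \
            [random.gauss(0, 1) for _ in range(d - k)]
        data.append((x, y))
    return data

random.seed(1)
train = sample_dataset(n=100, d=10, k=3, mu=2.0)
x0, y0 = train[0]
print(len(x0), y0 in (-1, 1))  # 10 True
```

Only the first k dimensions carry class information, so a selector that recovers exactly those features achieves the least number of features without losing accuracy.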
Predicting Classifier Combinations
... Figure 2 shows a box plot of the accuracies achieved by three strategies for selecting the classifier combination: (1) the optimal combination achieving the highest possible accuracy, (2) the combination that achieved the highest average accuracy over all datasets (KNN+MLP+SVM), and (3) the combinat ...
A Genetic Algorithm for Expert System Rule Generation
... zero. Both probabilities are normalized by the same “significance factor,” defined to be unity for integral classification values, and 10% of the full scale classification range for continuous values. To simplify “calibration” of the mutation operator, the adjustable terms were restricted to the mis ...
PDF (free)
... regression (LR) to classify and predict criminal behaviors in smuggling. At the same time, it shows the difference between ANN and human inspection (HI), as well as the difference between LR and HI. This study establishes models for vessels of different tonnage and operation purposes that can provide law ...
Learning from Heterogeneous Sources via
... users’ profiles can be used to build recommendation systems. In addition, a model can also use users’ historical behaviors and social networks to infer users’ interests in related products. We argue that it is desirable to collectively use any available multiple heterogeneous data sources in order t ...
Document
... distributed problem instances. In other words, the training and application samples come from the same population (distribution) with identical probability to be selected for inclusion and this population/distribution is time-invariant. (Note: if not time invariant then by incorporating time as inde ...
Improving CNN Performance with Min-Max Objective
... weight decay, drop ratio, etc.), we follow the published configurations of the original networks. All the models are implemented using the Caffe platform [Jia et al., 2014] from scratch without pre-training. During the training phase, some parameters of the Min-Max objective need to be determined. Fo ...