Handwritten Java Characters Recognition Using Several
Methods of Artificial Neural Network
Gregorius Satia Budhi1, Rudy Adipranata2
Informatics Department
Petra Christian University
Surabaya, Indonesia
Email: [email protected], [email protected]
Abstract. Java characters are traditional Indonesian characters used to write the Javanese language, which is spoken by many people on the island of Java. The use of Java characters is diminishing, partly because of the difficulty of learning them. Java characters consist of basic characters, numbers, complementary characters, and so on. In this research we developed a system to recognize Java characters. The input of the system is a digital image containing several handwritten Java characters. Preprocessing and segmentation are performed on the input image to extract each Java character. For each Java character, feature extraction is done using the ICZ-ZCZ method. The output of the feature extraction becomes the input of an artificial neural network. We used several artificial neural network methods, namely bidirectional associative memory, counterpropagation, evolutionary neural network, backpropagation, and backpropagation combined with Chi2. From the experimental results, the combination of Chi2 and backpropagation achieves better recognition accuracy than the other methods.
Keywords: Java characters recognition, bidirectional associative memory, counterpropagation, evolutionary neural network, backpropagation, Chi2.

1 Introduction
Many people on the island of Java use the Javanese language for conversation. The Javanese language has its own letters, different from Roman characters, referred to as Java characters. Java character recognition has its own level of difficulty because of the basic characters, vowels, complementary characters, and so on. Because they are difficult to learn, lately not many people can read or write Java characters. For many people, Java characters are eventually regarded as mere decoration and do not mean anything. This gradually erodes the existence of Java characters and will ultimately also affect Javanese culture in general.
In this research, we developed a system that can automatically recognize Java characters in a digital image and turn them into a document written with the Hanacaraka font. The first process is digital image preprocessing, followed by segmentation and feature extraction. The features are used as input for the recognition system. In this research we use and compare five artificial neural network methods for recognition, namely bidirectional associative memory, counterpropagation network, evolutionary neural network, backpropagation neural network, and a combination of Chi2 and backpropagation neural network.
In addition to serving as a basis for further research, we hope that with this system Java characters can be preserved and studied easily again. The system makes it easy to save articles in Java characters as electronic documents and can help teachers teach the Javanese language in school. Finally, for those who do not know Java characters, the system can be used to identify and interpret writings in Java characters that they encounter at tourism sites.
2 Related Works
Several researchers have conducted research on Java character recognition. Nurmila [1] used a backpropagation neural network and achieved an accuracy of about 61%. Another researcher, Priyatma, used fuzzy logic for recognition [2], with satisfactory results. Also related to Java character studies, Rudy et al. [3] developed an application to translate Roman letters typed on a keyboard into Java characters using the Hanacaraka font. That application can later be combined with the result of this research to create an optical character recognition system and editor for Java characters. This research is an extension of previous research [4, 5], adding the use of a backpropagation neural network for recognition and some improvements in digital image preprocessing.
Several studies on the implementation of artificial neural networks have also been done, including automatic classification of sunspot groups for space weather analysis [6] using backpropagation and other neural network methods. Other studies used the backpropagation method to recognize automobile types [7], to identify abnormalities of the pancreas through iris images with recognition accuracy above 90% [8], and to recognize characters in digital images [9].
3 Java Characters
Compared with Roman characters, Java characters have a different structure and shape. The basis of Java characters is Carakan, which consists of 20 syllables called Dentawyanjana, shown in Figure 1 [10].
Figure 1 Basic (carakan) characters
Figure 2 shows the numbers in Java characters [11].
Figure 2 Symbol of numbers
Sandhangan characters are special characters used as complementary characters, vowels or consonants that are commonly used in everyday language. The Sandhangan characters can be seen in Figure 3 [12].
Sandhangan name    Java character    Description
Pepet              (glyph)           Vowel ê
Taling             (glyph)           Vowel é
Wulu               (glyph)           Vowel i
Taling tarung      (glyph)           Vowel o
Suku               (glyph)           Vowel u
Figure 3 Sandhangan characters
4 Image Segmentation
Segmentation is one of the important processes used to transform the input image into the output image. Segmentation is performed based on the attributes of the image: the image is divided into several regions based on intensity, so that objects can be distinguished from the background. Once each object has been isolated or is clearly visible, segmentation should be stopped [13]. In this research, we used thresholding and skeletonizing for the segmentation process.
Thresholding is one way to separate objects (foreground) in an image from the background by selecting a threshold value T. In thresholding, we choose the value of T that is used to separate all points (x, y) in an image: any point (x, y) for which f(x, y) > T is called an object point, otherwise it is called a background point, or vice versa [13].
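As an illustration, a minimal thresholding sketch in Python with NumPy might look as follows (the threshold value T = 128 and the dark-ink-on-light-paper polarity are illustrative assumptions, not the settings used in this research):

import numpy as np

def threshold_image(gray, T=128):
    # For dark handwriting on light paper, points with f(x, y) < T are
    # object points (1) and the rest are background (0); this is the
    # "vice versa" case of the rule f(x, y) > T described above.
    return (np.asarray(gray) < T).astype(np.uint8)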
Skeletonizing is used to get rid of extra pixels and produce a simpler image, so that the image can be analyzed further in terms of its shape. The problem encountered in the skeletonizing process is determining which pixels are redundant. Skeletonizing is similar to an erosion process, except that erosion can delete an entire region if the redundant pixels cannot be determined. A skeleton should have the basic properties listed below [14]; a short sketch of this step follows the list:
• Pixels that form the skeleton should be located near the middle of the region's cross section.
• The skeleton must consist of several thin regions, each only one pixel wide.
• Skeletal pixels must be connected to each other to form a number of regions equal to the number of regions in the original image.
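In practice this step can rely on an existing thinning implementation; the sketch below uses scikit-image's skeletonize function as one possible choice (the paper does not name its implementation):

import numpy as np
from skimage.morphology import skeletonize

def skeletonize_character(binary_img):
    # Thin the binary character (1 = object pixel) to one-pixel-wide
    # strokes while preserving the connectivity of its regions.
    return skeletonize(np.asarray(binary_img, dtype=bool)).astype(np.uint8)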
5 Bidirectional Associative Memory
In 1988, Bart Kosko proposed an artificial neural network called bidirectional associative memory (BAM) [15]. BAM is a hetero-associative, content-addressable memory. A BAM consists of two bipolar binary layers of neurons, say A and B. The neurons in the first layer (A) are fully interconnected to the neurons in the second layer (B). There are no interconnections between neurons in the same layer. Because the memory processes information in time and involves bidirectional data flow, it differs in principle from a linear associator, although both networks are used to store association pairs [16]. A general diagram of BAM can be seen in Figure 4.
Figure 4 Bidirectional Associative Memory: General Diagram
BAM algorithm [15]:
Step 1: The associations between pattern pairs are stored in the memory in the form of bipolar binary vectors with entries -1 and 1:
{(a(1), b(1)), (a(2), b(2)), ..., (a(P), b(P))}   (1)
Vector a stores the pattern and is n-dimensional; b is m-dimensional and stores the associated output.
Step 2: Calculate the weight matrix using equation 2:
W = Σ_{p=1..P} a(p) b(p)^T   (2)
Step 3: A test vector pair a and b is given as input.
Step 4: In the forward pass, b is given as input and a is calculated using equation 3:
a' = W b   (3)
Each element of vector a is updated using equation 4:
a_i = 1 if a'_i > 0;  a_i unchanged if a'_i = 0;  a_i = -1 if a'_i < 0   (4)
Step 5: Vector a is now given as input to the second layer during the backward pass. The output of this layer is calculated using equation 5:
b' = W^T a   (5)
Each element of vector b is updated using equation 6:
b_k = 1 if b'_k > 0;  b_k unchanged if b'_k = 0;  b_k = -1 if b'_k < 0   (6)
Step 6: Stop the process if there are no further updates. Otherwise repeat steps 4 and 5.
BAM storage capacity:
The maximum number of pattern pairs that can be stored and successfully retrieved is min(m, n). This estimate is heuristic. The memory storage capacity of BAM is [15]:
P ≤ min(m, n)   (7)
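A minimal sketch of the BAM algorithm above in Python with NumPy might look as follows (the initialization of a in the first forward pass and the iteration cap are illustrative assumptions):

import numpy as np

def bipolar_update(net, prev):
    # Equations 4 and 6: threshold to +1/-1, keeping the previous value
    # where the net input is exactly zero.
    return np.where(net > 0, 1, np.where(net < 0, -1, prev))

class BAM:
    def __init__(self, pairs):
        # Steps 1-2: W = sum over p of a(p) b(p)^T (equation 2),
        # where a and b are bipolar (+1/-1) vectors.
        self.W = sum(np.outer(a, b) for a, b in pairs)

    def recall(self, b, max_iter=100):
        # Steps 3-6: alternate forward and backward passes until stable.
        a = bipolar_update(self.W @ b, np.ones(self.W.shape[0]))  # eqs. 3-4
        for _ in range(max_iter):
            b_new = bipolar_update(self.W.T @ a, b)               # eqs. 5-6
            a_new = bipolar_update(self.W @ b_new, a)             # eqs. 3-4
            if np.array_equal(a_new, a) and np.array_equal(b_new, b):
                break                                             # step 6
            a, b = a_new, b_new
        return a, b

For example, after storing the pairs {(a(p), b(p))}, calling recall with a noisy version of some b(p) iterates toward the stored association.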
6 Counterpropagation Network
The counterpropagation network (CPN) was defined by Robert Hecht-Nielsen in 1987 [17]. CPN is widely used because of its simplicity and the ease of its training process. CPN also provides a good statistical representation of the input environment over a wide range of conditions. CPN combines an unsupervised training method in the Kohonen layer with supervised training in the Grossberg layer [17].
6.1 Forward-Only Counterpropagation
The training for forward-only counterpropagation is the same as the training for full counterpropagation. It consists of two phases, and the first phase must be completed before proceeding to the second. The first phase is Kohonen learning and the second phase is Grossberg learning. Figure 5 shows the forward-only counterpropagation network architecture.
Figure 5 Forward-only counterpropagation architecture: input layer X1..Xn connected to Kohonen layer Z1..Zp by weights v, and Kohonen layer connected to output layer Y1..Ym by weights w
Only the winner unit is allowed to learn, i.e. to update its weights, in the Kohonen learning phase. The winning unit is determined by the minimum distance between the weight vector and the input vector. Either equation 8 or equation 9 can be used to calculate the distance between the two vectors:
Dot product: z_in_j = Σ_i x_i v_ij + Σ_k y_k w_kj   (8)
Euclidean distance: D_j = Σ_i (x_i - v_ij)^2 + Σ_k (y_k - w_kj)^2   (9)
When using the dot product, look for the largest value, because the larger the dot product, the smaller the angle between the two vectors, provided both vectors are normalized. When using the Euclidean distance, look for the smallest value, because the Euclidean distance measures the physical distance between the two vectors.
The training algorithm for forward-only counterpropagation is as follows [18]:
Step 0: Initialize the learning rate and all weights.
Step 1: Do steps 2-7 while the stop condition of the first phase is not met.
Step 2: Do steps 3-5 for each input pair x:y.
Step 3: Present the input vector x to the input layer X.
Step 4: Find the winning Kohonen layer unit; save its index in variable J.
Step 5: Update the weights for unit Z_J using equation 10:
v_iJ(new) = (1 - α) v_iJ(old) + α x_i ,  i = 1 ... n   (10)
Step 6: Decrease the learning rate α.
Step 7: Check whether the stop condition of the first phase is met.
Step 8: Do steps 9-15 while the stop condition of the second phase is not met (α is very small and constant during the second phase).
Step 9: Do steps 10-13 for each input pair x:y.
Step 10: Present the input vector x to the input layer X.
Step 11: Find the winning Kohonen layer unit; save its index in variable J.
Step 12: Update the weights into Z_J using equation 11:
v_iJ(new) = (1 - α) v_iJ(old) + α x_i ,  i = 1 ... n   (11)
Step 13: Update the weights out of Z_J to the output layer using equation 12:
w_Jk(new) = (1 - β) w_Jk(old) + β y_k ,  k = 1 ... m   (12)
Step 14: Decrease the learning rate β.
Step 15: Check whether the stop condition of the second phase is met.
After the training process is complete, the forward-only counterpropagation network can be used to map x into y using the following algorithm [18]:
Step 0: Use the weights resulting from training.
Step 1: Present the input vector x to the input layer X.
Step 2: Find the index of the winning unit; save it in J.
Step 3: Calculate the output using equation 13:
y_k = w_Jk   (13)
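The two training phases and the mapping step might be sketched in Python as follows (the learning-rate schedule, epoch counts, and random initialization are illustrative assumptions):

import numpy as np

class ForwardOnlyCPN:
    def __init__(self, n_in, n_kohonen, n_out, alpha=0.1, beta=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.v = rng.random((n_in, n_kohonen))   # input-to-Kohonen weights
        self.w = rng.random((n_kohonen, n_out))  # Kohonen-to-output weights
        self.alpha, self.beta = alpha, beta

    def winner(self, x):
        # Equation 9: the unit with the smallest Euclidean distance wins.
        return int(np.argmin(((x[:, None] - self.v) ** 2).sum(axis=0)))

    def train(self, data, phase1_epochs=100, phase2_epochs=100):
        for _ in range(phase1_epochs):                   # phase 1: Kohonen
            for x, _y in data:
                J = self.winner(x)
                self.v[:, J] = (1 - self.alpha) * self.v[:, J] + self.alpha * x  # eq. 10
            self.alpha *= 0.99                           # step 6
        for _ in range(phase2_epochs):                   # phase 2: Grossberg
            for x, y in data:
                J = self.winner(x)
                self.v[:, J] = (1 - self.alpha) * self.v[:, J] + self.alpha * x  # eq. 11
                self.w[J] = (1 - self.beta) * self.w[J] + self.beta * y          # eq. 12
            self.beta *= 0.99                            # step 14

    def predict(self, x):
        return self.w[self.winner(x)]                    # equation 13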
7 Evolutionary Neural Network
An evolutionary neural network (ENN) is a combination of a neural network with an evolutionary algorithm. A common limitation of neural networks is associated with network training: the backpropagation learning algorithm has a serious drawback in that it cannot guarantee an optimal solution. Another difficulty in neural network implementation is selecting the optimal network topology; the network architecture appropriate for a given case is more often chosen heuristically. These shortcomings can be addressed using an evolutionary algorithm.
An evolutionary algorithm is a probabilistic adaptation algorithm inspired by natural evolution. This method follows a statistical search strategy in a population of individuals, each representing a possible solution to the problem. Evolutionary algorithms divide into three main forms, namely evolution strategies, genetic algorithms, and evolutionary programming [19].
The evolutionary algorithm used in this research is the genetic algorithm. The genetic algorithm is an effective optimization technique that can help with both weight optimization and network topology selection. A problem must be represented as a chromosome in order to use a genetic algorithm. When we want to search for an optimal set of weights of a multilayer feedforward neural network, the first step is to encode the network into a chromosome, as shown in Figure 6 [20].
Figure 6 Encoding a network into a chromosome
The second step is to define the fitness function that evaluates the performance of the chromosome. This function must reflect the performance of the neural network; a simple function based on the sum of squared errors can be used. To evaluate the fitness of a chromosome, each weight in the chromosome is assigned to the corresponding link in the network. A collection of training examples is then presented to the network, and the sum of squared errors is calculated. The genetic algorithm seeks the set of weights with the smallest sum of squared errors.
The third step is to choose the genetic operators: crossover and mutation. The crossover operator takes two parent chromosomes and creates a child with genetic material from both parents; each gene of the child chromosome comes from the corresponding gene of a randomly selected parent. The mutation operator randomly selects a gene and replaces it with a random value between -1 and 1. The system is then ready to apply the genetic algorithm. The user needs to define the number of networks with different weights, the crossover and mutation probabilities, the population size, and the number of generations [20].
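A compact sketch of these three steps for weight optimization, in Python with NumPy, might look as follows (the network shape, elitism scheme, and parameter values are illustrative assumptions, not the paper's configuration):

import numpy as np

rng = np.random.default_rng(0)

def fitness(chromosome, shapes, X, T):
    # Step 2: decode the chromosome into the network's weight matrices
    # (Figure 6) and return the sum of squared errors on the examples.
    idx, Ws = 0, []
    for r, c in shapes:
        Ws.append(chromosome[idx:idx + r * c].reshape(r, c))
        idx += r * c
    h = 1 / (1 + np.exp(-(X @ Ws[0])))        # hidden layer (sigmoid)
    y = 1 / (1 + np.exp(-(h @ Ws[1])))        # output layer
    return ((T - y) ** 2).sum()

def evolve(shapes, X, T, pop_size=50, generations=200, p_mut=0.5):
    n_genes = sum(r * c for r, c in shapes)
    pop = rng.uniform(-1, 1, (pop_size, n_genes))   # step 1: random population
    for _ in range(generations):
        errors = np.array([fitness(c, shapes, X, T) for c in pop])
        pop = pop[np.argsort(errors)]               # smallest error first
        children = []
        for _ in range(pop_size // 2):
            p1, p2 = pop[rng.integers(0, pop_size // 2, size=2)]
            mask = rng.random(n_genes) < 0.5        # step 3: uniform crossover,
            child = np.where(mask, p1, p2)          # each gene from one parent
            if rng.random() < p_mut:                # mutation: replace a gene
                child[rng.integers(n_genes)] = rng.uniform(-1, 1)
            children.append(child)
        pop[pop_size - len(children):] = children   # keep the best half
    return pop[0]                                   # best weights found

For the network used later in this paper, shapes would be [(40, 60), (60, 31)] for a 40-input, 60-hidden, 31-output network.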
8 ICZ-ZCZ
ICZ (Image Centroid and Zone) and ZCZ (Zone Centroid and Zone) are feature extraction methods based on zoning and the centroids of an image that has been divided into zones. Both methods begin by dividing the image into several equal zones.
After dividing the image into equal zones, the ICZ method computes the centroid of the whole image; for each zone, the average distance between the black pixels in that zone and the image centroid is calculated. In the ZCZ method, the centroid of each zone is computed instead of the image centroid; again, for each zone, the average distance between the black pixels and the zone centroid is calculated. The average distances are then used as features for classification and recognition [21].
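A sketch of the combined ICZ-ZCZ feature extraction in Python with NumPy might look as follows (it assumes the character image dimensions are divisible by the grid size and that empty zones contribute a zero feature):

import numpy as np

def icz_zcz_features(binary_img, rows=4, cols=5):
    # For a 4x5 grid this returns 2 * 20 = 40 features: for each zone,
    # the average distance of its black pixels to the image centroid
    # (ICZ) and to the zone's own centroid (ZCZ).
    img = np.asarray(binary_img)
    ys, xs = np.nonzero(img)                      # black (object) pixels
    cy, cx = ys.mean(), xs.mean()                 # image centroid
    zh, zw = img.shape[0] // rows, img.shape[1] // cols
    icz, zcz = [], []
    for r in range(rows):
        for c in range(cols):
            zy, zx = np.nonzero(img[r*zh:(r+1)*zh, c*zw:(c+1)*zw])
            if zy.size == 0:
                icz.append(0.0); zcz.append(0.0)  # empty zone
                continue
            py, px = zy + r * zh, zx + c * zw     # absolute coordinates
            icz.append(np.hypot(py - cy, px - cx).mean())
            zcz.append(np.hypot(py - py.mean(), px - px.mean()).mean())
    return np.array(icz + zcz)

For the 4x5 grid used in this research, the result is the 40-element feature vector that feeds the neural network input layer.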
9 Backpropagation
A backpropagation neural network is a neural network with a multilayer feedforward architecture. This method is widely used to solve many problems such as classification, pattern recognition, and generalization [22].
The training algorithm for backpropagation is as follows [23]:
Feed-forward phase:
Step 0: Initialize the weights (random values between 0 and 1) and the learning rate α.
Step 1: Do steps 2-13 while the stop condition is not met.
Step 2: Perform steps 3-13 for each training pair.
Step 3: Do steps 4-13 for each hidden layer and the output layer.
Step 4: Calculate the input of each node in the hidden layer using equation 14:
z_in_j = Σ_{i=1..n} x_i w_ij   (14)
Step 5: Calculate the output of each node in the hidden layer with the activation function, using equations 15 and 16:
z_j = f(z_in_j)   (15)
f1(x) = 1 / (1 + exp(-x))   (16)
Step 6: Calculate the input of each node in the output layer using equation 17:
y_in_k = Σ_{j=1..n} z_j w_jk   (17)
Step 7: Calculate the output of each node in the output layer using equation 18:
y_k = f(y_in_k)   (18)
Error-backpropagation phase:
Step 8: Calculate the error of each node in the output layer, using the derivative of the activation function, with equations 19 and 20:
δ_k = (t_k - y_k) f'(y_in_k)   (19)
f1'(x) = f1(x) [1 - f1(x)]   (20)
Step 9: Calculate the weight change for each node in the output layer using equation 21:
Δw_jk = α δ_k z_j   (21)
Step 10: Calculate the error of each node in the hidden layer, using the derivative of the activation function, with equation 22:
δ_j = (Σ_k δ_k w_jk) f'(z_in_j)   (22)
Step 11: Calculate the weight change for each node in the hidden layer using equation 23:
Δw_ij = α δ_j x_i   (23)
Step 12: Update the weights of each node in the output layer using equation 24:
w_jk(new) = w_jk(old) + Δw_jk   (24)
Step 13: Update the weights of each node in the hidden layer using equation 25:
w_ij(new) = w_ij(old) + Δw_ij   (25)
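A vectorized sketch of this training loop in Python with NumPy might look as follows (one hidden layer, per-pattern updates; the random initialization and epoch count are illustrative assumptions):

import numpy as np

def f(x):
    return 1 / (1 + np.exp(-x))          # sigmoid activation (equation 16)

def train_bpnn(X, T, n_hidden=60, alpha=0.1, epochs=1000, seed=0):
    rng = np.random.default_rng(seed)
    w_ih = rng.random((X.shape[1], n_hidden))   # input-to-hidden weights
    w_ho = rng.random((n_hidden, T.shape[1]))   # hidden-to-output weights
    for _ in range(epochs):
        for x, t in zip(X, T):
            # feed-forward phase (equations 14-18)
            z = f(x @ w_ih)                     # hidden layer outputs
            y = f(z @ w_ho)                     # output layer outputs
            # error-backpropagation phase (equations 19-25);
            # y * (1 - y) is f'(y_in) from equation 20
            delta_k = (t - y) * y * (1 - y)            # equation 19
            delta_j = (w_ho @ delta_k) * z * (1 - z)   # equation 22
            w_ho += alpha * np.outer(z, delta_k)       # equations 21, 24
            w_ih += alpha * np.outer(x, delta_j)       # equations 23, 25
    return w_ih, w_ho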
10 Chi2
The Chi2 algorithm [24] uses the χ2 statistic to discretize numeric-valued attributes. The algorithm is quite effective for selecting the important features from a group of numerical attributes. By using only relevant features, it can speed up the training process and improve the prediction accuracy of classification algorithms in general. In addition, many classification algorithms require, or work better on, discrete training data.
The Chi2 algorithm consists of two phases. The first phase begins with a sufficiently high significance level, e.g. 0.5, for all attributes. Adjacent intervals are merged as long as the χ2 value does not exceed the threshold determined by the significance level (0.5 yields a threshold of 0.455 at one degree of freedom). This phase is repeated with a decreasing significance level until the number of inconsistencies in the discretized data exceeds the specified limit. The χ2 value is calculated using equation 26.
χ2 = Σ_{i=1..2} Σ_{j=1..k} (A_ij - E_ij)^2 / E_ij   (26)
where:
k = number of classes,
A_ij = number of patterns in interval i belonging to class j,
E_ij = expected frequency of A_ij = R_i C_j / N; if R_i or C_j equals 0, E_ij is set to 0.1,
R_i = number of patterns in interval i = Σ_{j=1..k} A_ij,
C_j = number of patterns in class j = Σ_{i=1..2} A_ij,
N = total number of patterns = Σ_{i=1..2} R_i.
The second phase is an optimization of the first phase. The most visible difference is when inconsistency is calculated: in the second phase the inconsistency is calculated each time an attribute finishes its merging process, whereas in the first phase it is calculated only after all attributes have gone through the merging process. The second phase is repeated until no attribute values can be further discretized or merged. An inconsistency occurs when several samples have identical values for all attributes but belong to different classes.
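A sketch of the χ2 computation for a pair of adjacent intervals, following equation 26, might look as follows in Python (the merging loop and inconsistency check around it are omitted):

import numpy as np

def chi2_value(A):
    # A is a 2 x k matrix: A[i, j] = number of patterns in interval i
    # belonging to class j (equation 26).
    A = np.asarray(A, dtype=float)
    R = A.sum(axis=1)              # patterns per interval
    C = A.sum(axis=0)              # patterns per class
    N = A.sum()                    # total number of patterns
    E = np.outer(R, C) / N         # expected frequencies E_ij
    E[E == 0] = 0.1                # guard against zero, as described above
    return (((A - E) ** 2) / E).sum()

Two adjacent intervals are merged while chi2_value(A) stays below the threshold for the current significance level (0.455 for 0.5 at one degree of freedom).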
11 Design and Implementation
The input to the system is a digital image of Java characters. Grayscale conversion and filtering are applied to reduce noise. After that, we apply skew detection and correction to straighten skewed images. Next, the segmentation process extracts each Java character using thresholding and skeletonizing. Feature extraction is done using ICZ-ZCZ [21], and the features are used as inputs to the artificial neural network.
Each Java character image is divided into 4×5 = 20 zones, and ICZ and ZCZ each produce one value per zone, so there are 40 ICZ-ZCZ output values. These values later become the inputs of the artificial neural network's input nodes.
The overall system workflow can be seen in Figure 7.
Figure 7 System workflow: input (Javanese characters digital image) → grayscaling → filtering → segmentation → feature extraction → pattern recognition using artificial neural networks → output (sequential pattern of Javanese characters in the form of Hanacaraka font)
Application interface can be seen in Figure 8.
Figure 8 Application interface
Having obtained the image of each Java character from the document, and after feature extraction has been carried out on each image, the next step performed by the system is to identify those characters using an artificial neural network. After successful recognition, the Java character images are converted into a sequence of Hanacaraka font characters and assembled into a document.
11.1 Dataset for Experiment
For the experiments, we used two handwritten Java character datasets: one for training and the other for testing. For the training of CPN, BPNN and ENN, the dataset contains 20 samples for each character. There are 31 Java characters in total, so the dataset contains 620 characters overall. The testing dataset also consists of 20 samples for each Java character, again 620 characters overall. Examples of the sample data can be seen in Figure 9. Every sample is processed by feature extraction using the ICZ-ZCZ method; the 40 resulting feature values become the inputs of the 40 neural network input neurons.
Figure 9 Examples of data samples for training and testing process
12 Experimental Results
For the pattern recognition of Java characters, we used five kinds of artificial neural networks, namely bidirectional associative memory, counterpropagation, evolutionary neural network, backpropagation, and backpropagation combined with Chi2. The experimental results of bidirectional associative memory (BAM) can be seen in Table 1.
Table 1 Experimental result of BAM

Number of samples   Input nodes   Output nodes   Accuracy (%)
2                   6             4              100.00
2                   15            10             0.00
2                   30            10             0.00
3                   6             4              100.00
3                   15            10             33.33
3                   30            10             0.00
4                   6             4              100.00
4                   15            10             0.00
4                   30            10             0.00
6                   6             4              66.67
6                   15            10             0.00
6                   30            10             0.00
8                   6             4              75.00
8                   15            10             0.00
8                   30            10             0.00
From the experimental results above, we can see that BAM is too inaccurate to use for Java character recognition. This is because we need at least 40 input nodes, while BAM only works well when the number of input nodes is 6 or less. For the output, we need 31 nodes in this experiment, because 31 Java characters are used in total, while BAM works well for 4 output nodes or less.
Further experiments used the counterpropagation network (CPN), backpropagation (BPNN), and evolutionary neural networks (ENN) with 1 and 2 layers. We performed experiments using both a dataset that had been trained on before and a dataset that had not been trained on before.
The neural network output layer consists of 31 neurons, corresponding to the number of Java characters used in this research: 20 basic (carakan) characters, 4 sandhangan characters, and 7 number characters (not all 10 number characters, because 3 number characters have the same form as carakan characters). Each neuron has a value of 0 or 1. For the first character, the first neuron is 1 while the other neurons are 0; for the second character, the second neuron is 1 while the other neurons are 0, and so on. The hidden layer contains 60 neurons.
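For illustration, this output encoding corresponds to a simple one-hot target vector (a sketch, with the character index assumed to run from 0 to 30):

import numpy as np

def target_vector(char_index, n_chars=31):
    # Neuron char_index is set to 1; the other 30 neurons stay 0.
    t = np.zeros(n_chars)
    t[char_index] = 1.0
    return t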
From the experimental results, the average recognition accuracy of CPN is only about 71% for training data and 6% for testing data (data not trained on before). The average recognition accuracy of ENN is about 94% for training data and about 66% for testing data. The parameters used for ENN were: number of neurons per layer: 60, crossover probability: 100%, mutation probability: 50%, maximum population: 50, maximum epoch: 10 million, and error limit: 0.1. The average recognition accuracy of BPNN is about 79% for training data and 33% for testing data. The parameters used for BPNN were: input neurons: 40, learning rate: 0.1, error threshold: 0.001.
We tried to improve the recognition accuracy by combining the Chi2 method with the backpropagation neural network. The Chi2 method is used to increase the variation of the features of each datum to be trained or recognized; the additional features are expected to widen the differences between the features of each Java character. The experiment combining Chi2 and BPNN was done using the following parameters: a feed-forward network with one hidden layer, 60 neurons in the hidden layer, maximum epoch: 1000, learning rate: 0.1, 40 initial input features combined with the output of the Chi2 algorithm to form 440 input neurons (using N = 10, each input produces 10 outputs), and 31 output neurons. The overall experimental results can be seen in Table 2 for experiments using data trained on before and Table 3 for experiments using testing data.
Table 2 Experimental results using training data

                            Recognition accuracy (%)
Java characters type        CPN     BPNN    ENN 1 layer   ENN 2 layers   Chi2 and BPNN
Basic character / carakan   61.25   66.25   97.75         96.00          98.25
Numbers                     73.57   72.14   97.14         97.86          97.86
Sandhangan                  77.50   78.75   93.75         90.00          97.50
All characters type         71.45   79.03   94.19         92.26          98.71

Table 3 Experimental results using testing data

                            Recognition accuracy (%)
Java characters type        CPN     BPNN    ENN 1 layer   ENN 2 layers   Chi2 and BPNN
Basic character / carakan   4.75    32.25   48.75         52.75          65.75
Numbers                     6.43    32.14   58.57         63.57          79.29
Sandhangan                  7.50    36.25   66.25         68.75          83.75
All characters type         6.29    33.87   50.32         66.29          73.71
13 Conclusion
In this research, we have developed a handwritten Java character recognition system using several artificial neural network methods and compared their recognition results. From the experiments that have been done, it can be concluded that bidirectional associative memory and the counterpropagation network cannot be used for the recognition of Java characters because their average accuracy is very low. The combination of Chi2 and a backpropagation neural network performs better than an evolutionary neural network with 1 or 2 layers for Java character recognition: its recognition accuracy reaches 98% for data trained on before and 73% for data not trained on before. For future research, the accuracy may be improved by using other methods for segmentation and feature extraction that can distinguish similar Java characters. The combination of Chi2 and ENN may also be used to improve accuracy.
Acknowledgment
This research was funded by Research Competitive Grant DIPA-PT Coordination of Private Higher Education Region VII, East Java, Indonesia (20/SP2H/PDSTRL_PEN/LPPM-UKP/IV/2014), fiscal years 2014 and 2015. This research was also funded by the Research Center, Petra Christian University, Surabaya, Indonesia, through the Internal Research Grant (05/PenLPPM/UKP/2012), fiscal year 2012. We also thank Edwin Prasetio Nandra, Danny Setiawan Putra, Eric Yogi Tjandra, Evan Sanjaya, Jeffry Hartanto, Ricky Fajar Adi Edna P., and Christopher H. Imantaka for their help in coding the system.
References
[1] Nurmila, N., Sugiharto, A. and Sarwoko, E.A., Back Propagation Neural Network Algorithm For Java Character Pattern Recognition, Jurnal Masyarakat Informatika, vol. 1, no. 1, pp. 1-10, 2010.
[2] Priyatma, J.E. and Wahyuningrum, S.E., Java Character Recognition Using Fuzzy Logic, SIGMA, vol. 8, no. 1, pp. 75-84, 2005.
[3] Adipranata, R., Budhi, G.S. and Thedjakusuma, R., Java Characters Word Processing, in Proceeding of The 3rd International Conference on Soft Computing, Intelligent System and Information Technology, Bali, Indonesia, 2012.
[4] Budhi, G.S. and Adipranata, R., Comparison of Bidirectional Associative Memory, Counterpropagation and Evolutionary Neural Network for Java Characters Recognition, in Proceeding of The 1st International Conference on Advance Informatics: Concepts, Theory and Applications, Bandung, Indonesia, 2014.
[5] Budhi, G.S. and Adipranata, R., Java Characters Recognition using Evolutionary Neural Network and Combination of Chi2 and Backpropagation Neural Network, International Journal of Applied Engineering Research, vol. 9, no. 22, 2014.
[6] Adipranata, R., Budhi, G.S. and Setiahadi, B., Automatic Classification of Sunspot Groups for Space Weather Analysis, International Journal of Multimedia and Ubiquitous Engineering, vol. 8, no. 3, pp. 41-54, 2013.
[7] Budhi, G.S., Adipranata, R. and Hartono, F.J., The Use of Gabor Filter and Backpropagation Neural Network for Automobile Types Recognition, in Proceeding of The 2nd International Conference on Soft Computing, Intelligent System and Information Technology, Bali, Indonesia, 2010.
[8] Budhi, G.S., Purnomo, M.H. and Pramono, M., Recognition of Pancreatic Organ Abnormal Changes Through Iris Eyes Using Feed-Forward Backpropagation Artificial Neural Network, in Proceeding of The 7th National Conference on Design and Application of Technology, 2008.
[9] Budhi, G.S., Gunawan, I. and Jaowry, S., Backpropagation Artificial Neural Network Method for Recognition of Characters on Digital Image, in Proceeding of The 3rd National Conference on Design and Application of Technology, 2004.
[10] Java Character Hanacaraka, http://nusantaranger.com/referensi/bukuelang/chapter-4merah/aksara-jawa-hanacaraka/, last access October 2014.
[11] Javanese Alphabet (Carakan), http://www.omniglot.com/writing/javanese.htm, last access October 2014.
[12] Java Characters, http://id.wikipedia.org/wiki/Aksara_Jawa, last access January 2013.
[13] Gonzalez, R.C. and Woods, R.E., Digital Image Processing, 3rd Edition, New Jersey: Prentice-Hall, Inc., 2008.
[14] Parker, J.R., Algorithms for Image Processing and Computer Vision, New York: John Wiley and Sons, Inc., 2010.
[15] Kosko, B., Bidirectional Associative Memories, IEEE Transactions on Systems, Man, and Cybernetics, vol. 18, no. 1, pp. 49-60, 1988.
[16] Singh, Y.P., Yadav, V.S., Gupta, A. and Khare, A., Bi Directional Associative Memory Neural Network Method In The Character Recognition, Journal of Theoretical and Applied Information Technology, pp. 382-386, 2009.
[17] Boyu, W., Feng, W. and Lianjie, S., A Modified Counter-Propagation Network for Process Mean Shift Identification, in Proceeding of IEEE International Conference on Systems, Man and Cybernetics, pp. 3618-3623, 2008.
[18] Fu, L.M., Neural Networks in Computer Intelligence, New York: McGraw-Hill, Inc., 1994.
[19] Dewri, R., Evolutionary Neural Networks: Design Methodologies, http://ai-depot.com/articles/evolutionary-neural-networks-design-methodologies/, last access January 2013.
[20] Negnevitsky, M., Artificial Intelligence: A Guide to Intelligent Systems, 2nd Edition, New York: Addison Wesley, 2005.
[21] Rajashekararadhya, S.V. and Ranjan, V., Efficient Zone Based Feature Extraction Algorithm for Handwritten Numeral Recognition of Four Popular South Indian Scripts, Journal of Theoretical and Applied Information Technology, vol. 4, no. 12, pp. 1171-1181, 2005.
[22] Rao, H.V. and Rao, V.B., C++ Neural Networks and Fuzzy Logic, New York: Henry Holt and Company, 1993.
[23] Fausett, L., Fundamentals of Neural Networks, New Jersey: Prentice Hall, 1994.
[24] Liu, H. and Setiono, R., Chi2: Feature Selection and Discretization of Numeric Attributes, in Proceeding of The 7th International Conference on Tools with Artificial Intelligence, pp. 388-391, 1995.