Methods of Logic Synthesis and
Their Application in Data Mining
Presentation given at KNU, Daegu (Korea), 25.11.2012
Tadeusz Łuba
Institute of Telecommunications
Faculty of Electronics
and Information Technology
Warsaw University of Technology
1
Logic synthesis vs. Data Mining
• applicability of logic synthesis algorithms in Data Mining
• data mining extends the application of LS to:
– medicine
– pharmacology
– banking
– linguistics
– telecommunication
– environmental engineering

Data Mining is the process of automatic discovery of significant and previously unknown information from large databases.
2
Data mining
…is also called knowledge discovery in databases.
It is able to:
– make a survey
– classify data
– diagnose a patient
– decide on granting a loan to a bank customer
3
Gaining knowledge from databases
At the abstract level of data mining algorithms, this means using procedures for:
– reduction of attributes,
– generalization of decision rules,
– hierarchical decision making.
These algorithms are similar to those used in logic synthesis!
4
Data mining vs. logic synthesis
– generalization of decision rules (rule induction) ↔ minimization of the Boolean function (logic minimization)
– reduction of attributes ↔ reduction of arguments
– hierarchical decision making ↔ functional decomposition
5
Data mining systems
RSES – http://logic.mimuw.edu.pl/~rses/
ROSETTA – Rough Set Toolkit for Analysis of Data: Biomedical Centre (BMC), Uppsala, Sweden. http://www.lcb.uu.se/tools/rosetta/
6
Diagnosis of breast cancer
Breast Cancer Database:
• Number of instances: 699 training cases
• Number of attributes: 10
• Classification: 2 classes

Attributes:
1. Clump Thickness
2. Uniformity of Cell Size
3. Uniformity of Cell Shape
….
9. Mitoses

Source: Dr. William H. Wolberg (physician), University of Wisconsin Hospitals, Madison, Wisconsin, USA
7
Breast Cancer database

ID        x1  x2  x3  x4  x5  x6  x7  x8  x9  x10
1000025    5   1   1   1   2   1   3   1   1   2
1002945    5   4   4   5   7  10   3   2   1   2
1015425    3   1   1   1   2   2   3   1   1   2
1016277    6   8   8   1   3   4   3   7   1   2
1017023    4   1   1   3   2   1   3   1   1   2
1017122    8  10  10   8   7  10   9   7   1   4
1018099    1   1   1   1   2  10   3   1   1   2
1018561    2   1   2   1   2   1   3   1   1   2
1033078    2   1   1   1   2   1   1   1   5   2
1033078    4   2   1   1   2   1   2   1   1   2
1035283    1   1   1   1   1   1   3   1   1   2
1036172    2   1   1   1   2   1   2   1   1   2
1041801    5   3   3   3   2   3   4   4   1   4
1043999    1   1   1   1   2   3   3   1   1   2
1044572    8   7   5  10   7   9   5   5   4   4
8
RULE_SET breast_cancer
RULES 35
(x9=1)&(x8=1)&(x2=1)&(x6=1)=>(x10=2)
(x9=1)&(x2=1)&(x3=1)&(x6=1)=>(x10=2)
(x9=1)&(x8=1)&(x4=1)&(x3=1)=>(x10=2)
(x9=1)&(x4=1)&(x6=1)&(x5=2)=>(x10=2)
…………………..
REDUCTS (27)
{ x1, x2, x3, x4, x6 }
{ x1, x2, x3, x5, x6 }
{ x2, x3, x4, x6, x7 }
{ x1, x3, x4, x6, x7 }
{ x1, x2, x4, x6, x7 }
…………….
{ x3, x4, x5, x6, x7, x8 }
{ x3, x4, x6, x7, x8, x9 }
{ x4, x5, x6, x7, x8, x9 }
(x9=1)&(x6=10)&(x1=10)=>(x10=4)
(x9=1)&(x6=10)&(x5=4)=>(x10=4)
(x9=1)&(x6=10)&(x1=8)=>(x10=4)
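To make the reduct idea concrete, here is a minimal sketch (not the RSES implementation; all names here are ours) that checks, on the 15 sample rows above, whether an attribute subset discerns every pair of instances with different decisions — the defining property of such attribute sets:

```python
# A minimal sketch: a set of attributes "discerns" the table if every pair
# of rows with different decisions (x10) differs on at least one chosen
# attribute. ROWS holds (x1..x9, x10) for the 15 sample instances above.

ROWS = [
    (5,1,1,1,2,1,3,1,1, 2), (5,4,4,5,7,10,3,2,1, 2), (3,1,1,1,2,2,3,1,1, 2),
    (6,8,8,1,3,4,3,7,1, 2), (4,1,1,3,2,1,3,1,1, 2), (8,10,10,8,7,10,9,7,1, 4),
    (1,1,1,1,2,10,3,1,1, 2), (2,1,2,1,2,1,3,1,1, 2), (2,1,1,1,2,1,1,1,5, 2),
    (4,2,1,1,2,1,2,1,1, 2), (1,1,1,1,1,1,3,1,1, 2), (2,1,1,1,2,1,2,1,1, 2),
    (5,3,3,3,2,3,4,4,1, 4), (1,1,1,1,2,3,3,1,1, 2), (8,7,5,10,7,9,5,5,4, 4),
]

def discerns(attrs):
    """True if the attributes (1-based x1..x9) separate all decision-different pairs."""
    for i in range(len(ROWS)):
        for j in range(i + 1, len(ROWS)):
            a, b = ROWS[i], ROWS[j]
            if a[9] != b[9] and all(a[k - 1] == b[k - 1] for k in attrs):
                return False   # an undiscerned pair found
    return True

print(discerns({1, 2, 3, 4, 6}))   # True: a 5-attribute reduct from the slide
print(discerns({9}))               # False: x9 alone does not separate the classes
```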
Increasing requirements
We are overwhelmed
with data!
References: [1]
10
UC Irvine Machine Learning Repository
Are existing methods and algorithms for data mining
sufficiently efficient?
Breast Cancer Database – 10 attr
Audiology Database – 71 attr
Dermatology Database – 34 attr
Why does this happen?
How can these algorithms be improved?
11
Classic method
Discernibility matrix (DM)
Discernibility function (DF) — a conjunction of clauses, where each clause is a disjunction of attributes. References: [9]
The key issue is to transform the DF from CNF to DNF — an NP-HARD problem.
Every monomial of the resulting DNF corresponds to a reduct.
12
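For illustration, a small sketch of this classic route (hypothetical helper names, not from RSES or ROSETTA): the discernibility function is built as a CNF over attribute indices and multiplied out into DNF with absorption; each surviving monomial is a reduct. The demo expands the CNF used on the next slides:

```python
from itertools import combinations

def discernibility_cnf(rows, decisions):
    """One clause (set of differing attributes) per pair with different decisions."""
    return [frozenset(i for i, (x, y) in enumerate(zip(u, v)) if x != y)
            for (u, du), (v, dv) in combinations(zip(rows, decisions), 2) if du != dv]

def absorb(cubes):
    """Absorption law X + XY = X: keep only the minimal cubes."""
    cubes = set(cubes)
    return [c for c in cubes if not any(d < c for d in cubes)]

def cnf_to_dnf(clauses):
    """Multiply the clauses out -- exponential in the worst case (the NP-hard step)."""
    dnf = [frozenset()]
    for clause in clauses:
        dnf = absorb(cube | {lit} for cube in dnf for lit in clause)
    return dnf

# The discernibility function used on the next slides:
# F = (x1 + x2 + x4)(x3 + x4)(x1 + x2)(x1 + x4)
F = [frozenset({1, 2, 4}), frozenset({3, 4}), frozenset({1, 2}), frozenset({1, 4})]
print(sorted(sorted(c) for c in cnf_to_dnf(F)))   # [[1, 3], [1, 4], [2, 4]]
```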
The method can be significantly improved…
…by using a typical logic synthesis procedure: Boolean function complementation.
Instead of transforming CNF to DNF:
– we represent the CNF as a binary matrix M,
– M is treated as a Boolean function F,
– we compute the complement of F.
F is always a unate function!
13
Using the Complement Theorem…

Binary matrix M (one row per clause of the CNF):
     x1 x2 x3 x4
M =   1  1  0  1
      0  0  1  1
      1  1  0  0
      1  0  0  1

In Espresso format:
.i 4
.o 1
.p 4
11-1
--11
11--
1--1
.end

M treated as a Boolean function (one product term per row):
fM = x1x2x4 + x3x4 + x1x2 + x1x4 = x1x4 + x3x4 + x1x2

Its complement (read off a Karnaugh map of fM in the original slide):
fM' = x1'x3' + x2'x4' + x1'x4'

Discernibility function, transformed directly:
F = (x1 + x2 + x4)(x3 + x4)(x1 + x2)(x1 + x4) =
  = (x1 + x2)(x1 + x4)(x3 + x4) = (x1 + x2x4)(x3 + x4) = x1x3 + x2x4 + x1x4

The same result!
14
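A quick brute-force check of the slide's claim (just a sketch, not the complementation algorithm itself):

```python
# Verify exhaustively that fM' = x1'x3' + x2'x4' + x1'x4' is the complement
# of fM = x1x4 + x3x4 + x1x2: the two functions must disagree everywhere.

from itertools import product

for x1, x2, x3, x4 in product((0, 1), repeat=4):
    fM   = (x1 and x4) or (x3 and x4) or (x1 and x2)
    comp = (not x1 and not x3) or (not x2 and not x4) or (not x1 and not x4)
    assert bool(fM) != bool(comp)

print("complement verified on all 16 input combinations")
```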
The key element
Fast Complementation Algorithm

Shannon expansion and its complement:
F = xj·Fxj + xj'·Fxj'
F' = xj·(Fxj)' + xj'·(Fxj')'

Recursive Complementation Theorem (for unate F):
F' = xj'·(Fxj')' + (Fxj)'

where Fxj is the cofactor of F with respect to variable xj.

The problem of complementing function F is transformed into the problem of finding the complements of two simpler cofactors.
15
Unate Complementation
The entire process reduces to three simple calculations:
– the choice of the splitting variable xj
– calculation of the cofactors
– testing the rules for termination

(Diagram: matrix M is split on xj into Cofactor 1 and Cofactor 0; each cofactor is complemented, recursively if needed, and the partial results are merged.)

Merging:
F' = xj'·F0' + F1'
(F1, F0 — the cofactors with respect to xj and xj')
16
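Below is a compact Python sketch of this unate complementation recursion (illustrative and simplified from the algorithm of Brayton et al. [10]; names are ours). Cubes of the positive-unate F are sets of variable indices; since every cube of the complement contains only complemented variables, complement cubes are also stored as plain sets:

```python
def complement(F):
    """F: iterable of frozensets = cubes of a positive-unate SOP.
    Returns the cubes of F' (each containing only complemented variables)."""
    F = list(F)
    if not F:                          # F == 0  =>  F' == 1
        return [frozenset()]
    if frozenset() in F:               # F contains a tautology cube  =>  F' == 0
        return []
    if len(F) == 1:                    # De Morgan on a single cube
        return [frozenset({v}) for v in next(iter(F))]
    # splitting rule: the most frequent variable
    xj = max({v for c in F for v in c}, key=lambda v: sum(v in c for c in F))
    F1 = [c - {xj} for c in F]             # cofactor w.r.t. xj
    F0 = [c for c in F if xj not in c]     # cofactor w.r.t. xj'
    # merging: F' = xj'*F0' + F1', followed by absorption
    merged = {c | {xj} for c in complement(F0)} | set(complement(F1))
    return [c for c in merged if not any(d < c for d in merged)]

# Slide 14 example: fM = x1x2x4 + x3x4 + x1x2 + x1x4
fM = [frozenset({1, 2, 4}), frozenset({3, 4}), frozenset({1, 2}), frozenset({1, 4})]
print(sorted(sorted(c) for c in complement(fM)))
# -> [[1, 3], [1, 4], [2, 4]], i.e. fM' = x1'x3' + x1'x4' + x2'x4'
```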
An Example
F = x4 x6 (x1 + x2)(x3 + x5 + x7)(x2 + x3)(x2 + x7)

Clause matrix (columns x1 … x7):
0 0 0 1 0 0 0   (x4)
0 0 0 0 0 1 0   (x6)
1 1 0 0 0 0 0   (x1 + x2)
0 0 1 0 1 0 1   (x3 + x5 + x7)
0 1 1 0 0 0 0   (x2 + x3)
0 1 0 0 0 0 1   (x2 + x7)

In Espresso format:
.i 7
.o 1
.p 6
---1---
-----1-
11-----
--1-1-1
-11----
-1----1
.end
17
An Example…

The single-literal terms x4 and x6 contribute x4'x6' to every cube of the complement. The remaining matrix is complemented recursively, splitting on x2, x1 and x7; complementing the cofactors and merging the partial results gives:

fM' = x4'x6'(x2'x3' + x2'x5' + x2'x7' + x1'x3'x7')

Each monomial of the complement corresponds to a reduct:

Reducts:
{x2, x3, x4, x6}
{x2, x4, x5, x6}
{x2, x4, x6, x7}
{x1, x3, x4, x6, x7}
18
Verification
Calculating reducts using the standard method:
(x1 + x2)(x3 + x5 + x7)(x2 + x3)(x2 + x7) =
= (x2 + x1)(x2 + x3)(x2 + x7)(x3 + x5 + x7) =
= (x2 + x1x3x7)(x3 + x5 + x7) =
= x2x3 + x2x5 + x2x7 + x1x3x7

Extending each monomial by {x4, x6}:
{x2,x3,x4,x6} {x2,x4,x5,x6} {x2,x4,x6,x7} {x1,x3,x4,x6,x7}
The same set of reducts!
19
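Reusing the complement() sketch from slide 16 on this example reproduces the same four reducts (the single-literal cubes x4 and x6 are handled by the same recursion):

```python
# fM = x4 + x6 + x1x2 + x3x5x7 + x2x3 + x2x7, one cube per matrix row
fM = [frozenset({4}), frozenset({6}), frozenset({1, 2}), frozenset({3, 5, 7}),
      frozenset({2, 3}), frozenset({2, 7})]
print(sorted(sorted(c) for c in complement(fM)))
# -> [[1, 3, 4, 6, 7], [2, 3, 4, 6], [2, 4, 5, 6], [2, 4, 6, 7]]
```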
Boolean function KAZ

.type fr
.i 21
.o 1
.p 31
100110010110011111101 1
111011111011110111100 1
001010101000111100000 1
001001101100110110001 1
100110010011011001101 1
100101100100110110011 1
001100100111010011011 1
001101100011011011001 1
110110010011001001101 1
100110110011010010011 1
110011011011010001100 1
010001010000001100111 0
100110101011111110100 0
111001111011110011000 0
101101011100010111100 0
110110000001010100000 0
110110110111100010111 0
110000100011110010001 0
001001000101111101101 0
100100011111100110110 0
100011000110011011110 0
110101000110101100001 0
110110001101101100111 0
010000111001000000001 0
001001100101111110000 0
100100111111001110010 0
000010001110001101101 0
101000010100001110000 0
101000110101010011111 0
101010000001100011001 0
011100111110111101111 0
.end

All solutions: 5574 reducts, 35 of which have the smallest number of arguments (5).

KAZ reduced to the 5 arguments of one such solution (19 distinct rows):
01010 1
10110 1
00100 1
01001 1
01000 1
11010 1
10011 0
01110 0
10100 0
11000 0
11011 0
10000 0
00010 0
01111 0
00011 0
11111 0
00000 0
01101 0
00110 0

Computation time:
RSES = 70 min
Proposed method = 234 ms
18000 times faster!
20
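Putting the pieces together, a sketch of the whole pipeline for binary tables in this format (illustrative names; it reuses complement() from slide 16). On the full 31-row KAZ table it should reproduce the counts above; the two-row demo merely shows the mechanics:

```python
from itertools import combinations

def reducts_of_table(rows):
    """rows: list of (input_string, output_char) pairs from a .type fr table."""
    # one discernibility clause per pair of rows with different outputs;
    # the clause matrix is exactly the cube matrix of the unate function fM
    clauses = {frozenset(i for i, (a, b) in enumerate(zip(u, v)) if a != b)
               for (u, du), (v, dv) in combinations(rows, 2) if du != dv}
    return complement(list(clauses))

# two rows taken from the KAZ table above, just to show the mechanics
rows = [("100110010110011111101", "1"), ("010001010000001100111", "0")]
reducts = reducts_of_table(rows)
smallest = min(len(r) for r in reducts)
print(len(reducts), sum(len(r) == smallest for r in reducts))
```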
Conclusion
The new method reduces the computation time by several orders of magnitude.
How will this acceleration affect the speed of calculations for typical databases?
21
Experimental results

database                   attr.  inst.  RSES/ROSETTA              compl. method  reducts   compl. method (least)  reducts (least)
house                      17     232    1s                        1s 474ms       507       486ms                  27 (6 attr)
breast-cancer-wisconsin    10     699    2s                        823ms          27        826ms                  24 (5 attr)
KAZ                        22     31     70min                     234ms          5574      15ms                   35 (5 attr)
trains                     33     10     out of memory (5h 38min)  6ms            689       1ms                    1 (1 attr)
agaricus-lepiota-mushroom  23     8124   29min                     187ms          4         171ms                  1 (8 attr)
urology                    36     500    out of memory (12h)       42s 741ms      23437     4m 47s                 3 (4 attr)
audiology                  71     200    out of memory (1h 17min)  14s 508ms      37367     4m 51s                 1 (2 attr)
dermatology                35     366    out of memory (3h 27min)  3m 32s         143093    2s 499ms               1 (1 attr)
lung-cancer                57     32     out of memory (5h 20min)  111h 57m       3604887   920ms                  613 (4 attr)

The absolute triumph of the complementation method!
22
Further possibilities…
…of applying logic synthesis methods to problems of Data Mining:
• generalization of decision rules ↔ minimization of the Boolean function
• hierarchical decision making ↔ functional decomposition
23
RSES vs Espresso
ESPRESSO
.i 7
.o 1
.type fr
.p 9
1000101 0
1011110 0
1101110 0
1110111 0
0100101 1
1000110 1
1010000 1
1010110 1
1110101 1
.e
f = x4'x7' + x2x6'
RSES
TABLE extlbis
ATTRIBUTES 8
x1 numeric 0
x2 numeric 0
x3 numeric 0
x4 numeric 0
x5 numeric 0
x6 numeric 0
x7 numeric 0
x8 numeric 0
OBJECTS 9
10001010
10111100
11011100
11101110
01001011
10001101
10100001
10101101
11101011
(x1=1)&(x5=1)&(x6=1)&(x2=1)=>(x8=0)
(x1=1)&(x2=0)&(x5=1)&(x3=0)&(x4=0)&(x6=0)=>(x8=0)
(x4=0)&(x1=1)&(x2=0)&(x7=0)=>(x8=1)
(x2=1)&(x4=0)&(x5=1)&(x6=0)=>(x8=1)
f = x1x2'x4'x7' + x2x4'x5x6'
24
Hierarchical decision making
Is it possible to use decomposition to solve difficult tasks of data mining?

(Diagram: the attributes are partitioned into two subsets, A and B. Decomposition replaces the decision table with two smaller tables: DT(G), which maps the attributes B to an intermediate decision g = G(B), and DT(H), which maps the attributes A together with g to the final decision.)

F = H(A, G(B))

Condition: G ≥ P(B) and P(A)·G ≤ PD (the partition induced by the final decision).
25
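As a toy sketch of this scheme (illustrative names, not the author's tool): G numbers the distinct patterns of the B-attributes, and H maps the A-attributes together with that intermediate decision to the final one. A real decomposition additionally merges compatible B-patterns to minimize the number of intermediate codes, which is where the compression on the next slide comes from:

```python
# A toy sketch of F = H(A, G(B)); merging of compatible B-patterns is omitted.

def decompose(table, split):
    """table: list of (attribute_tuple, decision); B = attributes from 'split' on."""
    G, H = {}, {}
    for attrs, decision in table:
        a, b = attrs[:split], attrs[split:]
        g = G.setdefault(b, len(G))   # intermediate decision g = G(B)
        H[a + (g,)] = decision        # final decision = H(A, g)
    return G, H

# tiny decision table: two A-attributes followed by two B-attributes
table = [((0, 0, 0, 1), 0), ((0, 1, 0, 1), 1), ((1, 0, 1, 0), 1), ((1, 1, 1, 0), 0)]
G, H = decompose(table, split=2)
for attrs, decision in table:          # check F(x) == H(A, G(B)) on every row
    assert H[attrs[:2] + (G[attrs[2:]],)] == decision
```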
Data compression

The data set HOUSE (1984 United States Congressional Voting Records Database):

democrat n y y n y y n n n n n n y y y y
republican n y n y y y n n n n n y y y n y
democrat y y y n n n y y y n y n n n y y
democrat y y y n n n y y y n n n n n y y
democrat y n y n n n y y y y n n n n y y
democrat y n y n n n y y y n y n n n y y
democrat y y y n n n y y y n y n n n y y
republican y n n y y n y y y n n y y y n y
………………………..........
democrat y y y n n n y y y n y n n n y y
republican y y n y y y n n n n n n y y n y
republican n y n y y y n n n y n y y y n n
democrat y n y n n n y y y y y n y n y y
democrat y n y n n n y y y n n n n n n y
democrat y n y n n n y y y n n n n n y y

Decomposition into tables G and H gives a 68% space reduction.
26
Summary
• Typical logic synthesis algorithms and methods are effectively applicable to seemingly different, modern problems of data mining.
• It is also important to study the theoretical foundations of new concepts in data mining, e.g. functional decomposition.
• Meeting these challenges requires the cooperation of specialists from different fields of knowledge.
27
References
1. Abdullah, S., Golafshan, L., Mohd Zakree Ahmad Nazri: Re-heat simulated annealing algorithm for rough set attribute reduction. International Journal of the Physical Sciences 6(8), 2083–2089 (2011)
2. Agrawal, R., Mannila, H., Srikant, R., Toivonen, H., Verkamo, A.I.: Fast Discovery of Association Rules. In: Advances in KDD, pp. 307–328. AAAI, Menlo Park (1996)
3. An, A., Shan, N., Chan, C., Cercone, N., Ziarko, W.: Discovering rules for water demand prediction: an enhanced rough-set approach. Engineering Applications of Artificial Intelligence 9, 645–653 (1996)
4. Bazan, J., Nguyen, H.S., Nguyen, S.H., Synak, P., Wróblewski, J.: Rough set algorithms in classification problem. In: Rough Set Methods and Applications: New Developments in Knowledge Discovery in Information Systems, vol. 56, pp. 49–88. Physica-Verlag, Heidelberg (2000)
5. Bazan, J., Skowron, A., Synak, P.: Dynamic Reducts as a Tool for Extracting Laws from Decision Tables. In: Raś, Z.W., Zemankova, M. (eds.) ISMIS 1994. LNCS (LNAI), vol. 869, pp. 346–355. Springer, Heidelberg (1994)
6. Bazan, J.G., Szczuka, M.S.: RSES and RSESlib – A Collection of Tools for Rough Set Computations. In: Rough Sets and Current Trends in Computing, pp. 106–113 (2000)
7. Bazan, J.G., Nguyen, H.S., Nguyen, S.H., Synak, P., Wroblewski, J.: Rough set algorithms in classification problem. In: Polkowski, L., Tsumoto, S., Lin, T.Y. (eds.) Rough Set Methods and Applications, pp. 49–88 (2000)
8. Beynon, M.: Reducts within the variable precision rough sets model: a further investigation. European Journal of Operational Research 134, 592–605 (2001)
9. Borowik, G., Łuba, T., Zydek, D.: Features Reduction Using Logic Minimization Techniques. International Journal of Electronics and Telecommunications 58(1), 71–76 (2012)
10. Brayton, R.K., Hachtel, G.D., McMullen, C.T., Sangiovanni-Vincentelli, A.: Logic Minimization Algorithms for VLSI Synthesis. Kluwer Academic Publishers (1984)
11. Brzozowski, J.A., Łuba, T.: Decomposition of Boolean functions specified by cubes. Journal of Multi-Valued Logic & Soft Computing 9, 377–417 (2003)
12. Dash, R., Dash, R., Mishra, D.: A hybridized rough-PCA approach of attribute reduction for high dimensional data set. European Journal of Scientific Research 44(1), 29–38 (2010)
13. Feixiang, Z., Yingjun, Z., Li, Z.: An efficient attribute reduction in decision information systems. In: International Conference on Computer Science and Software Engineering, pp. 466–469. Wuhan, Hubei (2008), DOI: 10.1109/CSSE.2008.1090
14. Grzenda, M.: Prediction-Oriented Dimensionality Reduction of Industrial Data Sets. In: Mehrotra, K.G., Mohan, C.K., Oh, J.C., Varshney, P.K., Ali, M. (eds.) Modern Approaches in Applied Intelligence, LNAI 6703, pp. 232–241 (2011)
15. Hedar, A.R., Wang, J., Fukushima, M.: Tabu search for attribute reduction in rough set theory. Journal of Soft Computing – A Fusion of Foundations, Methodologies and Applications 12(9), 909–918 (2008), DOI: 10.1007/s00500-007-0260-1
16. Herbert, J.P., Yao, J.T.: Rough set model selection for practical decision making. In: Proceedings of the 4th International Conference on Fuzzy Systems and Knowledge Discovery, pp. 203–207 (2007)
17. Huhtala, Y., Karkkainen, J., Porkka, P., Toivonen, H.: TANE: An Efficient Algorithm for Discovering Functional and Approximate Dependencies. The Computer Journal 42(2), 100–111 (1999)
18. Inuiguchi, M.: Several approaches to attribute reduction in variable precision rough set model. In: Modeling Decisions for Artificial Intelligence, pp. 215–226 (2005)
19. Jelonek, J., Krawiec, K., Stefanowski, J.: Comparative study of feature subset selection techniques for machine learning tasks. In: Proceedings of IIS, Malbork, Poland, pp. 68–77 (1998)
20. Jensen, R., Shen, Q.: Semantics-preserving dimensionality reduction: Rough and fuzzy rough-based approaches. IEEE Transactions on Knowledge and Data Engineering 16, 1457–1471 (2004)
21. Jing, S., She, K.: Heterogeneous attribute reduction in noisy system based on a generalized neighborhood rough sets model. World Academy of Science, Engineering and Technology 75, 1067–1072 (2011)
22. Kalyani, P., Karnan, M.: A new implementation of attribute reduction using quick relative reduct algorithm. International Journal of Internet Computing 1(1), 99–102 (2011)
23. Katzberg, J.D., Ziarko, W.: Variable precision rough sets with asymmetric bounds. In: Ziarko, W. (ed.) Rough Sets, Fuzzy Sets and Knowledge Discovery, pp. 167–177. Springer, London (1994)
24. Kryszkiewicz, M., Cichoń, K.: Towards scalable algorithms for discovering rough set reducts. In: Peters, J., Skowron, A., Grzymała-Busse, J., Kostek, B., Świniarski, R., Szczuka, M. (eds.) Transactions on Rough Sets I, LNCS, vol. 3100, pp. 120–143. Springer, Heidelberg (2004), DOI: 10.1007/978-3-540-27794-1_5
40. Pawlak, Z., Skowron, A.: Rudiments of rough sets. Information Sciences 177, 3–27 (2007)
41. Pei, X., Wang, Y.: An approximate approach to attribute reduction. International Journal of Information Technology 12(4), 128–135 (2006)
42. Rawski, M., Borowik, G., Łuba, T., Tomaszewicz, P., Falkowski, B.J.: Logic synthesis strategy for FPGAs with embedded memory blocks. Electrical Review 86(11a), 94–101 (2010)
43. Shan, N., Ziarko, W., Hamilton, H.J., Cercone, N.: Discovering Classification Knowledge in Databases Using Rough Sets. In: Proceedings of KDD, pp. 271–274 (1996)
44. Skowron, A.: Boolean Reasoning for Decision Rules Generation. In: Komorowski, J., Raś, Z.W. (eds.) ISMIS 1993. LNCS, vol. 689, pp. 295–305. Springer, Heidelberg (1993)
45. Skowron, A., Rauszer, C.: The discernibility matrices and functions in information systems. In: Słowiński, R. (ed.) Intelligent Decision Support – Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers (1992)
46. Slezak, D.: Approximate Reducts in Decision Tables. In: Proceedings of IPMU, Granada, Spain, vol. 3, pp. 1159–1164 (1996)
47. Slezak, D.: Searching for Frequential Reducts in Decision Tables with Uncertain Objects. In: Polkowski, L., Skowron, A. (eds.) RSCTC 1998. LNCS, vol. 1424, pp. 52–59. Springer, Heidelberg (1998)
48. Slezak, D.: Association Reducts: Complexity and Heuristics. In: Greco, S., Hata, Y., Hirano, S., Inuiguchi, M., Miyamoto, S., Nguyen, H.S., Słowiński, R. (eds.) RSCTC 2006. LNCS, vol. 4259, pp. 157–164. Springer, Heidelberg (2006)
49. Slezak, D., Ziarko, W.: Attribute reduction in the Bayesian version of variable precision rough set model. Electronic Notes in Theoretical Computer Science 82, 263–273 (2003)
50. Słowiński, R. (ed.): Intelligent Decision Support, Handbook of Applications and Advances of the Rough Sets Theory, vol. 11. Kluwer Academic Publishers, Dordrecht (1992)
51. Słowiński, K., Sharif, E.: Rough Sets Analysis of Experience in Surgical Practice. International Workshop: Rough Sets: State of the Art and Perspectives, Poznań-Kiekrz (1992)
52. Stepaniuk, J.: Approximation Spaces, Reducts and Representatives. In: Rough Sets in Data Mining and Knowledge Discovery. Springer, Berlin (1998)
53. Swiniarski, R.W.: Rough sets methods in feature reduction and classification. International Journal of Applied Mathematics and Computer Science 11, 565–582 (2001)
54. Swiniarski, R.W., Skowron, A.: Rough set methods in feature selection and recognition. Pattern Recognition Letters 24, 833–849 (2003)
55. Wang, C., Ou, F.: An attribute reduction algorithm based on conditional entropy and frequency of attributes. In: Proceedings of the 2008 International Conference on Intelligent Computation Technology and Automation, ICICTA '08, vol. 1, pp. 752–756. IEEE Computer Society, Washington, DC, USA (2008), DOI: 10.1109/ICICTA.2008.95
56. Wang, G., Yu, H., Yang, D.: Decision table reduction based on conditional information entropy. Chinese Journal of Computers 25, 759–766 (2002)
57. Wang, G.Y., Zhao, J., Wu, J.: A comparative study of algebra viewpoint and information viewpoint in attribute reduction. Fundamenta Informaticae 68, 1–13 (2005)
58. Wróblewski, J.: Finding Minimal Reducts Using Genetic Algorithms. In: Proceedings of JCIS, Wrightsville Beach, NC, pp. 186–189 (1995)
59. Wu, W.Z., Zhang, M., Li, H.Z., Mi, J.S.: Knowledge reduction in random information systems via Dempster-Shafer theory of evidence. Information Sciences 174, 143–164 (2005)
60. Yao, Y., Zhao, Y.: Attribute reduction in decision-theoretic rough set models. Information Sciences 178(17), 3356–3373 (2008), DOI: 10.1016/j.ins.2008.05.010
61. Zhang, W.X., Mi, J.S., Wu, W.Z.: Knowledge reduction in inconsistent information systems. Chinese Journal of Computers 1, 12–18 (2003)
62. Zhao, Y., Luo, F., Wong, S.K.M., Yao, Y.Y.: A general definition of an attribute reduction. In: Proceedings of the Second Rough Sets and Knowledge Technology, pp. 101–108 (2007)
63. ROSE2 – Rough Sets Data Explorer, http://idss.cs.put.poznan.pl/site/rose.html
64. ROSETTA – A Rough Set Toolkit for Analysis of Data, http://www.lcb.uu.se/tools/rosetta/
65. RSES – Rough Set Exploration System, http://logic.mimuw.edu.pl/~rses/
66. Tadeusiewicz, R.: Rola technologii cyfrowych w komunikacji społecznej oraz w kulturze i edukacji [The role of digital technologies in social communication, culture and education]. PPT presentation.