Appendix: Pruning Search Space for Weighted
First Order Horn Clause Satisfiability
Naveen Nair^{1,2,3}, Chander Jayaraman^{2},
Kiran TVS^{2}, and Ganesh Ramakrishnan^{2,1}
^{1} IITB-Monash Research Academy, Old CSE Building, IIT Bombay
^{2} Department of Computer Science and Engineering, IIT Bombay
^{3} Faculty of Information Technology, Monash University
{naveennair,ganesh}@cse.iitb.ac.in
{chanderjayaraman,t.kiran05}@gmail.com
Glossary
Atoms: Predicates in pure (unnegated) form, e.g., parent(ann,mary), female(X).
Body: The right-hand side of (:-) (read "if") is called the body of the clause.
Clause: A disjunction of literals, e.g., (parent(ann,mary) ∨ ¬ female(ann)) ≡
(parent(ann,mary) :- female(ann)).
Clause Representation: Any clause can be written in the form
comma-separated positive literals :- comma-separated atoms that occur
negated in the clause,
where (:-) is pronounced as "if". For example, in propositional logic the clause
a ∨ b ∨ ¬ c ≡ (a,b :- c).
Conjunctive Normal Form (CNF): Every formula in propositional logic or
first-order logic is equivalent to a formula that can be written as a conjunction
of disjunctions, i.e., something like (a(X) ∨ b(X)) ∧ (c(X) ∨ d(X)) ∧ · · ·.
When written in this way the formula is said to be in conjunctive normal
form, or CNF.
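As a small worked example (not from the original text), the implication
p → (q ∧ r) can be put into CNF by eliminating the implication and then
distributing ∨ over ∧:
    p → (q ∧ r) ≡ ¬p ∨ (q ∧ r) ≡ (¬p ∨ q) ∧ (¬p ∨ r).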
Constants: A constant symbol represents an individual in the world. In first-order
logic it is written in lowercase or as a numeral, e.g., jane, 1, a.
Definite Clause: A clause with exactly one positive literal, e.g., p(X) :- c(X),d(Y).
Facts: Horn clauses without a body, e.g., female(ann), daughter(mary).
Functions: Map a tuple of objects to another object, e.g., motherOf(ann), parentOf(mary).
Ground Clause: A clause obtained by replacing every variable in each predicate
of a clause with constants; grounding over all possible constants yields one
ground clause per combination of constants.
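As an illustration (a minimal Python sketch, not part of the original appendix;
the function name groundings and the string-based clause representation are
chosen only for this example), the fragment below enumerates the ground
instances of parent(X,Y) :- mother(X,Y) over a finite set of constants:

    from itertools import product

    def groundings(variables, constants):
        # Yield one substitution {variable: constant} per combination of
        # constants; applying it to every predicate in a clause produces
        # one ground clause.
        for values in product(constants, repeat=len(variables)):
            yield dict(zip(variables, values))

    # Ground parent(X,Y) :- mother(X,Y) over the constants ann and mary.
    for theta in groundings(["X", "Y"], ["ann", "mary"]):
        print("parent({X},{Y}) :- mother({X},{Y}).".format(**theta))

With two variables and two constants this prints the 2^2 = 4 ground clauses
of the original clause.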
Head: The left-hand side of (:-) (read "if") is called the head of the clause.
Herbrand Interpretation: A (Herbrand) interpretation is a truth assignment to
all the ground atoms, i.e., the atoms formed by replacing the variables in each
predicate with all possible constants (objects).
Herbrand Model: A Herbrand model is simply a Herbrand interpretation
that makes a well-formed formula true.
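To make the definition concrete, a minimal sketch (assuming, for this example
only, that a ground Horn clause is represented as a pair (head_atoms,
body_atoms) and an interpretation as the set of ground atoms it makes true):

    def satisfies(interpretation, clause):
        # The clause head1,... :- body1,... is true under the
        # interpretation iff some head atom is true or some body atom
        # is false.
        heads, bodies = clause
        return any(h in interpretation for h in heads) or \
               any(b not in interpretation for b in bodies)

    clause = (["parent(ann,mary)"], ["female(ann)"])  # parent(ann,mary) :- female(ann).
    print(satisfies({"parent(ann,mary)", "female(ann)"}, clause))  # True: a model
    print(satisfies({"female(ann)"}, clause))                      # False: not a model

An interpretation is a Herbrand model of a knowledge base when this check
succeeds for every ground clause.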
Horn Clause: A clause with at most one positive literal, e.g., (:- parent(ann,joan),
female(joan).) or (parent(ann,kan) :- female(mary).).
In a Horn clause in CNF, all the atoms preceded by a ¬ form the body
and the atom not preceded by a ¬ is the head. Here ¬A means (not) A.
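Written out, the correspondence between the CNF form and the rule form of a
Horn clause with head h and body atoms b1, ..., bn is:
    ¬b1 ∨ · · · ∨ ¬bn ∨ h ≡ (h :- b1, ..., bn).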
Knowledge Base: A Knowledge Base is a set of clauses which represents a
theory.
Literals: Predicates in either pure or negated form, e.g., parent(ann,mary),
¬parent(ann,mary).
MaxSAT: The problem of satisfying the maximum number of clauses. It is
commonly attacked by local search, which starts with random truth assignments
to all ground atoms and improves the solution step by step by flipping the
truth value of one ground atom at a time, satisfying some clauses and
unsatisfying others.
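The following Python sketch illustrates one such local search in the GSAT style
of Selman et al.; the clause representation matches the earlier sketch and the
function name maxsat_local_search is hypothetical:

    import random

    def satisfies(interp, clause):
        # clause = (head_atoms, body_atoms); true iff some head atom is
        # in interp or some body atom is not.
        heads, bodies = clause
        return any(h in interp for h in heads) or \
               any(b not in interp for b in bodies)

    def maxsat_local_search(atoms, clauses, steps=1000):
        # Start from a random assignment, then greedily flip the atom
        # whose flip satisfies the most clauses, remembering the best
        # assignment seen so far.
        interp = {a for a in atoms if random.random() < 0.5}
        best, best_score = set(interp), sum(satisfies(interp, c) for c in clauses)
        for _ in range(steps):
            flip = max(atoms,
                       key=lambda a: sum(satisfies(interp ^ {a}, c) for c in clauses))
            interp ^= {flip}  # toggle the chosen atom's truth value
            score = sum(satisfies(interp, c) for c in clauses)
            if score > best_score:
                best, best_score = set(interp), score
        return best, best_score

    # Two ground clauses, (p :- q) and (:- p); the empty interpretation
    # satisfies both.
    print(maxsat_local_search(["p", "q"], [(["p"], ["q"]), ([], ["p"])]))

Greedy flips can cycle, so practical solvers add noise or random restarts (as
in WalkSAT).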
Model: An interpretation that makes the clause true. For example, for
P :- Q, R, both ∅ and {P, Q, R} are models, while {Q, R} is not.
Statistical Relational Learning (SRL): Statistical relational learning deals
with machine learning and data mining in relational domains where observations may be missing, partially observed, and/or noisy.
Variables: Start with a capital letter, e.g., X, Abs.
Weighted MaxSAT: Given a first-order knowledge base σ with weighted clauses,
find a Herbrand interpretation that maximizes the sum of weights of satisfied
clauses (equivalently, minimizes the sum of weights of unsatisfied clauses).
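Stated as an objective (a restatement of the definition above), with clauses
Ci ∈ σ carrying weights wi and I ⊨ Ci meaning that interpretation I satisfies Ci:
    I* = argmax over I of  Σ_{Ci ∈ σ, I ⊨ Ci} wi,
which is equivalent to minimizing Σ_{Ci ∈ σ, I ⊭ Ci} wi.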