
Record No.: 6668
Status: NC094FJU00392004
TA verification: (blank)
Call number: (blank)
School: Fu Jen Catholic University (輔仁大學)
Department: Department of Computer Science and Information Engineering (資訊工程學系)
Former department name: (blank)
Student ID: 493516046
Graduate student (Chinese): 蔡明倫
Graduate student (English): Ming Lan Tsai
Thesis title (Chinese): 基於適應性學習之目標演化於智慧型代理人
Thesis title (English): Goal Evolution based on Adaptive Q-learning for Intelligent Agent
Other title: (blank)
Advisors (Chinese): 許見章博士、郭忠義博士
Advisors (English): Dr. Chien-Chang Hsu, Dr. Jong-Yih Kuo
On-campus full-text release date: (blank)
Off-campus full-text release date: (blank)
Reason full text not released: (blank)
Electronic full text sent to National Central Library: (blank)
National Central Library full-text release date: (blank)
File description: electronic full text
Degree: Master's
Graduation academic year: 94 (ROC calendar, i.e. the 2005–2006 academic year)
Publication year: (blank)
Language: Chinese
Keywords (Chinese): 智慧型代理人、適應性 Q 學習、BDI 模型
Keywords (English): Intelligent Agent, adaptive Q-learning, BDI model
Abstract (Chinese, translated):
This thesis proposes an adaptive learning method to achieve goal evolution for intelligent agents. When an agent is created, it already possesses some goals and a small number of capabilities, and each capability may consist of one or more actions. Through these capabilities the agent takes actions to satisfy its goals, striving to adapt to the few capabilities it has. A reinforcement learning method is used to evolve the agent's goals, and An Abstract Agent Programming Language (3APL) is introduced to construct the agent's mental state. We propose using reinforcement learning to refine the top-level goals, and we use a robot soccer game to illustrate our approach. Moreover, we show how goals evolved through reinforcement learning refine the mental state of the soccer players.
Abstract (English):
This paper presents an adaptive approach to address the goal evolution of intelligent agents. When agents are initially created, they have some goals and few capabilities. Each capability is composed of one or more actions. These capabilities can perform actions to satisfy the agents' goals, and the agents strive to adapt themselves to their limited capabilities. A reinforcement learning method is used to evolve the agent's goals. An Abstract Agent Programming Language (3APL) is introduced to build the agent's mental states. We propose reinforcement learning to refine the top-level goals. A robot soccer game is used to explain our approach. Moreover, we show how a refinement of the soccer player's mental state is derived from the evolving goals by reinforcement learning.
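The record does not include the thesis's actual algorithm, but the abstract's core mechanism is an adaptive Q-learning update over an agent's goal-directed actions. A minimal, generic sketch of tabular Q-learning with a visit-count-based (adaptive) step size is shown below; the states, actions, and rewards are hypothetical placeholders, not taken from the thesis.

```python
import random

class QLearner:
    """Generic tabular Q-learning sketch with an adaptive learning rate.

    Illustrative only: the thesis's own state/action/reward design for
    soccer agents is not part of this record.
    """

    def __init__(self, actions, gamma=0.9):
        self.q = {}        # (state, action) -> estimated value
        self.visits = {}   # (state, action) -> number of updates so far
        self.actions = actions
        self.gamma = gamma

    def alpha(self, state, action):
        # Adaptive step size: decays as a state-action pair is revisited.
        return 1.0 / (1 + self.visits.get((state, action), 0))

    def choose(self, state, epsilon=0.1):
        # Epsilon-greedy action selection over the known actions.
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup with the adaptive step size above.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        step = self.alpha(state, action)
        self.q[(state, action)] = old + step * (reward + self.gamma * best_next - old)
        self.visits[(state, action)] = self.visits.get((state, action), 0) + 1
```

With this decaying step size, early experiences move the estimate quickly while repeated visits average in new rewards more conservatively, which is one simple reading of "adaptive" in the keywords.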
Table of Contents:
Chapter 1  Introduction
  1.1 Motivation
  1.2 Objective
  1.3 Organization
Chapter 2  Related Work
  2.1 Robot Soccer Learning
  2.2 Robot Soccer Strategy
Chapter 3  Agent Evolution
  3.1 Agent Evolution Model
    3.1.1 Agent Domain Knowledge
    3.1.2 Agent Rule Bases
  3.2 Agent Evolution Process
Chapter 4  Case Study
  4.1 System Design
    4.1.1 System Environment
      4.1.1.1 Hardware Specification and Configuration
      4.1.1.2 Software Specification and Configuration
      4.1.1.3 Operational Environment
  4.2 Case Study
Chapter 5  Experiment Results
  5.1 Experiment
  5.2 Discussion
Chapter 6  Conclusion
References:
[1] A. Bonarini, "Evolutionary learning, reinforcement learning, and fuzzy rules for knowledge acquisition in agent-based systems", Proceedings of the IEEE, Vol. 89, Issue 9, 2001, pp. 1334-1346.
[2] B. van Riemsdijk, M. Dastani, F. Dignum, J.-J. Ch. Meyer, "Dynamics of Declarative Goals in Agent Programming", Proceedings of Declarative Agent Languages and Technologies (DALT), New York, 2004.
[3] C. Castillo, M. Lurgi, I. Martinez, "Chimps: an evolutionary reinforcement learning approach for soccer agents", IEEE International Conference on Systems, Man and Cybernetics, Vol. 1, 2003, pp. 60-65.
[4] E. Alonso, M. D'Inverno, D. Kudenko, M. Luck, J. Noble, "Learning in Multi-Agent Systems", The Knowledge Engineering Review, Vol. 16, No. 3, 2001, pp. 277-284.
[5] M. Dastani, F. Dignum, J.-J. Meyer, "Autonomy and Agent Deliberation", Proceedings of the First International Workshop on Computational Autonomy - Potential, Risks, Solutions, Melbourne, 2003.
[6] M. Dastani, B. van Riemsdijk, F. Dignum, J.-J. Meyer, "A Programming Language for Cognitive Agents: Goal Directed 3APL", Proceedings of the First Workshop on Programming Multiagent Systems: Languages, Frameworks, Techniques, and Tools, Melbourne, 2003.
[7] M. Dastani and L. van der Torre, "Programming BOID Agents: a deliberation language for conflicts between mental attitudes and plans", in N. R. Jennings, C. Sierra, L. Sonenberg, M. Tambe (eds.), Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS'04), ACM, 2004, pp. 706-713.
[8] M. Dastani, J. Hulstijn, F. Dignum, J.-J. Ch. Meyer, "Issues in Multiagent System Development", in N. R. Jennings, C. Sierra, L. Sonenberg, M. Tambe (eds.), Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS'04), ACM, 2004, pp. 922-929.
[9] M. D'Inverno, K. Hindriks, M. Luck, "A Formal Architecture for the 3APL Agent Programming Language", in ZB2000, Lecture Notes in Computer Science, Springer, 2000, pp. 168-187.
[10] E. Gelenbe, E. Seref, Z. Xu, "Simulation with learning agents", Proceedings of the IEEE, Vol. 89, Issue 2, 2001, pp. 148-157.
[11] J. Hulstijn, F. d. Boer, M. Dastani, F. Dignum, M. Kroese, J.-J. Meyer, "Agent-based Programming in 3APL", presented at the ICS Research Day, Conferentiecentrum Woudschoten, The Netherlands, 2003.
[12] K. S. Hwang, S. W. Tan, C. C. Chen, "Cooperative strategy based on adaptive Q-learning for robot soccer systems", IEEE Transactions on Fuzzy Systems, Vol. 12, Issue 4, 2004, pp. 569-576.
[13] S. Kinoshita, Y. Yamamoto, "Team 11monkeys Description", in Coradeschi et al. (eds.), RoboCup-99: Team Descriptions, 1999, pp. 154-156.
[14] J. Y. Kuo, "A document-driven agent-based approach for business processes management", Information and Software Technology, Vol. 46, 2004, pp. 373-382.
[15] J. Y. Kuo, S. J. Lee, C. L. Wu, N. L. Hsueh, J. Lee, "Evolutionary Agents for Intelligent Transport Systems", International Journal of Fuzzy Systems, Vol. 7, No. 2, 2005, pp. 85-93.
[16] Y. Maeda, "Modified Q-learning method with fuzzy state division and adaptive rewards", Proceedings of the IEEE World Congress on Computational Intelligence, FUZZ-IEEE 2002, Vol. 2, pp. 1556-1561.
[17] T. Nakashima, M. Takatani, M. Udo, H. Ishibuchi, "An evolutionary approach for strategy learning in RoboCup soccer systems", IEEE International Conference on Systems, Man and Cybernetics, Vol. 2, 2004, pp. 2023-2028.
[18] S. Shen, G. M. P. O'Hare, R. Collier, "Decision-making of BDI agents, a fuzzy approach", The Fourth International Conference on Computer and Information Technology, 2004, pp. 1022-1027.
[19] M. Wooldridge, N. Jennings, "Agent theories, architectures and languages: a survey", Lecture Notes in Artificial Intelligence 890, pp. 1-39.
[20] C. J. C. H. Watkins, "Automatic learning of efficient behaviour", First IEE International Conference on Artificial Neural Networks, No. 313, 1989, pp. 395-398.
[21] T. Yamaguchi, R. Marukawa, "Interactive Multiagent Reinforcement Learning with Motivation Rules", Proceedings of the 4th International Conference on Computational Intelligence and Multimedia Applications, 2001, pp. 128-132.
[22] J. Y. Kuo, M. L. Tsai, N. L. Hsueh, "Goal Evolution based on Adaptive Q-learning for Intelligent Agent", IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 2006.
[23] M. Yoshinaga, Y. Nakamura, E. Suzuki, "Mini-Car-Soccer as a Testbed for Granular Computing", IEEE International Conference on Granular Computing, Vol. 1, 2005, pp. 92-97.
[24] Y. Sato, T. Kanno, "Event-driven hybrid learning classifier systems for online soccer games", The 2005 IEEE Congress on Evolutionary Computation, Vol. 3, 2005, pp. 2091-2098.
[25] K. Wickramaratna, M. Chen, S. C. Chen, M. L. Shyu, "Neural network based framework for goal event detection in soccer videos", Seventh IEEE International Symposium on Multimedia, 2005.
[26] S. Hirano, S. Tsumoto, "Grouping of soccer game records by multiscale comparison technique and rough clustering", Fifth International Conference on Hybrid Intelligent Systems, 2005.
[27] D. Barrios-Aranibar, P. J. Alsina, "Recognizing behaviors patterns in a micro robot soccer game", Fifth International Conference on Hybrid Intelligent Systems, 2005.
[28] B. R. Liu, Y. Xie, Y. M. Yang, Y. M. Xia, Z. Z. Qiu, "A Self-Localization Method with Monocular Vision for Autonomous Soccer Robot", IEEE International Conference on Industrial Technology (ICIT 2005), 2005, pp. 888-892.
Number of pages: 44
Notes: (blank)
Full-text view count: (blank)
Record creation time: (blank)
File conversion date: (blank)
Full-text access log: (blank)
Change log: M admin 2008-07-03 23:18 (61.59.161.35)