Record No.: 6668
Status: NC094FJU00392004
TA check:
Call number:
University: 輔仁大學 (Fu Jen Catholic University)
Department: 資訊工程學系 (Department of Computer Science and Information Engineering)
Former department name:
Student ID: 493516046
Author (Chinese): 蔡明倫
Author (English): Ming Lan Tsai
Thesis title (Chinese): 基於適應性學習之目標演化於智慧型代理人
Thesis title (English): Goal Evolution based on Adaptive Q-learning for Intelligent Agent
Other title:
Advisors (Chinese): 許見章博士、郭忠義博士
Advisors (English): Chien-Chang Hsu, Ph.D.; Jong-Yih Kuo, Ph.D.
On-campus full-text release date:
Off-campus full-text release date:
Reason full text is not released:
Electronic full text submitted to the National Central Library:
National Central Library full-text release date:
File description: Electronic full text (電子全文)
Degree: Master's (碩士)
Graduation academic year: 94 (ROC calendar)
Year of publication:
Language: Chinese (中文)
Keywords (Chinese): 智慧型代理人、適應性Q學習、BDI模型
Keywords (English): Intelligent Agent; adaptive Q-learning; BDI model
Abstract (Chinese): This thesis proposes an adaptive learning method to achieve goal evolution for intelligent agents. When an agent is created, it already possesses some goals and only a few capabilities, and each capability may be composed of one or more actions. Through these capabilities the agent takes actions to satisfy its goals, striving to adapt to the limited capabilities it has. A reinforcement learning method is used to evolve the agent's goals, and an abstract agent programming language (3APL) is introduced to build the agent's mental state. We propose refining the original top-level goals with reinforcement learning, and a robot soccer game is used to illustrate the approach. Furthermore, we show how the goals evolved by reinforcement learning refine the soccer player's mental state.
Abstract (English): This paper presents an adaptive approach to address the goal evolution of intelligent agents. When agents are initially created, they have some goals and only a few capabilities. Each capability is composed of one or more actions. These capabilities can perform actions to satisfy the agents' goals, and the agents strive to adapt themselves to their limited capabilities. A reinforcement learning method is used for the evolution of agent goals. An Abstract Agent Programming Language (3APL) is introduced to build the agent's mental states. We propose reinforcement learning to refine the top-level goals. A robot soccer game is used to explain our approach. Moreover, we show how a refinement of the soccer player's mental state is derived from the goals evolved by reinforcement learning.
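The two abstracts only summarize the method. As a minimal, hedged sketch of the kind of mechanism being described (not the thesis's actual formulation), the Python fragment below applies tabular Q-learning with a visit-count-based, i.e. adaptive, learning rate to the selection of an agent's top-level goals; the goal names, state encoding, reward, and step-size rule are hypothetical placeholders, and the 3APL/BDI mental-state machinery is not reproduced here.

```python
import random
from collections import defaultdict

# Hypothetical top-level goals for a soccer-playing agent (illustrative only).
GOALS = ["score_goal", "defend_goal", "pass_ball"]
GAMMA = 0.9  # discount factor

q_table = defaultdict(float)  # Q(state, goal) -> estimated long-term value
visits = defaultdict(int)     # visit counts drive the adaptive step size

def choose_goal(state, epsilon=0.1):
    """Epsilon-greedy selection among the agent's top-level goals."""
    if random.random() < epsilon:
        return random.choice(GOALS)
    return max(GOALS, key=lambda g: q_table[(state, g)])

def update(state, goal, reward, next_state):
    """One Q-learning step; the learning rate decays with the visit count."""
    visits[(state, goal)] += 1
    alpha = 1.0 / visits[(state, goal)]  # simple adaptive learning rate
    best_next = max(q_table[(next_state, g)] for g in GOALS)
    q_table[(state, goal)] += alpha * (reward + GAMMA * best_next - q_table[(state, goal)])

# Example use with an ad hoc state encoding (any hashable value works):
# g = choose_goal(("has_ball", "midfield"))
# update(("has_ball", "midfield"), g, reward=1.0, next_state=("near_goal", "attack"))
```

In a BDI-style agent, the goal chosen this way would then be pursued through whatever plans or capabilities are attached to it, with the reward signal measuring how well the pursued goal suited the match situation.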
Table of contents:
Chapter 1 Introduction
  1.1 Motivation
  1.2 Objective
  1.3 Organization
Chapter 2 Related Work
  2.1 Robot Soccer Learning
  2.2 Robot Soccer Strategy
Chapter 3 Agent Evolution
  3.1 Agent Evolution Model
    3.1.1 Agent Domain Knowledge
    3.1.2 Agent Rule Bases
  3.2 Agent Evolution Process
Chapter 4 Case Study
  4.1 System Design
    4.1.1 System Environment
      4.1.1.1 Hardware Specification and Configuration
      4.1.1.2 Software Specification and Configuration
      4.1.1.3 Operational Environment
  4.2 Case Study
Chapter 5 Experiment Results
  5.1 Experiment
  5.2 Discussion
Chapter 6 Conclusion
References:
[1] A. Bonarini, "Evolutionary learning, reinforcement learning, and fuzzy rules for knowledge acquisition in agent-based systems", Proceedings of the IEEE, Vol. 89, Issue 9, pp. 1334-1346, 2001.
[2] B. van Riemsdijk, M. Dastani, F. Dignum, J.-J. Ch. Meyer, "Dynamics of Declarative Goals in Agent Programming", Proceedings of Declarative Agent Languages and Technologies (DALT), New York, 2004.
[3] C. Castillo, M. Lurgi, I. Martinez, "Chimps: an evolutionary reinforcement learning approach for soccer agents", IEEE International Conference on Systems, Man and Cybernetics, Vol. 1, pp. 60-65, 2003.
[4] E. Alonso, M. D'Inverno, D. Kudenko, M. Luck, J. Noble, "Learning in Multi-Agent Systems", The Knowledge Engineering Review, Vol. 16, No. 3, pp. 277-284, 2001.
[5] M. Dastani, F. Dignum, J.-J. Meyer, "Autonomy and Agent Deliberation", Proceedings of the First International Workshop on Computational Autonomy: Potential, Risks, Solutions, Melbourne, 2003.
[6] M. Dastani, B. van Riemsdijk, F. Dignum, J.-J. Meyer, "A Programming Language for Cognitive Agents: Goal Directed 3APL", Proceedings of the First Workshop on Programming Multiagent Systems: Languages, Frameworks, Techniques, and Tools, Melbourne, 2003.
[7] M. Dastani and L. van der Torre, "Programming BOID Agents: a deliberation language for conflicts between mental attitudes and plans", in N. R. Jennings, C. Sierra, L. Sonenberg, M. Tambe (eds.), Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS'04), ACM, pp. 706-713, 2004.
[8] M. Dastani, J. Hulstijn, F. Dignum, J.-J. Ch. Meyer, "Issues in Multiagent System Development", in N. R. Jennings, C. Sierra, L. Sonenberg, M. Tambe (eds.), Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS'04), ACM, pp. 922-929, 2004.
[9] M. D'Inverno, K. Hindriks, M. Luck, "A Formal Architecture for the 3APL Agent Programming Language", in ZB2000, Lecture Notes in Computer Science, Springer, pp. 168-187, 2000.
[10] E. Gelenbe, E. Seref, Z. Xu, "Simulation with learning agents", Proceedings of the IEEE, Vol. 89, Issue 2, pp. 148-157, 2001.
[11] J. Hulstijn, F. de Boer, M. Dastani, F. Dignum, M. Kroese, J.-J. Meyer, "Agent-based Programming in 3APL", presented at the ICS Research Day, Conferentiecentrum Woudschoten, The Netherlands, 2003.
[12] K. S. Hwang, S. W. Tan, C. C. Chen, "Cooperative strategy based on adaptive Q-learning for robot soccer systems", IEEE Transactions on Fuzzy Systems, Vol. 12, Issue 4, pp. 569-576, 2004.
[13] S. Kinoshita, Y. Yamamoto, "Team 11monkeys Description", in Coradeschi et al. (eds.), RoboCup-99: Team Descriptions, pp. 154-156, 1999.
[14] J. Y. Kuo, "A document-driven agent-based approach for business processes management", Information and Software Technology, Vol. 46, pp. 373-382, 2004.
[15] J. Y. Kuo, S. J. Lee, C. L. Wu, N. L. Hsueh, J. Lee, "Evolutionary Agents for Intelligent Transport Systems", International Journal of Fuzzy Systems, Vol. 7, No. 2, pp. 85-93, 2005.
[16] Y. Maeda, "Modified Q-learning method with fuzzy state division and adaptive rewards", Proceedings of the IEEE World Congress on Computational Intelligence (FUZZ-IEEE 2002), Vol. 2, pp. 1556-1561.
[17] T. Nakashima, M. Takatani, M. Udo, H. Ishibuchi, "An evolutionary approach for strategy learning in RoboCup soccer systems", IEEE International Conference on Systems, Man and Cybernetics, Vol. 2, pp. 2023-2028, 2004.
[18] S. Shen, G. M. P. O'Hare, R. Collier, "Decision-making of BDI agents, a fuzzy approach", The Fourth International Conference on Computer and Information Technology, pp. 1022-1027, 2004.
[19] M. Wooldridge, N. Jennings, "Agent theories, architectures and languages: a survey", Lecture Notes in Artificial Intelligence 890, pp. 1-39.
[20] C. J. C. H. Watkins, "Automatic learning of efficient behaviour", First IEE International Conference on Artificial Neural Networks, No. 313, pp. 395-398, 1989.
[21] T. Yamaguchi, R. Marukawa, "Interactive Multiagent Reinforcement Learning with Motivation Rules", Proceedings of the 4th International Conference on Computational Intelligence and Multimedia Applications, pp. 128-132, 2001.
[22] J. Y. Kuo, M. L. Tsai, and N. L. Hsueh, "Goal Evolution based on Adaptive Q-learning for Intelligent Agent", IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 2006.
[23] M. Yoshinaga, Y. Nakamura, E. Suzuki, "Mini-Car-Soccer as a Testbed for Granular Computing", IEEE International Conference on Granular Computing, Vol. 1, pp. 92-97, 2005.
[24] Y. Sato, T. Kanno, "Event-driven hybrid learning classifier systems for online soccer games", The 2005 IEEE Congress on Evolutionary Computation, Vol. 3, pp. 2091-2098, 2005.
[25] K. Wickramaratna, M. Chen, S. C. Chen, M. L. Shyu, "Neural network based framework for goal event detection in soccer videos", Seventh IEEE International Symposium on Multimedia, 2005.
[26] S. Hirano, S. Tsumoto, "Grouping of soccer game records by multiscale comparison technique and rough clustering", Fifth International Conference on Hybrid Intelligent Systems, 2005.
[27] D. Barrios-Aranibar, P. J. Alsina, "Recognizing behaviors patterns in a micro robot soccer game", Fifth International Conference on Hybrid Intelligent Systems, 2005.
[28] B. R. Liu, Y. Xie, Y. M. Yang, Y. M. Xia, Z. Z. Qiu, "A Self-Localization Method with Monocular Vision for Autonomous Soccer Robot", IEEE International Conference on Industrial Technology (ICIT 2005), pp. 888-892, 2005.
Number of pages: 44
Notes:
Full-text view count:
Record creation time:
File conversion date:
Full-text file access log:
Modification log: M admin Y2008.M7.D3 23:18 61.59.161.35