計算機概論 (Introduction to Computer Science) 100701035 應數一 黃楚耘

Contents
• 11.4 Additional Areas of Research
• 11.5 Artificial Neural Networks

11.4 Additional Areas of Research
• Representing and manipulating knowledge
• Learning
• Genetic algorithms

Overview
• We explore issues of handling knowledge, learning, and dealing with complex problems, which continue to challenge researchers in the field of artificial intelligence. These activities involve capabilities that appear to be easy for human minds but apparently tax the capabilities of machines.
• For now, much of the progress in developing "intelligent" agents has been achieved essentially by avoiding direct confrontation with these issues – perhaps by applying clever shortcuts or by limiting the scope in which a problem arises.

Representing and Manipulating Knowledge
• Real-world knowledge: understanding an image requires a significant amount of knowledge about the items in the image, and the meaning of a sentence may depend on its context.
• Giving machines this capability is a major challenge in artificial intelligence.
• The task is complicated by the fact that knowledge occurs in both declarative and procedural forms.
• Procedural knowledge is acquired through a trial-and-error process by which an agent learns appropriate actions by being punished for poor actions and rewarded for good ones.
• Declarative knowledge takes the form of expanding or altering the "facts" in an agent's store of knowledge.
• It is questionable whether a single scheme can ultimately represent all forms of knowledge. The knowledge must also be readily accessible, and achieving this accessibility is a challenge.
• Semantic nets are used as a means of knowledge representation and storage, but extracting information from them can be problematic.
• Example 1: "Mary hit John." Extracting such information during contextual analysis could require a significant amount of searching through the net.
• Example 2: "Did Arthur win the race?" A bare answer is "No"; a better one is "No, he came down with the flu and was not able to compete."

Associative Memory
• A good answer contains information that is not only related but also relevant.
• Counterexample: "No, he was born in January and his sister's name is Lisa." (related, but not relevant)
• Meta-reasoning: inserting various forms of reasoning into the extraction process (reasoning about reasoning).
• Closed-world assumption: a statement is false unless it can be explicitly derived from the information available.
• Example: it is the closed-world assumption that allows a database to conclude that Nicole Smith does not subscribe to a particular magazine even though the database does not contain any information at all about Nicole.
• On the surface the closed-world assumption appears trivial, but it has consequences that demonstrate how apparently innocent meta-reasoning techniques can have subtle, undesirable effects.
• Example: "Mickey is a mouse OR Donald is a duck." Neither component statement can be derived on its own, so under the closed-world assumption each is judged false, even though their disjunction is known to be true.
• Frame problem: keeping stored knowledge up to date in a changing environment.
• If an intelligent agent is going to use its knowledge to determine its behavior, then that knowledge must be current.
• But the amount of knowledge required to support intelligent behavior can be enormous, and maintaining that knowledge in a changing environment can be a massive undertaking.

Learning
• We would like to give intelligent agents the ability to acquire new knowledge – to learn on their own.

Levels of learning
• Imitation
• Supervised training
• Reinforcement

Imitation
• A person directly demonstrates the steps in a task (perhaps by performing a sequence of computer operations, or by physically moving a robot through a series of motions), and the computer simply records the steps.
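The magazine example above can be sketched as a toy query function. The database contents and names here are hypothetical; the point is only how, under the closed-world assumption, absence of information is treated as falsehood.

```python
# Hypothetical mini-database: everything it "knows" about subscriptions.
subscribers = {"magazine_x": {"alice", "bob"}}

def subscribes(person, magazine):
    """Closed-world query: a fact is taken to be false unless it can be
    found in (derived from) the information available."""
    return person in subscribers.get(magazine, set())

# No data about Nicole at all, yet the query still answers "no":
print(subscribes("nicole_smith", "magazine_x"))  # → False
```

This is exactly how relational databases behave: a query for a missing row returns "no", not "unknown".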
• Examples: spreadsheets; word processors (e.g., LaTeX).
• Learning by imitation places little responsibility on the agent.
• A spreadsheet is a class of computer programs that simulates a paper worksheet. It displays a grid made up of rows and columns, and each cell can hold a number, a formula, or text. Spreadsheets are commonly used for financial information because they can frequently recalculate the entire sheet.
• VisiCalc was the first spreadsheet program, written for the Apple II. Lotus 1-2-3 was the dominant spreadsheet on IBM PCs in the DOS era, while Numbers and Excel are the principal spreadsheets on Mac OS X and Windows, respectively. Calc, part of OpenOffice and LibreOffice, runs on many platforms.

Supervised Training
• A person identifies the correct response for a series of examples, and the agent then generalizes from those examples to develop an algorithm that applies to new cases.
• Training set: the series of examples.
• Examples: learning to recognize a person's handwriting or voice, learning to distinguish between junk and welcome email, and learning how to identify a disease from a set of symptoms.

Reinforcement
• The agent is given a general rule by which to judge for itself when it has succeeded or failed at a task during trial and error.
• Examples: playing chess or checkers, where success or failure is easy to define.
• Reinforcement allows the agent to act autonomously as it learns to improve its behavior over time.
• Learning remains a challenging field of research, since no general, universal principle has been found that covers all possible learning activities.
• ALVINN (Autonomous Land Vehicle in a Neural Net) collected data from a human driver and used that data to adjust its own driving decisions.

Discovery
• Learning is "target based," whereas discovery is not: its results are unexpected.
• Developing agents with the ability to discover efficiently requires that the agent be able to identify potentially fruitful "trains of thought"; this relies on the ability to reason and on the use of heuristics.
• It also requires that an agent be able to distinguish meaningful results from insignificant ones.
• Examples: Ohm's law of electricity; Kepler's third law of planetary motion; conservation of momentum. AUTOCLASS used infrared spectral data to discover classes of stars previously unknown in astronomy.

Genetic Algorithms
• A solution can sometimes be discovered through an evolutionary process involving many generations of trial solutions.
• In essence, genetic algorithms discover a solution by random behavior combined with a simulation of reproductive theory and the evolutionary process of natural selection.
• A genetic algorithm begins by generating a random pool of trial solutions (e.g., random sequences of tile movements). Each solution is just a guess.
• Each trial solution is called a chromosome, and each component of a chromosome is called a gene.
• Since each initial chromosome is a random guess, it is very unlikely to represent a solution to the problem at hand. Thus the genetic algorithm proceeds to generate a new pool of chromosomes in which each chromosome is an offspring (child) of two chromosomes (parents) from the previous pool.
• The parents are randomly selected from the pool, giving a probabilistic preference to those chromosomes that appear to provide the best chance of leading to a solution, thereby emulating the evolutionary principle of survival of the fittest.
• Each offspring is a random combination of genes from its parents. In addition, a resulting offspring may occasionally be mutated in some random way. Hopefully, by repeating this process over and over, better and better trial solutions will evolve until a very good one, if not the best, is discovered.
• Unfortunately, there is no assurance that a genetic algorithm will ultimately find a solution, yet research has demonstrated that genetic algorithms can be effective in solving a surprisingly wide range of complex problems.
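The cycle just described (random pool → fitness-weighted parent selection → offspring with occasional mutation) can be sketched as follows. The target bit pattern and fitness function are toy assumptions standing in for a real problem such as the tile puzzle; each chromosome is a list of genes (bits).

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # toy goal: evolve this bit pattern
POOL_SIZE, GENES = 30, len(TARGET)

def fitness(chrom):
    """Number of genes matching the target (higher = closer to a solution)."""
    return sum(g == t for g, t in zip(chrom, TARGET))

def select_parent(pool):
    """Random selection with probabilistic preference for fitter chromosomes
    ('survival of the fittest'); +1 keeps every weight positive."""
    return random.choices(pool, weights=[fitness(c) + 1 for c in pool])[0]

def offspring(mom, dad, mutation_rate=0.05):
    """Random combination of the parents' genes, with occasional mutation."""
    child = [random.choice(pair) for pair in zip(mom, dad)]
    return [1 - g if random.random() < mutation_rate else g for g in child]

random.seed(0)                              # reproducible run
pool = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POOL_SIZE)]
for generation in range(200):
    best = max(pool, key=fitness)
    if best == TARGET:                      # a perfect trial solution evolved
        break
    pool = [offspring(select_parent(pool), select_parent(pool))
            for _ in range(POOL_SIZE)]
print(fitness(best))                        # best score found (max = GENES)
```

As the text warns, there is no guarantee of success: the loop is capped at 200 generations, and the best chromosome found so far is reported either way.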
• When applied to the task of program development, the genetic algorithm approach is known as evolutionary programming. Here the goal is to develop programs by allowing them to evolve rather than by explicitly writing them.
• Researchers have applied evolutionary programming techniques to the program-development process using functional programming languages.
• The approach has been to start with a collection of programs that contains a rich variety of functions. The functions in this starting collection form the "gene pool" from which future generations of programs will be constructed.
• One then allows the evolutionary process to run for many generations, hoping that by producing each generation from the best performers in the previous generation, a solution to the target problem will evolve.

11.5 Artificial Neural Networks
• Basic properties
• Training artificial neural networks
• Associative memory

Overview
• Sequences of instructions do not seem capable of perceiving and reasoning at levels comparable to those of the human mind; artificial neural networks offer an alternative model.

Basic Properties
• Artificial neural networks provide a computer processing model that mimics networks of neurons in living biological systems.
• A biological neuron receives signals through its dendrites, which pick up signals from the axons of other cells across small gaps known as synapses.
• Whether a particular input signal will have an exciting or inhibiting effect on the neuron is determined by the chemical composition of the synapse.
• It is believed that a biological neural network learns by adjusting these chemical connections between neurons.
• A neuron in an artificial neural network is a software unit that mimics this basic understanding of a biological neuron.
• It produces an output of 1 or 0, depending on whether its effective input exceeds a given value, called the neuron's threshold value.
• Effective input: a weighted sum of the actual inputs.

[Figure: a neuron computes its effective input, compares it with its threshold value, and produces an output of 0 or 1; each input has an associated weight (e.g., -2, 3, 1.5, -1).]

• If this sum exceeds the neuron's threshold value, the neuron produces an output of 1 (simulating the excited state of a nerve cell); otherwise it produces 0 (simulating the inhibited state).
• A weight can be positive or negative, meaning that the corresponding input can have either an inhibiting or an exciting effect on the receiving neuron.
• The actual size of the weight controls the degree to which the corresponding input is allowed to inhibit or excite the receiving neuron.

[Figure 11.18: a small network of input, hidden, and output neurons, showing weights and threshold values (e.g., 1.5, -2, 0.5, 0.35) and the resulting 0/1 signals in two configurations.]

• The network configuration in Figure 11.18 is far more simplistic than an actual biological network.

Training Artificial Neural Networks
• An important feature of artificial neural networks is that they are not programmed in the traditional sense but instead are trained: a programmer does not determine the values of the weights needed to solve a particular problem and then "plug" those values into the network.
• Instead, an artificial neural network learns the proper weight values via supervised training, a repetitive process in which inputs from the training set are applied to the network and the weights are adjusted by small increments until the network's performance approaches the desired behavior.
• It is interesting to note how genetic algorithm techniques have been applied to the task of training artificial neural networks.
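The behavior of a single neuron follows directly from this description. The inputs, weights, and threshold below are illustrative values, not those of the textbook's figure.

```python
def neuron_output(inputs, weights, threshold):
    """Fire (output 1) when the effective input – the weighted sum of the
    actual inputs – exceeds the neuron's threshold value; otherwise 0."""
    effective_input = sum(x * w for x, w in zip(inputs, weights))
    return 1 if effective_input > threshold else 0

# Illustrative weights and threshold (negative weight = inhibiting input):
print(neuron_output([1, 0, 1], [-2, 3, 1.5], threshold=1.0))  # -0.5 ≤ 1.0 → 0
print(neuron_output([1, 1, 1], [-2, 3, 1.5], threshold=1.0))  #  2.5 > 1.0 → 1
```

Note how activating the second input (weight 3) flips the neuron from inhibited to excited: the sign and size of each weight determine whether, and how strongly, that input inhibits or excites the neuron.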
• In particular, to train a neural network, a number of sets of weights for the network can be randomly generated, each of which will serve as a chromosome for the genetic algorithm.
• Then, in a step-by-step process, the network can be assigned the weights represented by each chromosome and tested over a variety of inputs.
• The chromosomes producing the fewest errors during this testing can then be given a greater probability of being selected as parents for the next generation. In numerous experiments this approach has ultimately led to a successful set of weights.
• Let us consider an example in which training an artificial neural network has been successful, and perhaps more productive than trying to provide a solution by means of traditional programming techniques. The problem is one that might be faced by a robot trying to understand its environment via the information it receives from its video camera.
• Suppose, for example, that the robot must distinguish between the walls of a room, which are white, and the floor, which is black.
• At first glance, this would appear to be an easy task: simply classify the white pixels as part of a wall and the black pixels as part of the floor.
• However, as the robot looks in different directions or moves around the room, various lighting conditions can cause the wall to appear gray in some cases, whereas in other cases the floor may appear gray.
• Thus the robot needs to learn to distinguish between walls and floor under a wide variety of lighting conditions.
• To accomplish this, we could build an artificial neural network whose inputs consist of values indicating the color characteristics of an individual pixel in the image as well as a value indicating the overall brightness of the entire image.
• We could then train the network by providing it with numerous examples of pixels representing parts of walls and floors under various lighting conditions.
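The supervised-training loop described above – apply training-set inputs, compare with the desired output, adjust the weights by small increments – can be sketched for the wall/floor problem. The training data is an assumed toy set, and the update rule used is the classic perceptron rule, which the text does not name; this is an illustration, not the textbook's method.

```python
def output(pixel_brightness, scene_brightness, weights):
    """Single-neuron classifier: 1 = wall, 0 = floor."""
    s = pixel_brightness * weights[0] + scene_brightness * weights[1] + weights[2]
    return 1 if s > 0 else 0

# Assumed toy training set: (pixel brightness, overall scene brightness) -> label.
# A gray pixel (≈0.5) is a wall in a dim scene but a floor in a bright one.
training_set = [
    ((0.9, 0.8), 1), ((0.6, 0.3), 1),   # walls
    ((0.1, 0.8), 0), ((0.5, 0.9), 0),   # floors
]

weights = [0.0, 0.0, 0.0]               # [pixel weight, scene weight, bias]
rate = 0.1                              # small increments, as in the text
for _ in range(100):                    # repeat until behavior matches the target
    for (px, scene), target in training_set:
        err = target - output(px, scene, weights)   # -1, 0, or +1
        weights[0] += rate * err * px
        weights[1] += rate * err * scene
        weights[2] += rate * err                    # bias adjustment

print([output(px, sc, weights) for (px, sc), _ in training_set])  # → [1, 1, 0, 0]
```

The learned negative weight on scene brightness captures the lighting compensation the text describes: the brighter the whole image, the brighter a pixel must be before it is classified as wall.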
• Beyond simple learning problems, artificial neural networks have been used to learn sophisticated intelligent behavior, as testified by the ALVINN project cited in the previous section.
• ALVINN's input came from a 30-by-32 array of sensors, each of which observed a particular portion of the image of the road ahead and reported its findings to each of four processing units (960 inputs in all).
• The output of each of these four units was connected to each of thirty output units, whose outputs indicated the direction to steer: excited units at one end of the thirty-unit row indicated a sharp turn to the left, while excited units at the other end indicated a sharp turn to the right.
• ALVINN was trained by watching a human driver: it made its own steering decisions, compared them with the human's, and made small adjustments to its weights to bring its decisions closer to those of the human.
• Although ALVINN learned to drive by this simple technique, it did not learn how to recover from mistakes. The data collected from the human was therefore artificially enriched to include recovery situations as well.

Associative Memory
• The human mind has the amazing ability to retrieve information that is associated with a current topic of consideration.
• Associative memory: the retrieval of information that is associated with, or related to, the information at hand.
• One approach to implementing associative memory is to apply techniques of artificial neural networks.
• Example: consider a network consisting of many neurons that are interconnected to form a web with no inputs or outputs.
• In such a system, the excited neurons will tend to excite other neurons, whereas the inhibited neurons will tend to inhibit others.
• In turn, the entire system may be in a constant state of change, or it may find its way to a stable configuration in which the excited neurons remain excited and the inhibited neurons remain inhibited.
• If we start the network in a nonstable configuration that is close to a stable one, we would expect it to wander to that stable configuration. In other words, when given part of a stable configuration, the network might be able to complete the configuration.
• Suppose that we represent an excited state by 1 and an inhibited state by 0, so that the condition of the entire network at any time can be envisioned as a configuration of 0s and 1s.
• Then, if we set the network to a bit pattern that is close to a stable pattern, we could expect the network to shift to the stable pattern.
• Thus, if some of the bits are used to encode smells and others are used to encode childhood memories, then initializing the smell bits according to a certain stable configuration could cause the remaining bits to find their way to the associated childhood memory.
• In the figure, each circle represents a processing unit, with its threshold value recorded inside the circle.
• The lines connecting the circles represent connections between the corresponding units. Each connection is bidirectional: a line connecting two circles means that the output of each unit is an input to the other.
• Thus the output of the center unit is an input to each of the surrounding units, and the output of each surrounding unit is an input to the center unit. Two connected units associate the same weight with each other's output; this shared weight is recorded next to the line connecting the units.
• The network operates in discrete steps, with all the processing units responding to their inputs in a synchronized manner. To determine the network's next configuration from its current one, we first determine the effective input of every unit in the network and then allow all the units to respond to their inputs at the same time.
• The entire network follows a coordinated sequence: compute the effective inputs, respond to the inputs, compute the effective inputs again, respond again, and so on.
• If we initialize the network with the two rightmost units inhibited and the other units excited (Figure 11.23a), the following sequence of events occurs: the two leftmost units have effective inputs of 1, so they remain excited, but their neighbors on the perimeter have effective inputs of 0 and therefore become inhibited.
• Likewise, the center unit's effective input is -4, so it remains inhibited. The entire network therefore shifts to the configuration in Figure 11.23b.
• Since the center unit is now inhibited, the excited states of the two leftmost units cause the top and bottom units to become excited again. Meanwhile, the center unit remains inhibited, since its effective input is now -2. The network therefore moves to the configuration in Figure 11.23c, which in turn leads to the configuration in Figure 11.23d.
• Stable configurations:
  – the center unit excited and all other units inhibited
  – the center unit inhibited and all other units excited
• This network represents a basic associative memory.
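The synchronized compute-and-respond cycle can be sketched in a few lines. The three-unit network below uses illustrative symmetric weights and thresholds, not the values from the textbook's figure; its stable configurations are all-excited and all-inhibited, and a partial pattern completes itself.

```python
def step(state, weights, thresholds):
    """One synchronized step: compute every unit's effective input from the
    CURRENT configuration, then let all units respond at the same time."""
    n = len(state)
    new_state = []
    for i in range(n):
        effective = sum(weights[i][j] * state[j] for j in range(n))
        new_state.append(1 if effective > thresholds[i] else 0)
    return new_state

def settle(state, weights, thresholds, max_steps=20):
    """Repeat the compute-and-respond cycle until the configuration stops
    changing (excited units stay excited, inhibited units stay inhibited)."""
    for _ in range(max_steps):
        nxt = step(state, weights, thresholds)
        if nxt == state:
            return state
        state = nxt
    return state

W = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]            # bidirectional links with shared (symmetric) weights
T = [0.5, 0.5, 0.5]        # threshold value of each unit

# Starting close to the stable all-excited pattern completes that pattern:
print(settle([1, 1, 0], W, T))   # → [1, 1, 1]
```

This is the associative-memory behavior described above: setting some units according to part of a stored pattern lets the remaining units find their way to the rest of it.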