Works Cited
“Artificial intelligence.” World of Computer Science. N.p.: n.p., 2007. N. pag. Gale Science in
Context. Web. 2 Oct. 2013.
The article starts off with the definition of artificial intelligence (AI) and a brief history of
AI development, which includes an explanation of the Turing Test. It then describes modern
branches of AI. Some branches are concerned with reasoning and study rule-based/expert
systems and data structures to develop intelligence. Others attempt to mimic nature, i.e. neural
networks and genetic algorithms. AI draws on many disciplines, and knowledge from them is
essential to advancing the field’s sporadic progress.
This wasn’t very helpful. Sure, it gave plenty of background information I never knew,
but it was more suited to those who wanted a broad overview of the field. Although each small
topic had a decent amount of specific info on it, it didn’t mention distributed AI (swarming). It
might come in handy if I need a reference on different AI techniques.
Berdahl, Andrew. “Science paper: Emergent sensing of complex environments by mobile animal
groups.” Collective Animal Behavior: CouzinLab@PrincetonUniversity. Couzin Lab,
2013. Web. 29 Oct. 2013. <http://icouzin.princeton.edu/newer-science-paper-emergent-sensing-of-complex-environments-by-mobile-animal-groups/>.
Aside from the brief text on the page, there was a lovely video of emergent behavior in
fish schools. It gave a nice visual and change of pace from all of the stuff I’d been reading. The
fish follow two simple rules: slow down in dark areas, and swim toward other fish. In a large
school, it enables the fish to seek out dark patches among light fairly quickly.
It was mostly review for me, but I hadn’t heard of a study on fish before. The two simple
rules were classic and a nice addition to my mental catalog. Nothing new, though.
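The two rules translate almost directly into an agent-based sketch. Everything numeric below (the 1-D light field, speeds, noise) is my own assumption, not from the paper; it only demonstrates that the rules alone pull a school toward a dark patch.

```python
import random

# Assumed light field: a dark patch centered at x = 0 on a 1-D line.
# darkness() is 1 in the darkest spot and 0 in full light.
def darkness(x):
    return max(0.0, 1.0 - abs(x) / 50.0)

def step(school, speed=2.0):
    updated = []
    for i, x in enumerate(school):
        others = [p for j, p in enumerate(school) if j != i]
        center = sum(others) / len(others)
        toward = 1.0 if center > x else -1.0   # rule 2: swim toward other fish
        v = speed * (1.0 - darkness(x))        # rule 1: slow down in the dark
        updated.append(x + toward * v + random.uniform(-0.5, 0.5))
    return updated

random.seed(1)
school = [random.uniform(-100.0, 100.0) for _ in range(30)]
for _ in range(200):
    school = step(school)
# Fish that wander into the dark patch nearly stop, anchoring the school there.
```

No fish knows where the dark patch is; the bias comes purely from spending more time wherever movement is slowest.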
Bell, Donald. “UML basics: An introduction to the Unified Modeling Language.”
developerWorks. IBM developerWorks, n.d. Web. 23 Mar. 2014.
- - -. “UML basics: The class diagram.” developerWorks. IBM developerWorks, n.d. Web. 23
Mar. 2014.
- - -. “UML basics: The sequence diagram.” developerWorks. IBM developerWorks, n.d. Web.
23 Mar. 2014.
Bonabeau, Eric, et al. “Swarm intelligence theory: A snapshot of the state of the art.” Theoretical
Computer Science 411 (2010): 2081-83. Print.
A quick overview of what’s going on in that particular journal, this piece told me about some of
the cooler new developments, including advances in the study of PSO, ACO, and the flocking of
birds. One paper even presented a new way to make an algorithm more efficient by modifying its
parameters as it runs.
Not useful for me in particular, but it does provide a fascinating snapshot of current work.
If I need to look up a particular article in this issue of the journal, it will be great, but otherwise,
not really.
“Chapter 1: Propositional Logic.” Department of Mathematics. California Institute of
Technology, 2010. Web. 29 Oct. 2013. <http://www.math.caltech.edu/~201011/3term/ma006c/10ma6cnotes1.pdf>.
Defines propositional logic terms and situations involving it (examples). Includes: and,
or, not, implies, and causal logic operations, as well as instructions on how to write syntactically
correct statements (well formed formulas).
This is great. A lot of the papers I read deal with set theory, math, and logic – i.e.
propositional logic. One needs to understand these first before understanding anything else in
these papers.
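Since the notes boil down to a handful of connectives plus rules for well-formed formulas, the definitions can be checked mechanically. A small sketch (the example formulas are my own, not from the notes):

```python
import itertools

# The implication connective from the notes: p -> q is (not p) or q.
def implies(p, q):
    return (not p) or q

# Check whether a well-formed formula is a tautology by brute-forcing
# its entire truth table.
def is_tautology(formula, num_vars):
    return all(formula(*row)
               for row in itertools.product([True, False], repeat=num_vars))

# (p and q) -> (p or q) holds in every row; p -> (p and q) does not.
print(is_tautology(lambda p, q: implies(p and q, p or q), 2))   # True
print(is_tautology(lambda p, q: implies(p, p and q), 2))        # False
```

The brute-force table is exactly what the chapter’s worked examples do by hand.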
“Chapter Two: Set Theory.” St. Lawrence University. St. Lawrence University. n.d. Web. 23
Mar. 2014. <http://myslu.stlawu.edu/~svanderv/chaptwo.pdf>.
“Civil Space.” Johns Hopkins Applied Physics Laboratory. Johns Hopkins University Applied
Physics Laboratory, 2013. Web. 1 Dec. 2013. <http://www.jhuapl.edu/ourwork/civilspace/default.asp>.
It tells me that we essentially work for NASA, and that we focus on “space physics and
planetary science.” It also tells me that we work on end-to-end solutions and practical
applications of new knowledge.
Not very useful. Although it got the part about applying knowledge correct, the project
I’m working on pertains more to the navy than to space or even the space department. Even then,
I knew most of its message already, albeit not in such precise language.
Collective Animal Behavior: CouzinLab@PrincetonUniversity. Couzin Lab, 2013. Web. 1 Dec.
2013. <http://icouzin.princeton.edu/>.
The website is a place where the lab constantly updates the exciting, cool new things that
they do – their research into swarming in animals – in plain (science) English. It’s essentially
swarming in biology – from cells to animals to humans.
It’s pretty cool and interesting – but it has absolutely nothing to do with diagnosing a
navy ship. It’s just a fascinating light read.
Collins, Kristine. Personal interview. N.d.
Ms. Collins has more recently come out of college than her other colleagues, so she
knows more accurately what’s needed in computer science, as well as which programming
languages are the ones to be studying now. In addition, she knows what the work experience is
like, especially what it’s like for a person just coming in from college, i.e. the level of
independence needed.
Ms. Collins is a great resource, especially if one wants to know what more recent trends
in the job are like. She is also very helpful if one needs help programming, as she often programs
in C-like languages and Python.
Currie, Justin. MIT Godel Escher Bach Lecture 1. YouTube. N.p., n.d. Web. 2 Oct. 2013.
<http://www.youtube.com/watch?v=IWZ2Bz0tS-s>.
My first impression of this was that the teacher was a student; my second impression
was that the “math” in it was much more philosophical than hard logic. It was certainly logical,
but it was all about very abstract logic. It went over mainly four things: isomorphism, paradoxes,
recursion, and formal systems, with an emphasis on the latter three. He went over four types of
paradoxes: veridical, falsidical, Zeno’s, and antinomies. Recursion was briefly discussed through
fractals, and how Sierpinski’s triangle actually exists in an irrational number of dimensions...
The formal systems section questioned the nature of mathematical syntax, especially logic rules
and number representation. During the entire lecture, Mr. Currie discussed the nature of when
simple logic rules, formal systems, become something beyond that, such as the nature of the
universe, intelligent life, and even cultural customs…
Helpful if you’re trying to understand the fundamental question of the field of AI. That is
to say, it’s not very helpful if you’re putting the finishing touches on an algorithm, but certainly
very helpful if you’re starting to make an AI and need to see the big picture before building it.
For my purposes, this isn’t very helpful because I often think about philosophy and am familiar
with most of these concepts.
- - -. MIT Godel Escher Bach Lecture 2. YouTube. N.p., n.d. Web. 2 Oct. 2013.
<http://www.youtube.com/watch?v=HqmUuHnvJ98>.
This did nothing but go over examples of fractals, their occurrences in nature, and their
applications today. The lecturer spent too much time drawing the fractals and expounding on
interesting facts about them, such as fractal ferns. He wasn’t very good at explaining them either
(he was a substitute teacher).
This wasn’t useful at all. Sure, I now know that coastlines and mountains are nothing
more than randomized Koch curves in two and three dimensions respectively, and I am familiar
with the Mandelbrot set, but what does that have to do with making AI? It doesn’t.
- - -. MIT Godel Escher Bach Lecture 3. YouTube. N.p., n.d. Web. 2 Oct. 2013.
<http://www.youtube.com/watch?v=86AHslduncM>.
Describes formal systems more in detail, and explores what happens when we have
problems with formal systems, such as un-provable truths and paradoxes. For example, when we
question the un-provable truth, “parallel lines never intersect”, we get non-Euclidean geometry,
which is extremely useful. Paradoxes are useful if we are to make an intelligent AI. It has to do
with how paradoxes are created when a propositional statement can reference itself, such as,
“This statement is false.” Compare this with humans, who can talk in reference to themselves all
the time… It also briefly discusses different types of infinities.
If one is trying to make any sort of logical system or even build an AI which can talk to
you, this is great in the sense that it gives you several caveats about building them. In a rule-based
system, you can’t have objects/statements reference themselves, and if you’re building a program
that can talk, (i.e. to pass the Turing test) you must take care that the program can handle
referencing itself.
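The caveat can be made concrete with a toy evaluator (entirely my own construction): a naive rule system recurses forever on “This statement is false,” so the evaluator has to notice when a statement’s truth value depends on itself.

```python
# Toy rule base: a statement is either a plain truth value or the negation
# of another statement. "liar" encodes "This statement is false."
statements = {
    "liar": ("NOT", "liar"),
    "fact": True,
}

def evaluate(name, in_progress=None):
    in_progress = in_progress if in_progress is not None else set()
    if name in in_progress:
        return "paradox"               # the statement references itself
    in_progress.add(name)
    rule = statements[name]
    if isinstance(rule, bool):
        return rule
    _op, target = rule                 # only "NOT" rules in this sketch
    inner = evaluate(target, in_progress)
    return "paradox" if inner == "paradox" else (not inner)

print(evaluate("fact"))   # True
print(evaluate("liar"))   # paradox
```

Without the `in_progress` guard, evaluating `"liar"` would recurse until the stack overflows – exactly the failure mode the lecture warns about.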
Feldman, Alexander, Gregory Provan, and Arjan van Gemund. “A Model-Based Active Testing
Approach to Sequential Diagnosis.” Journal of Artificial Intelligence Research 39
(2010): 301-34. Print.
Exactly what it says on the cover. It details sequential diagnosis in excruciating detail
and in very big words. It shows a way to reduce the cardinality of the set of possible test cases (hypotheses),
and explores different methods to do so.
This is extremely useful. Mr. Scheidt explicitly states that we will eventually go back to
this later on in the year and that it is essential to have at least a basic understanding of this.
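The core move of shrinking the hypothesis set can be sketched with a toy (the paper’s actual algorithms are far more involved; the components and the probe function below are my simplification): each test observes one component, and every hypothesis inconsistent with the outcome is discarded.

```python
import itertools

# Each hypothesis assigns healthy (True) or faulty (False) to each component.
components = ["A", "B", "C"]
hypotheses = [dict(zip(components, states))
              for states in itertools.product([True, False], repeat=3)]

actual = {"A": True, "B": False, "C": True}   # hidden true state of the system

def run_test(component):
    return actual[component]                  # probing reveals the true health

# Sequential diagnosis: each observation halves the hypothesis set here.
for c in components:
    outcome = run_test(c)
    hypotheses = [h for h in hypotheses if h[c] == outcome]

print(len(hypotheses))   # 1: only the true state survives
```

The paper’s contribution is choosing *which* test to run next so the set shrinks as fast as possible; this sketch just probes everything in order.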
Johns Hopkins Applied Physics Laboratory. Johns Hopkins U Applied Physics Laboratory, 2013.
Web. 29 Oct. 2013. <http://www.jhuapl.edu/>.
The JHUAPL website gives general info on the history, the people, and the topics they
research. It often has quick biographies of the employees and the various programs they have,
such as cybersecurity.
Not particularly useful for me. The level of info on here is too general and not technical
enough to be of any use in my project.
Johnson, Steven. Emergence: The Connected Lives of Ants, Brains, Cities, and Software. New
York: Simon, 2002. Print.
The book gives an in depth explanation of what it means to have emergent intelligence (a
complex adaptive system with emergent behavior), with plenty of concrete examples from
biology, sociology, and computer science.
“Here Comes Everybody!” (Introduction) poses the problem: how difficult it is to think in terms
of the collective. Take slime mold. When put in a maze with food, this simple organism will plot
the most efficient (shortest) way to all the food without a brain or any nervous system. Even
more interesting is how it (they?) knows to aggregate. Normally, slime mold exists as individual
protists, but when food is scarce, they will aggregate into a visible blob, a swarm. The question is
how? Scientists had known that they secrete acrasin (cyclic AMP) and it was somehow involved
in aggregation. For decades they had thought that pacemakers, “elite” cells, began production of
AMP and then others started producing it in response to the first. AMP would wash over the
protists in pulses; then they aggregated. With some applied math, it becomes apparent that there
are no pacemakers. The protists secrete a trail of AMP of varying length based on how scarce
food is. The scarcer the food, the longer the trail, and the more likely other protists are to find it.
If a protist encounters enough trails, it will begin to cluster with others, and soon slime mold
forms. However, the biologists didn’t understand the math behind it; they kept on looking for a
pacemaker until a set of experiments convincingly showed the absence of one.
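The pacemaker-free story can be reproduced with a toy simulation (the grid size, trail length, and sensing range are all my own guesses): every protist deposits cAMP identically and moves toward whichever side holds more of it. No cell is special, yet clusters still form.

```python
import random

SIZE = 100                      # ring of positions

def step(cells, trail=3, sense=10):
    # Every cell deposits cAMP along a short trail around itself.
    camp = [0.0] * SIZE
    for c in cells:
        for d in range(-trail, trail + 1):
            camp[(c + d) % SIZE] += 1.0
    # Every cell moves one step toward the side with more cAMP.
    moved = []
    for c in cells:
        left = sum(camp[(c - d) % SIZE] for d in range(1, sense + 1))
        right = sum(camp[(c + d) % SIZE] for d in range(1, sense + 1))
        if right > left:
            c = (c + 1) % SIZE
        elif left > right:
            c = (c - 1) % SIZE
        moved.append(c)
    return moved

random.seed(0)
cells = [random.randrange(SIZE) for _ in range(40)]
for _ in range(300):
    cells = step(cells)
# The 40 protists end up piled onto far fewer distinct positions.
```

There is no pacemaker anywhere in the code, which is exactly the point the experiments eventually established.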
“The Myth of the Ant Queen” (Chapter 1) gives an overview of what emergent behavior is and
how it emerges. It begins with ants. People may assume that the ant queen directs all activity
in the colony, when in fact she does nothing more than eat and reproduce for the colony. Each
ant is exceedingly stupid and only acts as instinct dictates, yet somehow as a whole, they are able
to do complex tasks. For example, they are able to pick out a site for a cemetery and a midden
(garbage dump), as well as maximize the distance of the midden from the cemetery and the
colony. Somehow they are able to solve a geometry puzzle some humans may not be able to
solve, without anyone directing them to; each ant simply does its own thing. There is no need for a
centralized intelligence to do an intelligent task; a swarm can do it without anything directing it
to. The same happens with humans and computers. In the early industrial city of Manchester,
people built houses wherever they pleased. Yet somehow, they organized
themselves into such rigidly defined districts that a rich person could walk the whole of his area
without encountering any poor. In computers, this takes the form of genetic algorithms.
“Street Level” (Chapter 2) is about exactly that – the “street level”, or more accurately, local
interaction. The behavior of ants is determined by what each individual ant experiences on a
local level, but it creates global behavior. For example, an ant stops foraging if she encounters
too many foragers, thus creating a rough upper limit on the number of foragers. In addition, over
the course of the colony’s 15-year life span, the colony learns behaviors, although the ants only
live a year at most. Younger colonies are more fickle (respond to famine differently every time)
and territorial, while older ones are more stable (respond to famine the same, as well as mutually
agree to boundary lines). There are five rules to keep in mind to make a swarm work. 1) More is
different. The behavior (and how we perceive it) is different if there are more ants. Ten ants
can’t determine how many foragers are needed, but 2,000 can. Similarly, we can’t figure out
what an ant is doing if we study one individually – we must study it as part of the whole
colony. 2) Ignorance is useful. It keeps them doing their jobs; the stupidity of the ants is a
feature. You wouldn’t want an ant suddenly sprouting sentience in the same way you don’t want
one of your neurons to sprout it. 3) Encourage random encounters. Decentralization relies on
lots of random encounters. There are so many encounters that an individual can learn about
the macrostate of the system (statistics makes sure of this). 4) Look for patterns in the signs.
Ants don’t need an extensive vocabulary, but they do need to be aware of gradients in pheromones
– in which direction the pheromone for “food this way!” grows stronger. 5) Pay attention to your
neighbors, or “Local information leads to global wisdom.” Without the ants paying attention to
what pheromones are being secreted by other ants, the ability of the swarm to regulate itself is
severely hampered. Human cells work this way too. They know what parts of DNA to read based
on what the cells around them are reading.
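Rules 1 and 3 together can be sketched numerically (the encounter counts, thresholds, and reconsideration rate are invented): each ant decides to forage or stay home based only on how many foragers it happens to bump into, and the colony settles on a stable forager fraction that no single ant computes.

```python
import random

random.seed(42)
N = 2000                                   # rule 1: more is different

def step(foragers, meetings=5, threshold=3, reconsider=0.1):
    frac = sum(foragers) / len(foragers)
    out = []
    for f in foragers:
        if random.random() >= reconsider:
            out.append(f)                  # most ants just keep doing their job
            continue
        # Rule 3: a few random encounters; each nest-mate met is a forager
        # with probability equal to the current forager fraction.
        met = sum(random.random() < frac for _ in range(meetings))
        if f and met >= threshold:
            out.append(False)              # too crowded out there: stay home
        elif not f and met == 0:
            out.append(True)               # met no foragers at all: go forage
        else:
            out.append(f)
    return out

foragers = [random.random() < 0.9 for _ in range(N)]   # start with far too many
for _ in range(50):
    foragers = step(foragers)
# The forager fraction self-limits to a rough set point well below 0.9.
```

Ten ants running the same rule would fluctuate wildly; with 2,000, the encounter statistics give each ant a reliable estimate of the colony’s macrostate.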
“The Pattern Match” (Chapter 3) is all about emergent behavior characteristics. They are patterns
in time and space, and are not always contingent on consciousness. For example, a city is a form
of emergent behavior. People organize themselves into districts (silk weavers in Florence have
stayed in the same spot for centuries), and as a group, they can function as information storage and
retrieval (a city quickly makes use of powerful technologies, effectively storing information
about it). Emergent behavior emerges when the energy flowing through a system increases – e.g.
an increase in temperature yields a field of buttercups. In the same way, cities spontaneously emerge when
energy (food supply) increases. Currently, the Internet isn’t exactly a pattern – a connection
between two places. Consumers can go to websites, but there’s no way for the websites to know
which individuals came. However, there’s a way to connect them. By tracking a consumer’s
history, one can make associations between the websites and the consumer, effectively tracking
his interests. This mutual feedback is essential to emergent systems.
“Listening to Feedback” (Chapter 4) is about positive and negative feedback. Positive feedback
tends to be inimical to emergent systems, giving too much power to a particular group instead of
the people. For example, when the media begins talking about Gennifer Flowers, soon all of it is
talking about her. A single media story is popular enough that other stations want to broadcast it,
and soon, you will lose viewers if you don’t broadcast the story. A single story propagates itself
again and again, multiplying its power, which in turn multiplies its power more. Negative
feedback, in contrast, allows things to reach a balance - homeostasis. A shift in one direction
results in a force in the opposite direction. For example, if a website contributor decides to spam,
he may be voted out of his position, thereby resulting in less spam. The key to positive and
negative feedback is that both sides have to be able to give it, and there has to be the right amount of it.
For example, both the media and public must be able to directly influence each other – i.e. the
public votes on which media is worth watching, thereby boosting “good” media and relegating
“bad” media. There can’t be too much or too little feedback either. Small communities are
diverse and dynamic; any larger than 5,000 or so people and its growth becomes like cancer –
unwanted and out of control. On the net, the right amount of feedback keeps boards lively and meaningful. However, if
there’s too little feedback from the masses (lurkers), then this gives power to cranks, critics of
the posting majority – they assume the lurkers are on their side. If there’s too much feedback,
other things may happen – for example, the clearly defined leaders, pranksters, respectful
minority in an online community disappear into the giant flood of people.
“Control Artist” (Chapter 5) describes the nature of trying to control emergent systems. The
controller is not a creationist, i.e. a programmer who builds his program for a specific function and knows it will
not do anything more. Controlling emergent systems is more like Darwinism – you set the initial
conditions, and see what rises out of the primordial code. It’s very hard to predict what will
happen. For example, just by looking at a simulation’s code (StarLogo) for an individual slime
mold cell, you wouldn’t be able to tell that the cells become more likely to aggregate into clusters as
the population density increases. It just happens. Trying to optimize solutions from this also takes
another Darwinistic approach: predators. Once a relatively optimal solution is reached, the program
has no incentive to search for a (possibly) better solution, so you must introduce predator
programs that lower the success rate (directly) if the program settles on a relative optimum. It’s also
important that you don’t make your system too open-ended – otherwise it becomes too hard to
program it. In gaming, it’s important to make sure your system doesn’t find the optimal solution
– i.e. chess – so that a supercomputer like Deep Blue doesn’t take the fun out of it by playing for
you.
“The Mind Readers” (Chapter 6) explains self-awareness. It argues that self-awareness isn’t the
result of increased intelligence; rather, it is the result of social awareness of others. In complex
social hierarchies (human or animal), one must be able to predict what another of one’s kind is
thinking and why. This puts evolutionary pressure on building theories of other minds. Such
pressure eventually leads to the ability to figure out not only what others are thinking, but also
what oneself is thinking. The chapter also presents a possible future – one where programs figure out what TV you
like via group associations, such as the envisioned Internet in chapter 3. The ads in this future
will also present themselves to you based on your likes, making them much more useful to the
individual and the seller.
“See What Happens” (Chapter 7) speculates more about the future and states that there are
multiple levels of organization within self-organized complexity. Atoms form chemicals which
form cells which form humans which form societies which form the world…
A fascinating read, Emergence is absolutely brilliant. It’s almost like a textbook for
emergence, without any of the droll reading. Although a bit long and almost philosophical (the
examples balance this out), it teaches a basic understanding of emergent behavior without any of
the technicalities of building one. Useful in so many fields, it’s a great starting point for any medium-level beginner in A.I.
“Knowledge representation.” World of Computer Science. N.p.: n.p., 2007. N. pag. Gale Science
in Context. Web. 2 Oct. 2013.
Knowledge representation (KR) is usually considered to be a problem in AI, but its
methods can also be used in philosophy, linguistics, psychology, and mathematics. Many AI systems today
use very specific techniques for KR, such as offering possible diagnoses of patient symptoms.
On the other hand, humans can generalize their understanding from many sources of knowledge
and apply it to many different areas. This is the core problem of KR – to replicate the human way
to store knowledge. Solutions to the problem are partly inspired by the computational theory of
the mind – that the human brain does nothing more than manipulate symbols in a logical and
syntactic way. The question is how. Short of modeling the brain, other attempted solutions have
included using propositional logic, predicate logic, ideas from fuzzy logic or Bayesian inference,
and concepts from object oriented programming. Another problem in the field is the Frame
Problem. The question is essentially, how does the computer recognize cause and effect?
Humans recognize flipping a switch turns on a light, but a computer doesn’t.
More useful than other sources, it still covers mostly background info. It articulates the
core problem of KR very well, but it offers no real insight into how to solve it. The
problem is much more tailored to AI, and beginner AI creators will find the problems and concepts
posed here useful but not always necessary to know.
"Metrics and Test Plan for Experimentation of Control Algorithms on the NSWC Tabletop and
JHU/APL Tabletop Simulator (Short Version)." 15 June 2009. TS.
Lists and describes metrics – measurable quantities – of the operability of each of the
simulated Tabletop’s components. For example, for the water-cooled radars to be on, the radars
must be below 125 degrees Fahrenheit and must have a power supply between 49 and 52 watts.
It also uses many equations involving summations and integrals to create a metric for things that
are not easily quantifiable, such as total mission operability and survivability.
Useful reference for when I need info on how to determine the theoretical operability of
the ship. It has equations and examples of ship operability and load models, which would help
me greatly when setting up paper diagrams of how the program is to run.
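The memo’s example metric is easy to prototype. The radar thresholds below come straight from the memo’s example; the roll-up into a mission score is my own stand-in for its weighted sums and integrals, and the component names are hypothetical.

```python
# Component metric from the memo: a water-cooled radar is operable only if
# it is below 125 degrees Fahrenheit and its power supply is between
# 49 and 52 watts.
def radar_operable(temp_f, power_w):
    return temp_f < 125 and 49 <= power_w <= 52

# Hypothetical mission-level roll-up: the fraction of components operable.
# (The real memo uses weighted sums and integrals instead.)
def mission_operability(component_states):
    return sum(component_states.values()) / len(component_states)

states = {
    "radar": radar_operable(temp_f=110, power_w=50),
    "chiller": True,
    "generator": False,
}
print(mission_operability(states))   # two of three components operable
```

Even this crude version is enough to paper-test a diagnosis algorithm: feed it component states, read off a single operability number.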
Pekala, M., N. Rolander, and D. Scheidt. "Tabletop Simulation Interface Specification." 2 July
2009. TS.
It’s an overview of the Tabletop GUI – which program receives what kind of data from
where. For example, the simulation will output arguments of type int32 and double. It also gives
brief examples of what each argument means, i.e. the simulation time is represented with a
double. At the end, it gives the mission objective using a propositional statement.
It’s good for when I’m past the theoretical stage of algorithm development and I’m
physically writing code. I’ll need to know the inputs, outputs, failure parameters, etc.
Scheidt, David. Personal interview. 21 Oct. 2013.
Mr. Scheidt offered a lot of insights into AI and the field of test planning. (For example,
there’s a very large journal dedicated to the field.) He also offered a lot of illumination into
top-down vs. bottom-up A.I., and the strengths/weaknesses associated with each. Top-down AI is
easier to understand and create, but it can’t solve the same problems that bottom-up AI can.
However, bottom-up AI is much harder to create, and sometimes it’s impossible to understand
why it works. Testing it is even harder.
Mr. Scheidt has a lot of experience in the field of AI; he’s a great person to ask if you need
info on specifics of the field. However, he’s a busy man, so keep the questions few and
interesting.
“Scientific Papers.” Scitable. Nature Education, n.d. Web. 23 Mar. 2014.