Three Years of Using Robots in the Artificial Intelligence Course – Lessons
Learned
Amruth N. Kumar
Ramapo College of New Jersey
505 Ramapo Valley Road
Mahwah, NJ 07430
[email protected]
Abstract
We have been using robots in our Artificial Intelligence course since fall 2000, assigning them in
open-laboratory projects designed to emphasize high-level knowledge-based AI algorithms. After
three offerings of the course, we paused to analyze the collected data and see if we could answer the
following questions: (i) Are robot projects effective at helping students learn AI concepts? (ii) What
advantages, if any, can be attributed to using robots for AI projects? (iii) What are the downsides of
using robots for traditional projects in AI? In this paper, we discuss the results of our evaluation and
list the lessons learned.
1. Introduction
Working with robots is exciting. Many teachers and researchers have attempted to translate this
excitement into learning. In the last few years alone, numerous faculty have attempted to
incorporate robots into the undergraduate curriculum in various capacities: for non-majors, in a
survey course, across the Computer Science curriculum, for recruitment of women (e.g., [Haller and
Fossum 2001]), in Computer Science I (e.g., [Fagin 2000]) and in the Artificial Intelligence course
(e.g., [Kumar and Meeden 1998; Harlan et al 2001; Klassner 2002]).
We have been using robots in our Artificial Intelligence course [Kumar 2001] to assign projects.
Our Artificial Intelligence course is a junior/senior-level course, taken by Computer
Science majors in a liberal arts undergraduate school. The course is traditional in its content: it
covers representation and reasoning, with emphasis on search, logic and expert systems. Our
objective in using robots was to reinforce the learning of these AI tools using an embodied agent.
This is similar to the approach initially used by [Harlan et al 2001], and that by [Greenwald and
Artz 2004] for soft computing topics. Other objectives for which robots have been used include: as
an organizing theme for the various AI concepts [Kumar and Meeden 1998], as an empirical testbed for philosophical issues in a graduate course [Turner et al, 1996], as a bridge between abstract
AI theory and implementation [Shamma and Turner 1998], and as an exploration of the
relationships between hardware, environment and software in agent design [Klassner 2002].
We chose the LEGO Mindstorms (http://www.legomindstorms.com) robot for the course because it
is truly "plug-and-play". Students do not have to design circuits, or even solder components to build
LEGO robots. Therefore, the projects can focus on AI algorithms rather than robot construction.
The LEGO Mindstorms robot is also inexpensive, so we could ask students to buy their own kit,
either individually or in groups of 2 or 3 students.
In order to help students construct their robots, we recommended [Baum, 2002] as a reference. This
book describes several robots, including how to construct and program them. Students can easily
adapt these robots for their projects, and focus on building intelligent behaviors into them using AI
algorithms. Finally, we used Java and LeJos (http://lejos.sourceforge.net) with the robot. This
enabled us to utilize a larger part of the on-board main memory for programs, allowing our students
to write larger programs.
2. Our Goals and Objectives
The following principles guided how we used robots in our AI course:
• AI, not Electrical Engineering: We wanted to emphasize AI algorithms and not robot
construction. In other words, we wanted to minimize the time our students spent constructing
the robots, and maximize the time they spent implementing AI algorithms to test high-level
behavior of the robots. Constructing robots from components has greater pedagogical value to
engineering students than to Computer Science students. Some knowledge of engineering
principles is desirable for constructing robots. Constructing robots can be time-consuming, and
frustrating to the uninitiated. Since our students were Computer Science majors in a liberal arts
college, we decided to simplify robot construction by using a "plug-and-play" robot such as
LEGO MindStorms.
• AI, not Robotics: We wanted to use robot projects to teach AI, not robotics. We wanted to
emphasize knowledge-based AI algorithms and not reactive algorithms specific to robotics
[Brooks 1986]. We wanted to minimize the time students spent implementing low-level reactive
behavior in the robot, and maximize the time they spent building high-level knowledge-based
behavior traditionally considered “intelligent” in AI. For this reason, we stayed away from the
many interesting problems specific to robotics, such as localization, mapping, odometry,
landmarking, object detection, etc., that have been addressed by other practitioners, e.g., [Dodds
et al 2004; Mayer et al, 2004].
• Open, not Closed Labs: Finally, we wanted to use the robots in open laboratory projects [Tucker
et al 1991] – projects that students carry out on their own, after class. Open labs have some
advantages over closed labs: students can spend as much time as they need to finish a project.
Not only is this often necessary to properly design and test robot behavior, it also encourages
students to be more creative in their design and implementation of robots. Traditionally,
closed-lab courses are worth more credits than open-lab courses, so using open labs in a course
helped us keep down the number of credits in the curriculum.
• Clearly defined, not open-ended: We chose to assign closely tailored (as opposed to open-ended)
projects in our course. It is especially interesting to assign open-ended robot projects in a
course and let students fully exercise their imagination. Such open-ended projects can be
combined with contests to foster a healthy atmosphere of learning while playing (e.g., [Sklar et
al 2002; Verner and Ahlgren 2004]). But, many robot behaviors can be implemented using
purely reactive algorithms just as well as using knowledge-based algorithms. Since our
emphasis was on using AI algorithms, we chose to closely specify the requirements of our
projects. This not only gives students a clear idea of what is expected of them, but also helps us
formulate a clear grading policy.
Our approach differs from other current approaches for using robots in the AI course along the lines
of these objectives.
2.1 Robots for Traditional Projects
Why use robots for traditional knowledge-based AI projects? This has many pedagogical benefits:
• Using robots promotes hands-on, active learning, which increases the depth of the student’s
knowledge [McConnell 1996];
• Students can use robots to test their implementation of AI algorithms in a “situated”
environment rather than an abstract symbolic environment, which is known to promote
learning [Brown et al, 1989];
• Students have tangible ways to assess the success or failure of their implementation. Since
robots are tactile, using them helps visual and kinesthetic learners, thereby addressing the
needs of more and different types of learners than a traditional AI course.
Finally, robots excite and motivate students.
We believe that using robots in the Artificial Intelligence course also helps students better
understand topics such as:
• Algorithm Analysis: Students get a better feel for the time and space complexities of
algorithms, and an appreciation of algorithm complexity issues.
• Real-time programming: A topic not well represented in typical undergraduate liberal arts
Computer Science curricula is real-time programming (non-determinism, concurrency, promptly
responding to changes in the environment). Using robots addresses this issue briefly, but
effectively.
• Group Projects: [Maxwell and Meeden, 2000] state that what students learned the most from
their robotics course was about teamwork. Assigning robot projects as group projects is a great
way to offer additional opportunities for collaborative learning in the curriculum.
Some questions worth considering in this context are: (i) Are robot projects effective at helping
students learn traditional AI concepts? (ii) Are there downsides to using robots for traditional
projects in AI? (iii) What other advantages, if any, can be attributed to using robots for AI projects?
We will attempt to answer these questions.
3. Typical Projects
The topics that we identified as candidates for robot projects in the introductory Artificial
Intelligence course are:
• Blind searches – depth-first search and breadth-first search;
• Informed searches – hill climbing, best-first search and A* search;
• Expert systems – forward chaining and backward chaining;
• Game playing – Minimax search and alpha-beta cutoffs.
For each topic, we designed a project that engaged the robot in a meaningful task from the real
world. Clearly, all these projects could be implemented just as easily as purely symbolic solutions.
But robot-based solutions are more natural and intuitively appealing than symbolic solutions for
these problems.
Blind searches: The robots had to use blind search algorithms to either traverse a two-dimensional
tree (See Figure 1), or a maze. Although these problems can also be solved purely reactively, we
required that students use knowledge-based algorithms in their robots.
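As an illustration, the following is a minimal sketch of how such a knowledge-based depth-first
traversal might be organized in Java (the language we used with LeJos). The Maze and RobotDriver
interfaces are hypothetical stand-ins for a student's own maze model and motion code, not part of
any library:

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Set;

    public class DepthFirstMazeSearch {

        // Hypothetical maze model: rooms are numbered; neighbors() lists adjacent rooms.
        interface Maze {
            int start();
            boolean isGoal(int room);
            int[] neighbors(int room);
        }

        // Hypothetical motion layer; travelTo() may backtrack through already-visited rooms.
        interface RobotDriver {
            void announceNext(int room);   // externalize the robot's intent before moving
            void travelTo(int room);
        }

        public static boolean search(Maze maze, RobotDriver robot) {
            Deque<Integer> frontier = new ArrayDeque<>();   // LIFO frontier gives depth-first order
            Set<Integer> visited = new HashSet<>();
            frontier.push(maze.start());

            while (!frontier.isEmpty()) {
                int room = frontier.pop();
                if (!visited.add(room)) {
                    continue;                               // already explored
                }
                robot.announceNext(room);                   // announce the state, then act on it
                robot.travelTo(room);
                if (maze.isGoal(room)) {
                    return true;
                }
                for (int next : maze.neighbors(room)) {
                    if (!visited.contains(next)) {
                        frontier.push(next);
                    }
                }
            }
            return false;                                   // exhausted the maze without reaching the goal
        }
    }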
Informed searches: The robots had to use informed search algorithms to either traverse a maze
(See Figure 2) whose layout is known, or to clean a grid of rooms while minimizing travel. A
successful robot not only reaches the end of the maze (which can be accomplished through reactive
behavior alone), but also explores the maze in the order dictated by the algorithm, forgoing parts of
the maze that are sub-optimal.
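A corresponding sketch for this project might order the frontier by f = g + h, with the heuristic
supplied by the known maze layout. As before, the Maze and RobotDriver interfaces are hypothetical
stand-ins, not the actual project code or any LeJos API:

    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.PriorityQueue;

    public class AStarMazeSearch {

        // Hypothetical maze model with step costs and a heuristic estimate to the exit.
        interface Maze {
            int start();
            boolean isGoal(int room);
            int[] neighbors(int room);
            int stepCost(int from, int to);
            int heuristic(int room);
        }

        // Hypothetical motion layer, as in the depth-first sketch.
        interface RobotDriver {
            void announceNext(int room);
            void travelTo(int room);
        }

        public static boolean search(Maze maze, RobotDriver robot) {
            Map<Integer, Integer> g = new HashMap<>();   // best known cost to each room
            PriorityQueue<Integer> open = new PriorityQueue<>(
                    Comparator.comparingInt((Integer r) -> g.get(r) + maze.heuristic(r)));

            g.put(maze.start(), 0);
            open.add(maze.start());

            while (!open.isEmpty()) {
                int room = open.poll();                  // room with the lowest f = g + h
                robot.announceNext(room);                // externalize intent before moving
                robot.travelTo(room);
                if (maze.isGoal(room)) {
                    return true;
                }
                for (int next : maze.neighbors(room)) {
                    int cost = g.get(room) + maze.stepCost(room, next);
                    if (!g.containsKey(next) || cost < g.get(next)) {
                        g.put(next, cost);
                        open.remove(next);               // re-queue with the improved cost
                        open.add(next);
                    }
                }
            }
            return false;
        }
    }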
Expert Systems: The robots had to use forward and backward chaining to scan a pixel grid and
determine the character represented by the grid. We have used two versions of the pixel grid – one
where the robot traverses the grid (See Figure 3) and another where it scans the grid with an
overhanging scanner arm (See Figure 4). A successful robot not only correctly identifies the
character, but also traverses the pixel grid in the order dictated by the algorithm.
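The knowledge-based core of such a project can be kept independent of the sensing code. The
following minimal sketch of forward chaining assumes hypothetical fact strings (for example, dark
and light pixels reported by the scan) and a hand-written rule base; it is illustrative only, not the
actual project code:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class ForwardChainingRecognizer {

        // A rule fires when all of its premises are among the known facts.
        static class Rule {
            final Set<String> premises;
            final String conclusion;
            Rule(Set<String> premises, String conclusion) {
                this.premises = premises;
                this.conclusion = conclusion;
            }
        }

        // Forward chaining: keep applying rules until no new fact can be derived.
        public static Set<String> chain(Set<String> initialFacts, List<Rule> rules) {
            Set<String> known = new HashSet<>(initialFacts);
            boolean changed = true;
            while (changed) {
                changed = false;
                for (Rule rule : rules) {
                    if (known.containsAll(rule.premises) && known.add(rule.conclusion)) {
                        changed = true;   // a new conclusion was asserted; re-scan the rule base
                    }
                }
            }
            return known;   // contains a fact such as "character=E" if recognition succeeded
        }
    }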
Game Playing: The robot had to play “mini-chess” with a human opponent. The chess board was 5
x 5, contained only pawns, and the robot learned of its opponent’s moves through keyboard input.
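A minimal sketch of the underlying move selection, assuming a hypothetical GameState abstraction
for the 5 x 5 pawn game (board representation, move generation and the evaluation function are left
to the student), might look as follows:

    import java.util.List;

    public class AlphaBetaPlayer {

        // Hypothetical game-state abstraction for the 5 x 5 pawn game.
        interface GameState {
            boolean isTerminal();
            int evaluate();                  // static evaluation from the robot's point of view
            List<GameState> successors();    // states reachable in one legal move
        }

        // Minimax with alpha-beta cutoffs; call as
        // alphaBeta(root, depth, Integer.MIN_VALUE, Integer.MAX_VALUE, true).
        public static int alphaBeta(GameState state, int depth, int alpha, int beta, boolean maximizing) {
            if (depth == 0 || state.isTerminal()) {
                return state.evaluate();
            }
            if (maximizing) {
                int best = Integer.MIN_VALUE;
                for (GameState next : state.successors()) {
                    best = Math.max(best, alphaBeta(next, depth - 1, alpha, beta, false));
                    alpha = Math.max(alpha, best);
                    if (alpha >= beta) {
                        break;               // beta cutoff: the opponent will never allow this line
                    }
                }
                return best;
            } else {
                int best = Integer.MAX_VALUE;
                for (GameState next : state.successors()) {
                    best = Math.min(best, alphaBeta(next, depth - 1, alpha, beta, true));
                    beta = Math.min(beta, best);
                    if (alpha >= beta) {
                        break;               // alpha cutoff
                    }
                }
                return best;
            }
        }
    }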
In each case, the students could practice on one version of the prop, but had to demonstrate their
robot on a different version that was not made available to them until the day of submission. Human
assists were discouraged, but penalized only infrequently. The size of the props (e.g., the number of
rooms in a grid or maze, the number of nodes in a tree, and the number of characters displayable on
a pixel grid) was limited by the robot’s main memory. Students could either work individually or in
teams of two.
Figure 1: A Two-Dimensional Tree for Blind Searches
Figure 2: The Maze – for Blind and Informed Searches
Figure 3: Pixel Grid for Forward and Backward Chaining Expert Systems
Figure 4: Pixel Grid and Scanner Arm of a Robot
4. Evaluation of Robot Projects
We have offered our AI course with robot projects three times so far. In this section, we will discuss
the results of evaluating the robot projects each semester and compare them to draw conclusions
that span all three years.
4.1 Fall 2000
This was the first time we offered the AI course with robot projects. We conducted an anonymous
survey of our students at the end of the semester. In the survey, we asked them to compare robot
projects with traditional LISP projects, or projects in other courses they had taken with the same
instructor (all but 3 of the respondents had taken other courses with the same instructor). There
were 16 students in the class, all of whom responded to the survey.
Compared to projects in other courses, students rated the robot projects in AI as:
• hard, i.e., 3.85 on a Likert scale of 1 (very easy) to 5 (very hard).
• taking a lot more time, i.e., 4.56 on a scale of 1 (lot less time) to 5 (lot more time)
• more interesting, i.e., 4.18 on a scale of 1 (lot less interesting) to 5 (lot more interesting)
So, our initial hunch in using robots in the course was right: even though students spent a lot more
time doing robot projects, they also enjoyed the projects a lot more. Students agreed that the robot
projects:
• helped them learn/understand AI concepts better (2.06 on a scale of 1 (Strongly Agree) to 5
(Strongly Disagree))
• gave them an opportunity to apply/implement AI concepts that they had learned (1.93 on the
above scale)
They rated the various components of the projects on a scale of 1 (easy) to 3 (hard) as follows:
• the assigned problems tended to be hard: 2.4.
• putting together robot hardware was easy to moderate: 1.71
• writing software for the robot was moderate: 2.00
• getting the robot to work reliably was hard: 2.87
Clearly, building the robot was the easiest part of the project, which validated our choice of LEGO
robots for the course.
Students were nearly neutral on whether the grade they received on the projects accurately reflected
how much they had learned from the projects (2.61 on a Likert scale of 1 (Strongly Agree) to 5
(Strongly Disagree)) and on whether their grades were an accurate reflection of how much time
they had spent doing the projects (3.30 on the same scale).
Students were unanimous in recommending that we continue to assign robot projects in future
offerings of the course. Over 90% said that they would recommend such a course to friends. Both of
these indicated that robot projects had captured the imagination of the students, and were therefore
effective.
4.2 Fall 2001
We used projects in fall 2001 similar to those used in fall 2000, and evaluated them using an
anonymous survey at the end of the semester. Nine of the 11 students in the class responded.
Compared to projects in other courses, students rated the robot projects in AI as:
• hard, i.e., 4.33 on a Likert scale of 1 (very easy) to 5 (very hard).
• taking a lot more time, i.e., 4.78 on a scale of 1 (lot less time) to 5 (lot more time)
• more interesting, i.e., 3.44 on a scale of 1 (lot less interesting) to 5 (lot more interesting)
They rated the various components of the projects on a scale of 1 (very easy) to 5 (very hard) as
follows:
• neutral about the assigned problems: 3.22
• putting together robot hardware was easy: 2.44
• writing software for the robot was between neutral and hard: 3.56
• getting the robot to work reliably was very hard: 4.78
Students were neutral on whether the grade they received on the projects accurately reflected how
much effort they had put into the projects (3.43 on a Likert scale of 1 (Strongly Agree) to 5
(Strongly Disagree)). They were neutral on whether their grades were an accurate reflection of how
much they had learned from the projects (3.25 on the same scale).
These figures are interesting in how consistent they are with the figures from fall 2000, as shown in
Table 1. In order to facilitate comparison, we have scaled the Section B scores of fall 2000 from a
three-point scale to a five-point scale. N refers to the number of students who evaluated the projects.
Table 1: Comparison of evaluation results from fall 2000 and fall 2001

Criterion                                                               Fall 2000 (N=16)   Fall 2001 (N=9)
A. Robot projects compared to traditional projects:
   The ease of robot projects (1 = very easy, 5 = very hard)                  3.85               4.33
   Time taken by robot projects (1 = lot less time, 5 = lot more time)        4.56               4.78
   How interesting robot projects were (1 = lot less, 5 = lot more)           4.18               3.44
B. Student rating of the components of the robot projects
   (1 = very easy, 5 = very hard):
   The assigned problems                                                      4.00               3.22
   Assembling robot hardware                                                  2.85               2.44
   Writing software for the robot                                             3.33               3.56
   Getting the robot to work reliably                                         4.78               4.78
C. Whether project grades reflected
   (1 = Strongly Agree, 5 = Strongly Disagree):
   The effort put in by students on the project                               3.30               3.43
   How much students had learned from doing the project                       2.61               3.25
Even though students consistently rated robot projects as being harder and a lot more time-consuming
than traditional projects, they also rated robot projects as being a lot more interesting
than traditional projects, highlighting one advantage of using robots for traditional projects in AI –
that they are more engaging, and therefore, more effective. Students were consistent in thinking that
getting the robots to work reliably was the hardest part of a robot project. They were also
consistently neutral about whether project grades reflected the effort they put into the projects. That
they reported spending a lot more time on robot projects may have tempered their opinion on this
issue.
4.3 Fall 2003
This was the third time we offered the Artificial Intelligence course with robot projects. Instead of a
single end-of-the-semester evaluation of the use of robots in the course, we evaluated every project
individually. We believed that evaluating each project individually, as soon as it was completed,
would provide a more accurate picture than an end-of-semester evaluation of all the projects
together. Previous evaluations had shown that getting the robot to work reliably was the hardest part
of any robot project. Therefore, in fall 2003, we relaxed the project requirements in several ways:
(i) the environment/props were more flexible; (ii) students could assist their robot without being
penalized, as long as the robot announced the correct state. The state announcements of a robot were
now used to judge its success/failure. (These changes are described in the next section, “Lessons
Learned”.)
We wanted to assess the impact of using robots on students' knowledge of analysis of algorithms.
So, we drafted a test consisting of 10 multiple choice questions. We administered the test both at the
beginning of the semester, and again at the end of the semester. Students did not have access to the
test in the interim. The scores of 5 students increased from pretest to post-test; the scores of 3
students stayed the same, and the scores of 2 students decreased. We discarded the scores of
students who took one test and not the other. We cannot draw any definitive conclusions because of
the small sample size, but the numbers are encouraging.
On our evaluation of the first project, 11 students responded (class size was 12). Respondents rated
the various aspects of the project as follows, on a scale of 1 (very easy) to 5 (very hard):
• Neutral about building the robot: 2.82
• Neutral about writing the program: 2.9
• Getting the robot to work reliably was hard: 4.18
Students rated the components of the projects as follows on a scale of 1 (Strongly agree) to 5
(Strongly disagree):
• For Depth-first search:
o helped them understand the algorithm: 2.55
o helped them learn how to implement the algorithm: 2.45
o helped them apply the algorithm to a problem: 2.0
• For Hill-Climbing:
o helped them understand the algorithm: 2.18
o helped them learn how to implement the algorithm: 2.55
o helped them apply the algorithm to a problem: 2.27
On a concept quiz, students who did not attempt the project did poorly as compared to those who
did.
On our evaluation of the second project, 12 students responded. Respondents rated the various
aspects of the project as follows on a scale of 1 (very easy) to 5 (very hard):
• Building the robot was easy: 2.08
• Neutral about writing the program: 2.92
• Getting the robot to work reliably was neutral to hard: 3.58
Students rated the components of the project as follows on a scale of 1 (Strongly agree) to 5
(Strongly disagree):
• For Best-first search:
o helped them understand the algorithm: 2.17
o helped them understand how to implement the algorithm: 2.08
o helped them apply the algorithm to a problem: 2.08
• For A* search:
o helped them understand the algorithm: 2.25
o helped them understand how to implement the algorithm: 2.25
o helped them apply the algorithm to a problem: 2.25
On a concept quiz, students who did not attempt the project did poorly as compared to those who
did.
On our evaluation of the third project, 10 students responded. Respondents rated the various aspects
of the project as follows on a scale of 1 (very easy) to 5 (very hard):
• Building the robot: 3.3. This is to be expected since students had to build a scanner arm,
which had an overhang and used rack and pinion gears (See Figure 4).
• Writing the program: 2.9
• Getting the robot to work reliably: 4.0
Students rated the components of the project as follows on a scale of 1 (Strongly agree) to 5
(Strongly disagree):
• For forward chaining:
o helped them understand the algorithm: 2.2
o helped them understand how to implement the algorithm: 2.2
o helped them apply the algorithm to a problem: 2.2
• For backward chaining:
o helped them understand the algorithm: 2.1
o helped them understand how to implement the algorithm: 2.1
o helped them apply the algorithm to a problem: 2.2
On the fourth project, since there were only 3 respondents, we did not analyze the results. Table 2
summarizes the student responses for the first three projects. N refers to the number of students who
evaluated the projects.
Table 2: Comparison of evaluation results from fall 2003

Criterion                                                 Project 1   Project 2   Project 3
                                                           (N=11)      (N=12)      (N=10)
Rating the components of the project
(1 = very easy, 5 = very hard):
   Building the robot                                        2.82        2.08        3.30
   Writing the program                                       2.90        2.92        2.90
   Getting the robot to work reliably                        4.18        3.58        4.00
For the first algorithm, the project helped
(1 = Strongly agree, 5 = Strongly disagree):
   Understand the algorithm                                  2.55        2.17        2.20
   Understand how to implement the algorithm                 2.45        2.08        2.20
   Apply the algorithm to a problem                          2.00        2.08        2.20
For the second algorithm, the project helped
(1 = Strongly agree, 5 = Strongly disagree):
   Understand the algorithm                                  2.18        2.25        2.10
   Understand how to implement the algorithm                 2.55        2.25        2.10
   Apply the algorithm to a problem                          2.27        2.25        2.20
Students were again consistent in noting that getting the robot to work reliably was the hardest part
of a robot project. For all six algorithms, students agreed that the robot project helped them
understand the algorithm, how to implement it, and how to apply it to a problem. Clearly, students
believe that robot projects help them learn the underlying AI concepts. The results were most
consistent, and hence most definitive, in showing that robot projects help students learn how to
apply an algorithm to a problem. We believe that this is one of the advantages of using robots for
traditional projects in AI – in the traditional symbolic world, concepts such as states and operators
are exact, whereas in the robot world, they are subject to approximation and interpretation. This
forces students to focus on the boundary between real-life problems and their AI solutions, and
helps them better develop their skills of operationalizing AI algorithms, i.e., applying them to solve
problems. We are pleased that this is borne out by the results of the evaluation.
We analyzed the results of the midterm and final exam to see if there was a correlation between
project completion and grades. We considered anyone who scored at least 60% on a project as
having completed the project. Table 3 summarizes the results. For instance, the first project was on
depth-first search and hill climbing. On the midterm exam, 40% of the first question was on depth-first search. The sole student who did not attempt the first project scored 3 (out of 10) points on this
question. The scores of the rest of the students ranged from 7 through 10, with an average of 8.4.
N/A in the table indicates that the student did not attempt the question - students were asked to
answer 6 out of 7 questions on the midterm exam and the final exam. It is clear from the table that
there is a positive correlation between project completion and student scores on relevant sections of
the tests.
Table 3: Comparison of the test scores of the students who completed the projects,
versus those who did not.

Problem                  Topic & Points                           Scores (out of 10) of students who       Did not attempt
                                                                  attempted the project                    the project
Project 1: Depth-first search and Hill Climbing
   Midterm Problem 1      4 points on depth-first search          Range: 7 to 10 (11 scores); Average: 8.4        3
   Midterm Problem 3      6 points on hill climbing               Range: 3 to 10 (11 scores); Average: 7.5        N/A
Project 2: Best-first search and A*
   Midterm Problem 5      8 points on A* search                   Range: 6 to 9 (9 scores); Average: 7.7          4, 5, 9
Project 3: Forward and backward chaining
   Final Exam Problem 5   6 points on forward/backward chaining   Range: N/A to 10 (10 scores); Average: 5.7      4, N/A
4.4 Analysis Across the Semesters
We considered any student who scored at least 1 point on a project as having attempted the project,
and anyone who scored at least 60% of the points as having completed the project. Table 4 lists the
number of students who attempted and completed each project from fall 2000 through fall 2003. N
refers to class size, not including any students who audited the course.
Table 4: Project completion rates from fall 2000 through fall 2003

Semester               Project 1   Project 2   Project 3   Project 4
Fall 2000 (N=15)
   Attempted              10           8          13          N/A
   Completed               6           6           9          N/A
Fall 2001 (N=11)
   Attempted              11           9           8            9
   Completed              11           9           6            7
Fall 2003 (N=12)
   Attempted              11           9          10            2
   Completed              11           9          10            1
Most students who attempted a project also completed it. 47% of the students completed the
projects in fall 2000, 75% in fall 2001, and 65% in fall 2003. Not many students attempted project 4
in fall 2003 because it was assigned in the last three weeks of the semester. If we discount it, the
project completion rate jumps to 83% in fall 2003. These improvements in completion rates over
the years may be attributed to the changes to our projects described in the “Lessons Learned”
section. At over 65%, the completion rates are comparable to those of traditional projects in an
upper-level elective, suggesting that project completion rates do not suffer when robots are used for
AI projects.
Students reported spending large amounts of time on the projects. In fall 2000, they spent an
average of 30.5 hours on the first project (which had four components, and was arguably the largest
project), 20.2 hours on the second project, and 12.6 hours on the third and arguably easiest
project. In fall 2001, they reported spending an average of 8.2 hours on the first (and easiest) project,
36.25 hours on the second project, 32.5 hours on the third project, and 12.2 hours on the fourth
project (on Expert Systems). Within each project, students reported wildly different amounts of
time – for instance, on the fourth project in fall 2001, the range was 3 hours to 30 hours! The
complexity of the robot, its programming and testing all play a role in how much time a student
spends on each project. For instance, students who took advantage of callbacks wrote significantly
shorter programs (and spent commensurately less time) than those who tried to deterministically
model the entire behavior of the robot, because callbacks permit modular separation of the reactive
and planning components. Students who used a three-wheel construction spent longer hours testing
the robot than those who built the robot with dual differentials and four wheels.
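To illustrate the callback style, the sketch below separates a reactive sensor handler from the
planning loop. The LightSensor, LightListener and Drive interfaces are hypothetical placeholders;
the actual LeJos listener API differs in names and signatures:

    public class CallbackNavigator {

        // Hypothetical reactive hook: fired by the sensor layer when a line/landmark is crossed.
        interface LightListener {
            void onLandmarkDetected();
        }

        interface LightSensor {
            void addListener(LightListener listener);
        }

        interface Drive {
            void forward();
            void stop();
        }

        private final Drive drive;
        private volatile boolean atLandmark = false;

        public CallbackNavigator(LightSensor sensor, Drive drive) {
            this.drive = drive;
            // The reactive behavior lives entirely in this callback;
            // the planner never models the low-level sensor readings.
            sensor.addListener(() -> {
                drive.stop();
                atLandmark = true;
            });
        }

        // Planning layer: issue one high-level move and wait for the reactive layer to finish it.
        public void travelToNextLandmark() throws InterruptedException {
            atLandmark = false;
            drive.forward();
            while (!atLandmark) {
                Thread.sleep(10);   // poll a flag instead of deterministically modeling robot motion
            }
        }
    }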
In fall 2000, there was a trend from individual effort towards team effort as the semester
progressed: whereas 50% of the students attempted the first project by themselves, 60% or more
attempted the second and third projects in groups. The students strongly recommended that group
projects be allowed in future offerings of this course (1.42 on a scale of 1 (Strongly recommend) to
5 (Strongly do not recommend)). Yet, in fall 2003, all but two students chose to work on the first
three projects individually. Simplifying the environment and the criteria for the success of the robot
project may have contributed to this trend.
5. Lessons Learned
It is clear from our evaluations that one of the hardest parts of robot projects is to make the robot
behave reliably, time after time. Students often found that their robot would work correctly on trial
runs, but would fail to work during submission. Many factors contribute to the unpredictability of a
robot, including:
• The many analog components in a robot, such as sensors and motors;
• The batteries driving a robot, and their level of charge;
• The mechanical design of the robot.
Hardware Guidelines: The following are some mechanical design considerations that can alleviate
the problem of unpredictable robots:
• Treads and thick wheels should be avoided – they generate more friction than thin/bicycle
wheels and adversely affect turning and motion, especially as the batteries discharge.
• A three-wheel design should be avoided – the third wheel makes turning unpredictable,
especially when it is a swivel wheel. Two wheels and a stub arrangement should also be
avoided – the stub generates additional friction as it drags on the floor. Preferably, a four-wheel
design with dual differentials should be used, and the robot should be turned by
changing the direction of rotation of the front and rear wheels.
• The load/weight should be balanced on the front and rear wheels to avoid the skateboard effect.
• Light sensors should be mounted as close to the ground as possible. Preferably, the sensor
should be calibrated at run-time.
While our experience has been specific to LEGO MindStorms robots, we believe these design
guidelines are applicable to most student-assembled robot platforms.
Project Design Guidelines: The following are some guidelines for designing robot projects that
alleviate the problem of unpredictable robot behavior:
• The robot should externalize its state before each move. This will help the instructor and
the student evaluate the correctness of the underlying knowledge-based algorithm, even if
the robot goes off-course using dead reckoning. For instance, a robot that scans a pixel grid
should announce the next pixel it intends to examine at each step. A robot that cleans rooms
should announce the coordinates of the next room it will visit, before traveling to it.
• The environment/prop should be flexible, not rigid. A flexible environment can
significantly alleviate the problems arising from the unpredictability of robot behavior. For
instance, consider the design of a maze. A maze with fixed walls (e.g., Figure 2) is less
forgiving than one that is constructed with walls that can be moved during the
demonstration (e.g., Figure 5). In a fixed-wall maze, if a robot turns by 75 instead of 90
degrees, the robot may end up in a room other than the one it intended to visit. In a
moveable-wall maze, if a robot is found to turn incorrectly, the wall that it should have
encountered can be moved into its path, essentially making the room it visits the one it
intended to visit! As long as the focus of the robot project is a knowledge-based algorithm
(and not robotics), and the robot announces its state and intention before each movement,
moving walls to address errors in the navigation of a robot is inconsequential to the
correctness of the project.
• Finally, the students should be provided the option of submitting an unedited videotape of
their robot in action, instead of demonstrating the robot in the instructor’s presence. In our
experience, in-person demonstrations often turn into marathon sessions for both the students
and the instructor, since the students do not want to give up trying to get their robot to work
correctly.
Finally, it is not always possible or advisable to transplant a symbolic AI project into the robot
domain. For instance, consider the issue of chronological backtracking as a robot tries to find its
way out of a room. Unless the robot uses reliable dead reckoning, it will fail to backtrack to the
correct prior location in the room. Even if it can backtrack to the correct location, depending on the
placement of obstacles in the room, stopping to recalculate its next move at this location may not be
the most reasonable next course of action.
Using search on a two-dimensional tree (Figure 1) is an example of a symbolic project transplanted
into the robot domain – it adds to the complexity of the solution (because of the unreliability of
robot motion) without adding to the engagement factor of the problem. Recasting the same problem
as a search through a maze (Figure 2) not only makes the problem more interesting, but also
engages the strengths of the robot – processing input from multiple sensors, collision detection, the
ability to explore and map a space, etc. Our experience has been that providing a real-life context is
a good way to adapt a traditional project for robots. Real-life contexts typically demand
approximation instead of exactness, allow recovering from a mistake instead of undoing one, and
reward dynamically reacting to a changing environment, all strengths of robots. As added bonuses,
real-life contexts typically make projects more interesting and engaging, and help students
understand how to operationalize an algorithm.
Figure 5: A Maze Built with Moveable Walls
6. Discussion
Analysis of the results from fall 2003 clearly shows that there is a positive correlation between
project completion and test scores. Moreover, students themselves believe that robot projects help
them learn the underlying AI concepts. Therefore, robot projects are effective at helping students
learn AI concepts.
In both fall 2000, and fall 2001, students consistently rated robot projects as being harder and a lot
more time consuming than traditional projects. Across all three years, they confirmed that getting
the robots to work reliably was the hardest part of a robot project. These are clearly some
drawbacks of using robots for traditional projects in the AI course.
In both fall 2000, and fall 2001, students consistently rated robot projects as being a lot more
interesting than traditional projects. This confirms one advantage of using robots for traditional
projects in AI – they are more engaging, and therefore, more effective. In fall 2003, students
consistently agreed that robot projects help them learn how to apply the algorithm to a problem.
This is another advantage of using robots for traditional projects in AI – robot projects help students
better operationalize AI algorithms than symbolic projects. Finally, our evaluation in fall 2003,
while preliminary, seems to support the assertion that using robot projects promotes the
understanding of algorithm complexity issues.
So, is it worth using robots for traditional projects in the AI course? The answer is: it depends. Yes,
if we consider the excitement and engagement generated among students; maybe, if we go only by
how much students learn; and no, if we consider the time and effort that robot projects demand from
both the students and the instructor. We plan to continue to assign robot projects in our AI course.
We plan to develop additional knowledge-based robot projects on advanced topics. Finally, we plan
to continue to evaluate the use of robot projects in our AI course.
Acknowledgments
Partial support for this work was provided by the National Science Foundation’s Course,
Curriculum and Laboratory Improvement Program under grant DUE-0311549.
The author thanks the anonymous reviewers whose comments helped significantly improve this
paper and its findings.
References Cited
[1] Baum, D., Dave Baum's Definitive Guide to Lego MindStorms. Apress Publishers.
www.apress.com. 2nd Edition, 2002.
[2] Brooks, R.A., “A Robust Layered Control System for a Mobile Robot”, IEEE Journal of
Robotics and Automation, Vol. 2(1), March 1986, pp 14-23.
[3] Brown, J.S., Collins, A., and Duguid, P., “Situated Cognition and the Culture of Learning”,
Educational Researcher, 18(1), 1989, 32-42.
[4] Dodds, Z., Santana, S., Erickson, B., Wnuk, K., Fisher, J., and Livianu, M. Teaching Robot
Localization with the Evolution ER1. Accessible Hands-on Artificial Intelligence and Robotics
Education, AAAI Spring Symposium Technical Report SS-04-01, 2004, 18-23.
[5] Fagin, B., “Using Ada-Based robotics to teach Computer Science”. In Proceedings of the 5th
Annual Conference on Innovation and Technology in Computer Science Education (ITICSE
2000), New York, NY: The Association for Computing Machinery, 2000, 148-155.
[6] Fossum, T.V., Haller, S.M., Voyles, M.M., and Guttschow, G.L., “A gender-based study of
elementary school children working with Robolab”, Technical Report of the AAAI Spring
Symposium Workshop on Robotics in Education, Stanford University, March 2001.
[7] Greenwald, L. and Artz, D. “Teaching Artificial Intelligence with Low-Cost Robots”,
Accessible Hands-on Artificial Intelligence and Robotics Education, AAAI Spring Symposium
Technical Report SS-04-01, 2004, 35-40.
[8] Harlan, R., Levine, D., and McClarigan, S. The Khepera Robot and the kRobot Class: A
Platform for Introducing Robotics in the Undergraduate Curriculum. Proceedings of 32nd
SIGCSE Technical Symposium on Computer Science Education, 2001, 105-109.
[9] Klassner, F. A Case Study of LEGO MindStorms Suitability for Artificial Intelligence and
Robotics Courses at the College Level, Proceedings of 33rd SIGCSE Technical Symposium on
Computer Science Education, 2002, 8-12.
[10] Kumar, D. and Meeden, L., “A Robot Laboratory for Teaching Artificial Intelligence”. In
Proceedings of the Twenty-Ninth ACM SIGCSE Technical Symposium (SIGCSE '98), New
York, NY: The Association for Computing Machinery, 1998, 341-344.
[11] Kumar, A. “Using Robots in the Undergraduate Artificial Intelligence Course: An
Experience Report”, Proceedings of FIE 2001, 2001, Session T4D.
[12] Maxwell, B.A., and Meeden, L.A., “Integrating Robotics Research with Undergraduate
Education”, IEEE Intelligent Systems, November/December 2000, 2-7.
[13] Mayer, G.R., Weinberg, J.B., and Yu, X. Teaching Deliberative Navigation Using the
LEGO RCX and Standard LEGO Components. Accessible Hands-on Artificial Intelligence and
Robotics Education, AAAI Spring Symposium Technical Report SS-04-01, 2004, 30-34.
[14] McConnell, J.J., “Active Learning and its use in Computer Science”, In Proceedings of the
Conference on Integrating Technology into Computer Science Education (ITICSE 1996), New
York, NY: The Association for Computing Machinery, 1996, 52-54.
[15] Shamma, D.A. and Turner, C.W. 1998. Teaching the Foundations in AI: Mobile Robots and
Symbolic Victories. In Proceedings of the Eleventh International Florida Artificial Intelligence
Research Symposium Conference (FLAIRS ’98), 29-33. Menlo Park, CA: AAAI Press.
[16] Sklar, E., Eguchi, A., and Johnson, J. Robocupjunior: Learning with Educational Robotics.
In Proceedings of the 6th Robocup Symposium, 2002.
[17] Tucker, A. et al.: Computing Curricula 1991, Communications of the Association for
Computing Machinery, 34 (June 1991): 68-84.
[18] Turner, C.; Ford, K.; Dobbs, S.; Suri, N.; and Hayes, P. 1996. Robots in the Classroom. In
Proceedings of the Ninth Florida Artificial Intelligence Research Symposium (FLAIRS '96),
497-500. Florida AI Research Society.
[19] Verner, I.M. and Ahlgren, D.J. Robot Contests: Promoting Experiential Engineering
Education. Accessible Hands-on Artificial Intelligence and Robotics Education, AAAI Spring
Symposium Technical Report SS-04-01, 2004, 141-145.