Formative Evaluation
Contents
1. Overview of Evaluation
2. Methods
3. Case study: LeActiveMath
4. References
Some material is based on Ainsworth's AIED 2003 tutorial on Evaluation Methods for Learning Environments; see the AILE course web page and this link:
http://www.psychology.nottingham.ac.uk/staff/sea/Evaluationtutorial.ppt
3. Case study: formative evaluation of LeActiveMath
LeActiveMath: Formative Evaluation
Formative evaluation ran continuously throughout the project.
Iterative process: development → testing → further development
At school and university levels, in Spain, Germany, and the UK.
LeAM Evaluation
Learner Model Evaluation:
- Initial user evaluation: formative evaluation with users, resulting in an improved OLM
School Evaluation: formative and summative
University Evaluation: formative and summative
Initial User Evaluation: Dec 2005
To inform the usability and usefulness of the OLM
1 user + the initial design of the OLM
“Think aloud” protocol
Issues identified include:
- Need for clear instruction in the OLM and its underlying concepts
- Add numeric scales to bar charts
- Consider help boxes in different parts of the OLM
- Some confusion regarding colour grading
- Re-define the confidence bar when the user disputes a claim
- Change 'warrant' to a more intuitive word
- Clarify the wording of the validation buttons in the Disagree view
Various changes were made in response.
Mastery colours during Usability Study
Usability Study of Mastery Colours
To identify whether the interface was effective, efficient, and suitable for learning calculus online
5 German undergraduates and 5 high-school students
- 6 users are sufficient for detecting 90% of usability issues (Nielsen, 1994); see the sketch below
Standardised questionnaires, interviews, and task-oriented evaluation, plus a “Think aloud” protocol
Results:
- Users did not recognise the mastery bullets as representing their knowledge: they interpreted them as exercise difficulty
- Problems with the use and purpose of the Book Creation tool
Proposal:
- Descriptions and tooltips added to explain the Mastery Colours
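For context on the "6 users" rule of thumb: Nielsen models the proportion of usability problems found by n evaluators as 1 − (1 − λ)^n, where λ is the average per-user detection rate (about 0.31 in Nielsen's data; the exact value varies by study). A minimal sketch of the arithmetic:

```python
# Nielsen's cumulative-detection model: the proportion of usability
# problems found by n evaluators, each detecting any given problem
# with probability lam (~0.31 in Nielsen's published data).
def proportion_found(n: int, lam: float = 0.31) -> float:
    return 1.0 - (1.0 - lam) ** n

for n in range(1, 8):
    print(f"{n} users: {proportion_found(n):.0%}")
# 6 users come out at ~89%, the basis of the "90% of issues" claim.
```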
Formative Evaluation of xLM
Users' interpretation of mastery colours, understanding of the LM, and how it could benefit them
11 first-year university maths students (6 F / 5 M)
Questionnaires (more specific) and task-oriented evaluation, plus a “Think aloud” protocol
Results:
- Instruction required to understand the Mastery Colours
- Again, misunderstanding of their role
- Thought there should be more than 4 levels
- Once they had located the tool, all were able to create their own book
- Not sure what the items represented
- Did not realise the book related to the LM
Outcomes of Formative Evaluation of xLM
Mastery colours still not intuitive, but learners could decipher the meaning without assistance; more levels needed
Discussion between the Development and Evaluation teams led to the traffic-light scheme being extended to 6 levels, each covering the same proportion of knowledge (20%); see the sketch below
Learners liked being able to create a book, but did not realise it was tailored to their learner model
The Book Creation tool received a complete overhaul:
- Six book categories: learners can generate a range of books for different purposes, e.g. practising for an exam or rehearsing a topic
- The Book Creation Wizard makes constant reference to the resulting book being tailored to the learner model
- The relationship between the items chosen and the resulting structure of the book is now clearer
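To make the extended scheme concrete, here is a sketch of mapping a mastery value in [0, 1] onto six traffic-light-style levels. The slides only specify six levels, each covering the same proportion of knowledge (20%); treating level 0 as "no mastery yet" with five 20% bands above it is one possible reading, and the colour names are illustrative, not the project's actual palette.

```python
# Sketch of the six-level mastery scale: level 0 for no recorded
# mastery (an assumption), then five 20% bands. Colour names are
# illustrative; LeActiveMath's actual palette may differ.
LEVEL_COLOURS = ["grey", "red", "orange", "yellow", "light green", "green"]

def mastery_level(mastery: float) -> int:
    """Map a mastery value in [0, 1] to a level index 0-5."""
    if mastery <= 0.0:
        return 0
    # each of levels 1-5 covers a 20% band: (0, 0.2], (0.2, 0.4], ...
    return min(5, int(mastery * 5 - 1e-9) + 1)

assert mastery_level(0.0) == 0
assert mastery_level(0.35) == 2   # falls in the (0.2, 0.4] band
assert mastery_level(1.0) == 5
print(LEVEL_COLOURS[mastery_level(0.73)])  # -> "light green"
```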
Mastery colours after xLM evaluation
Formative Evaluation of OLM
Questions regarding the LM:
- Do learners understand what the knowledge represents?
- Is there a perceived benefit of the mastery?
- Do learners believe the LM?
- What would they want LeAM to use this knowledge for?
Questions regarding the OLM:
- Can learners understand the OLM?
- Would the learner use it in the same way they would use a tutor?
- What would they use it for?
- Is there a perceived benefit of having the OLM?
Method and Participants
Method: collaborative evaluation
- Pre-use questionnaire
- Task-based evaluation:
  - Structured hints
  - Think aloud
  - Critique
  - Context-based Q&A
- Post-use questionnaire
Participants: 7 M / 3 F; 5 took part in the previous study (= 'expert')
- Studying calculus for 2.6 years on average
- Quite confident with calculus (3.5/5)
- Use a computer at least daily
- Very familiar with the web (4.6/5)
- Average familiarity with applets (2.7/5)
- 80% have used maths software before
= the university-level target group
1. Do learners understand what the knowledge represents?
- Most learners thought the mastery colours were just an indicator of completion or success
- Insufficient levels: 4 → 6 levels
- Tooltip not obvious
- Don't know what the % means
- Initially confused by propagation; deduced the conceptual links after exploring the content (see the sketch below)
- These links are not indicated anywhere on the main interface
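The propagation that confused learners is mastery evidence flowing along conceptual links between topics, so that working on one topic also nudges related topics. A minimal sketch of the idea, assuming a simple weighted, acyclic link graph; the topics, weights, and update rule here are illustrative, not LeActiveMath's actual learner-model algorithm.

```python
# Illustrative propagation of exercise evidence over conceptual
# links: an update to one topic is passed on to linked topics with
# reduced strength. Assumes the link graph is acyclic.
mastery = {"average_slope": 0.3, "slope": 0.4, "derivative": 0.5}

# directed links: evidence on the key topic also bears on the targets
links = {"average_slope": [("slope", 0.5)],
         "slope": [("derivative", 0.3)]}

def record_evidence(topic: str, score: float, rate: float = 0.5) -> None:
    """Move a topic's mastery toward an observed score in [0, 1],
    then propagate a weakened update along outgoing links."""
    mastery[topic] += rate * (score - mastery[topic])
    for linked, weight in links.get(topic, []):
        record_evidence(linked, score, rate * weight)

record_evidence("average_slope", 1.0)  # a correct exercise answer
print(mastery)  # linked topics rise too, by smaller amounts
```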
2. Is there a perceived benefit of the mastery?
[Chart: perceived benefit ratings, novice vs. expert]
- Most learners believe the mastery is "quite" beneficial
- Almost as beneficial as they had expected it to be in an ITS
- Experts are more conservative, but still positive
3. Do learners believe the LM?
[Chart: accuracy ratings, novice vs. expert]
- Learners don't expect an ITS to be accurate
- LeAM is rated as more accurate
- Experts are less trusting
- Novices think the beliefs are as accurate as a tutor's after a challenge
- Experts don't
4. What would they want LeAM to use this knowledge for?
[Chart: ratings for an ITS vs. LeAM, items 1-6]
1. Suggest exercises
2. Direct to content
3. Report knowledge to the teacher
4. Set tests
5. Provide revision aids
6. Block access to content
5. Can learners understand the OLM?
Ease of use = medium (Novice: 2.6, Expert: 2.7)
Usefulness: Novice = quite useful (4.2); Expert = medium (2.7)
Rated as:
                Enjoyable   Better than existing software
  Without OLM   4.09        3.90
  With OLM      4.20        4.25 (= a slight increase)
Observations:
- Learners could not start using the system without guidance
- Help not provided
- Did not understand the descriptors (2.88), e.g. [average_slope,,solve,,,] (see the sketch below)
- Did not know what they were asking
- General usability issues
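Part of the problem with descriptors such as [average_slope,,solve,,,] is that they expose raw positional fields, empty slots included. Here is a sketch of rendering one as readable text; the field names below are hypothetical, chosen for illustration, since the slides do not give the actual descriptor schema.

```python
# Hypothetical pretty-printer for raw positional descriptors like
# "[average_slope,,solve,,,]". The FIELD_NAMES are assumptions made
# for illustration; the real schema may differ.
FIELD_NAMES = ["topic", "sub-topic", "competency",
               "context", "motivation", "metacognition"]

def describe(raw: str) -> str:
    fields = raw.strip("[]").split(",")
    parts = [f"{name}: {value.replace('_', ' ')}"
             for name, value in zip(FIELD_NAMES, fields) if value]
    return "; ".join(parts) or "(empty descriptor)"

print(describe("[average_slope,,solve,,,]"))
# -> "topic: average slope; competency: solve"
```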
Evaluated Descriptor View
- Not intuitive how to use (Novice: 2.2, Expert: 3)
- Quite useful (Novice: 4, Expert: 3.6)
- Some use the dialogue, some don't (3.7 useful)
- Would prefer a better dialogue (4.38)
Descriptor View - Improved 1
Other improvements: examples
Displaying different data types
Evaluated Toulmin View
- Quite intuitive to use (Novice: 3.4, Expert: 4.2)
- Very useful (4.6)
- Observations:
  - Like the graph
  - Can understand it once explored
  - Would benefit from help
  - Primarily use the graph
  - Dialogue acts are confusing, e.g. “I'm Baffled” (the underlying claim/data/warrant structure is sketched below)
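For reference, the Toulmin view presents each learner-model belief as a Toulmin argument: a claim supported by data via a warrant. A minimal sketch of that structure follows; the class layout and example strings are illustrative, not the system's internal representation.

```python
# Minimal sketch of a Toulmin-style argument node: the claim shown
# to the learner, the evidence (data) behind it, and the warrant
# linking the two. Wording and layout are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ToulminNode:
    claim: str                                # the belief itself
    data: list = field(default_factory=list)  # evidence items
    warrant: str = ""                         # why data -> claim

belief = ToulminNode(
    claim="Mastery of 'differentiate' is at level 2",
    data=["Exercise 3.1: incorrect", "Exercise 3.4: correct with a hint"],
    warrant="Recent exercise outcomes, weighted by difficulty",
)
print(belief.claim)
for item in belief.data:
    print("  evidence:", item)
print("  warrant:", belief.warrant)
```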
Revised Toulmin Map 1
Toulmin View: further examples
Likely user level
Dynamic partitioning of evidence nodes
Topic Map
- Quite intuitive (Novice: 3.4, Expert: 4.2)
- VERY useful (4.6)
- Comments:
  - A great representation of conceptual links
  - A great revision aid
  - Should be the main descriptor interface
Clearer introduction to OLM
How would they expect to use it?
[Chart: novice vs. expert ratings]
- Quite intuitive (Novice: 3.6, Expert: 4.4)
- Medium usefulness (Novice: 3.6, Expert: 2.8)
- Expect more negotiation
What would they use it for?
[Chart: practicality ratings with vs. without the OLM; ratings of 3.6, 4.0, 4.2, 4.3, 4.5, and 4.6 across the six uses]
1. Learning a maths concept
2. Group tutorials
3. Solitary tutorials
4. Tutorials via the web
5. Supplement to a book
6. Revision
Conclusions of Formative Evaluation
Learners do not perceive a separation between the LM and the OLM, so the OLM proves to be critical.
Open learner models are perceived by learners to be useful, and learners enjoy using them.
Learners want to use the OLM for individual study and revision.
Learners like being able to interrogate the beliefs, but changing them should require negotiation (see the sketch below).
The OLM provides a means to explore gaps in learner knowledge.
The interface was unintuitive; it has been improved in the revised OLM dialogue.
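A negotiated change, as opposed to direct editing, can be sketched as follows: the learner challenges a belief, and the system revises it only on fresh evidence such as a short test. The flow below is an illustration of that idea, not the implemented LeAM dialogue.

```python
# Illustrative negotiation step: a disputed belief is revised only
# after new evidence (a test score in [0, 1]) arbitrates, rather
# than being set directly to the learner's claim.
def negotiate(belief: float, learner_claim: float,
              test_score: float, rate: float = 0.5) -> float:
    """Return the revised belief after a challenge."""
    if abs(learner_claim - belief) < 0.1:
        return belief                  # no substantive disagreement
    # the test result, not the claim itself, moves the belief
    return belief + rate * (test_score - belief)

# learner claims 0.8 against a belief of 0.3; a test scores 0.7
print(negotiate(belief=0.3, learner_claim=0.8, test_score=0.7))  # 0.5
```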
References
Cohen, P. (1995). Empirical Methods for Artificial Intelligence. MIT Press.
Conlon, T. and Pain, H. (1996). Persistent collaboration: a methodology for applied AIED. Journal of Artificial Intelligence in Education, 7, 219-252.
Conlon, T. (1999). Alternatives to rules for knowledge-based modelling. Instructional Science, 27(6), 403-430.
Corbett, A. T. and Anderson, J. R. (1990). The effect of feedback control on learning to program with the Lisp tutor. In Proceedings of the 12th Annual Conference of the Cognitive Science Society. LEA, New Jersey.
Dix, A., Finlay, J., Abowd, G. and Beale, R. (2004). Human-Computer Interaction. Prentice Hall.
Luger, G. F. and Stubblefield, W. A. (1989). Artificial Intelligence and the Design of Expert Systems. Benjamin/Cummings.
Mark, M. A. and Greer, J. E. (1993). Evaluation methodologies for intelligent tutoring systems. Journal of Artificial Intelligence in Education, 4, 129-153.
Shute, V. J. and Regian, W. (1993). Principles for evaluating intelligent tutoring systems. Journal of Artificial Intelligence in Education, 4(2/3), 243-271.
Squires, D. and Preece, J. (1999). Predicting quality in educational software: evaluating for learning, usability and the synergy between them. Interacting with Computers, 11(5), 467-483.