Download Barnes Slides - Personal Web Pages

Bayesian Nets in Student
Modeling
ITS, Sept 30, 2004
Sources of Uncertainty
• Incomplete and/or incorrect knowledge
• Slips and/or guesses
• Multiple derivations
• Invisible inferences
• Not showing all work
• Help messages
• Self-explaining ahead
Andes student model
• Knowledge tracing
• Plan recognition
• First to use the student’s domain knowledge
• Action prediction
• Andes was the first to support all three
Goals of Andes
• Students work as much as possible on their own
• React to a student’s incorrect action: signal the error, explain
• React to a student’s impasse: provide procedural help
• Ensure the student understands examples: prompt self-explanation
Types of help
• Error help
• Procedural help (ask for hints)
• Unsolicited help (for non-physics errors)
• Different levels of hints until the “bottom-out hint”
Usage of student model
• Plan recognition: recognize and support goals (requires prediction)
• Assess knowledge: tailor help presentation (reminder vs. minilesson)
• Assess mastery level: prompt self-explanation or not
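The mastery-based branching above can be sketched as a simple threshold rule. This is an illustrative sketch, not the actual Andes decision procedure; the function name and threshold value are assumptions.

```python
def choose_help(p_mastery, threshold=0.7):
    # Hypothetical thresholding on assessed mastery: a student judged
    # likely to know the rule gets a brief reminder; a student judged
    # unlikely to know it gets a full minilesson.
    return "reminder" if p_mastery >= threshold else "minilesson"

print(choose_help(0.9))  # reminder
print(choose_help(0.3))  # minilesson
```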
Self-Explaining Coach
• Step correctness (domain)
• Rule Browser
• E.g.: using force or acceleration
• Step utility (role in solution plan)
• Plan Browser
• Recognize goals
Bayesian network
• Solution graph: map of all solutions
with no variables (propositional)
Types of nodes
• Domain-general: rules
• 2 values indicating mastery
• Task-specific:
• facts, goals, rule apps, strategy nodes
• Doable (already done, or student knows all that is needed)
• Not-doable
Knowledge evolution
• Dynamic Bayesian network
• Analyze each exercise alone
• Roll-up: priors set to the marginal probabilities from the previous network
• Improvements: could model dependencies & knowledge decay
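The roll-up step can be sketched as follows (function and rule names are illustrative): each exercise is analyzed in its own network, and the posterior marginals of the domain-general rule nodes become the priors for the next exercise's network.

```python
def roll_up(rule_priors, posterior_marginals):
    # After an exercise, replace each rule's prior P(mastered) with its
    # posterior marginal from that exercise's network; rules the
    # exercise did not touch keep their old prior. (Illustrative sketch.)
    return {rule: posterior_marginals.get(rule, prior)
            for rule, prior in rule_priors.items()}

priors = {"newtons-second-law": 0.5, "choose-body": 0.5}
priors = roll_up(priors, {"newtons-second-law": 0.8})
print(priors)  # {'newtons-second-law': 0.8, 'choose-body': 0.5}
```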
Intention or ability?
• Probability that the student can and IS implementing a certain goal
• Decision-theoretic tutor keeps probabilities of the “focus of attention”
Problem creation
• Givens
• Goals
• Problem solver applies rules, generating subgoals until done
• Solution graph created
Andes assessor
• Dynamic belief network for domain-general nodes
• Rules - priors set by test scores
• Context-Rules
• P(CR=true | R=true) = 1
• P(CR=true | R=false) = difficulty
• If one context changes, adjust the rest
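The Context-Rule conditional probability table above can be written out directly. This is a sketch; `difficulty` is the context-specific chance that a student who has not mastered the rule still applies it correctly in this context.

```python
def context_rule_cpt(difficulty):
    # P(CR=true | R): certain if the underlying rule R is mastered,
    # and equal to the context's "difficulty" parameter if it is not.
    return {True: 1.0, False: difficulty}

cpt = context_rule_cpt(0.3)
print(cpt[True], cpt[False])  # 1.0 0.3
```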
Task-specific nodes
• Fact, goal, rule application, strategy
• Context-Rule nodes link task-specific
to domain-general rules
Fact & Goal Nodes
• A.k.a. Propositional Nodes
• One parent for each way to derive the proposition
• Leaky-OR: true if at least one parent is true; also sometimes true if none are
• Reasons for the leak: guessing, analogy, etc.
Rule-Application Nodes
• Connect Context-Rule, Strategy & Proposition nodes to newly derived Proposition nodes
• Doable or not-doable
• Parents: one Context-Rule, precondition Propositions, sometimes one Strategy node
• Noisy-AND: true if ALL parents are true, but sometimes fails (true with probability 1-alpha)
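The noisy-AND combination can be sketched the same way (the alpha value is illustrative): the rule application is doable only when every parent holds, and even then it can fail (a slip) with probability alpha.

```python
def noisy_and(parents, alpha=0.1):
    # P(node=true | parents): 1 - alpha if ALL parents (Context-Rule,
    # precondition Propositions, optional Strategy node) are true,
    # else 0.
    return 1.0 - alpha if all(parents) else 0.0

print(noisy_and([True, True]))   # 0.9
print(noisy_and([True, False]))  # 0.0
```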
Strategy Nodes
• Used when there is more than one way to reach a Goal
• Paired with a Goal Node
• Values are mutually exclusive
• No parents in the network
• Priors = frequency with which students use each strategy
Compare Figures
• Figure 9 before observing A-is-body
• Figure 10 after observing A-is-body
Hints
• Add a new parent to a Prop node
• Accounts for guessing
SE-Coach
• Adds nodes for Read events
• Links these to Proposition nodes
• Longer read time means higher probability the student knows the Proposition (p. 26)
• Adds nodes for plan selection
• Links these to Context-Rules
• Rule-Application node probably true if student knows the Context-Rule & all preconditions (Noisy-AND)
Evaluation
• Simulated students: 65% correct for rule mastery
• 95% if no “invisible inferences” and students must “show all work”
• Post-test shows significant learning
• Voluntary acceptance?
• Accuracy of plan recognition?