Using Multimodal Learning Analytics to Study Learning
Mechanisms in Hands-on Environments
Marcelo Worsley
Stanford University
Graduate School of Education
[email protected]

Paulo Blikstein
Stanford University
Graduate School of Education
[email protected]
ABSTRACT
In this paper, we propose multimodal learning analytics as a new
approach for studying the intricacies of different learning
mechanisms. More specifically, we conduct two analyses of a
hands-on, engineering design study (N=20) in which students
received different treatments. In the first analysis, we used
machine learning to analyze hand-labeled video data. The findings
of this analysis suggest that one of the treatments resulted in
students initially engaging in more planning, while the other
resulted in students initially engaging in more building. In
accordance with prior literature, beginning with dedicated
planning tends to be associated with improved success and
improved learning. In the second analysis we introduce a
completely automated multimodal analysis of speech, actions and
stress. This automated analysis uses multimodal states to show
that students in the two conditions engaged in different amounts
of speech and building during the second half of the activity.
These findings mirror prior work on teamwork, expertise and
engineering education. They also represent two novel approaches
for studying complex, non-computer mediated learning
environments and provide new ways to understand learning.
Keywords
Learning Sciences, Qualitative, Computational, Constructionism
1. INTRODUCTION
Despite the many years that humans have studied learning and
human cognition, there are still many unanswered questions about
how people learn. This has partially been the result of limitations
in the ways that we are able to study learners. More specifically, a
large portion of prior research was limited by a tradeoff between
the types of learning environments that could be studied, and the
scale at which a given phenomenon could be analyzed.
However, as the tools of educational data mining and learning
analytics continue to advance, we are beginning to dismantle this
tradeoff. We are now able to analyze a far greater variety of
learning environments and at unprecedented scales. In this study,
in order to keep the analysis verifiable, we do not yet venture to
tackle big data as it relates to a large number of participants.
Instead, we tackle the big data question as it relates to analyzing
extremely high frequency data, from several data streams. We use
multimodal learning analytic [1, 2] techniques to study speech,
gesture, and electrodermal activation among pairs of students as
they complete a hands-on engineering design task.
The context for this paper is an extension of our prior work [3],
where we present two different approaches that students use in
engineering design: example-based reasoning – using personal
examples from the real-world as an entry point into solving a task;
and principle-based reasoning – using engineering fundamentals
as the basis for one’s design. These two reasoning strategies
complement prior work on learning by analogy [4], expertise [5,
6] and forward-backward reasoning [7]. In [3] we describe
example-based reasoning and principle-based reasoning in
qualitative terms, and then proceed to use these two approaches in
a controlled study (N=20) that compares how each approach
impacts learning gains and performance during a collaborative
hands-on activity. In that study we found that principle-based
reasoning improves the quality of designs (p < 0.05) as well as the
learning of important engineering principles (p < 0.002). The goal
of this paper is to expound upon why these differences may have
arisen between the two conditions. As such, we employ
multimodal learning analytic techniques as a way to
systematically study how example- and principle-based reasoning
are associated with different multimodal behaviors as observed in
each student's process.
2. METHODS
In this paper, we briefly present results from two complementary
analyses of example- and principle-based reasoning. The overall
approach closely mirrors our previous work [8, 9] on analyzing
design strategies and success in hands-on engineering tasks.
Specifically, in the first analysis we manually annotate the
students’ actions, and segment the data based on when they
explicitly evaluate their structure. The proportions of actions in
the different segments are used to find representative clusters,
which are subsequently used to re-label each user’s sequence of
segments. Finally, we compare sequences across participants.
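The first-analysis pipeline described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the action labels, the toy event streams, and the use of scikit-learn's KMeans (standing in for X-means, which scikit-learn does not provide) are all assumptions.

```python
# Sketch of the first analysis: split hand-labeled action streams into
# segments ending at each explicit structure evaluation, compute
# per-segment action proportions, cluster the proportion vectors, and
# re-label each dyad's timeline as a sequence of cluster ids.
# Action labels and data are illustrative; KMeans stands in for X-means.
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

ACTIONS = ["plan", "build", "adjust", "talk"]

def segment_on_evaluation(events):
    """Split time-ordered (action, is_evaluation) tuples into segments
    that end whenever the students evaluate their structure."""
    segments, current = [], []
    for action, is_eval in events:
        current.append(action)
        if is_eval:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

def action_proportions(segment):
    """Fraction of the segment spent on each action, in ACTIONS order."""
    counts = Counter(segment)
    return [counts.get(a, 0) / len(segment) for a in ACTIONS]

# Illustrative hand-labeled streams for two dyads.
dyads = {
    "dyad_1": [("plan", False), ("plan", False), ("build", True),
               ("build", False), ("adjust", True)],
    "dyad_2": [("build", False), ("build", False), ("talk", True),
               ("adjust", False), ("build", True)],
}

vectors, owners = [], []
for name, events in dyads.items():
    for seg in segment_on_evaluation(events):
        vectors.append(action_proportions(seg))
        owners.append(name)

# Cluster segment-level proportion vectors into representative states.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(np.array(vectors))

# Re-label each dyad's timeline as a sequence of cluster ids,
# ready for cross-participant sequence comparison.
sequences = {}
for name, label in zip(owners, km.labels_):
    sequences.setdefault(name, []).append(int(label))
print(sequences)
```

The resulting cluster-id sequences are what the cross-participant comparison would operate on.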
In the second analysis we again use clustering to reduce the set of
multimodal states from several hundred, down to four. However,
it differs from analysis 1 in that all of the data is automatically
derived from speech, gesture and skin conductance data.
Additionally, instead of segmenting the data when students
evaluate their structure, we use fixed 30-second time windows.
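The second, fully automated pipeline can be sketched as follows. The feature extraction (per-window means of synchronized speech, gesture, and skin-conductance streams) and the synthetic data are assumptions for illustration, and KMeans again stands in for the clustering step.

```python
# Sketch of the second analysis: slice synchronized speech, gesture,
# and skin-conductance streams into fixed 30-second windows, then
# cluster the window-level feature vectors into four multimodal states.
# Feature extraction details and data are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

WINDOW = 30.0  # seconds

def windowed_features(speech, gesture, eda, duration):
    """Each stream is an array of (timestamp, value) rows; returns one
    mean-value feature vector per 30-second window."""
    edges = np.arange(0.0, duration + WINDOW, WINDOW)
    feats = []
    for start, end in zip(edges[:-1], edges[1:]):
        row = []
        for stream in (speech, gesture, eda):
            t, v = stream[:, 0], stream[:, 1]
            mask = (t >= start) & (t < end)
            row.append(v[mask].mean() if mask.any() else 0.0)
        feats.append(row)
    return np.array(feats)

# Illustrative synthetic (timestamp, value) streams over a 5-minute task.
rng = np.random.default_rng(0)
def fake_stream(rate_hz, duration=300.0):
    t = np.arange(0.0, duration, 1.0 / rate_hz)
    return np.column_stack([t, rng.random(t.size)])

X = windowed_features(fake_stream(2.0), fake_stream(10.0),
                      fake_stream(4.0), duration=300.0)

# Standardize features, then reduce the windows to four multimodal states.
states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
print(states)  # one state label per 30-second window
```

Standardizing before clustering keeps the skin-conductance channel, whose raw units differ from the speech and gesture features, from dominating the distance metric.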
3. RESULTS
The results from the first analysis, which combined qualitative
coding with X-means clustering, demonstrated that students in the
principle-based condition were more likely to start the task by
planning (see PREPARE in Figure 1). Planning has been
associated with increased success in several domains
[10, 11, 12, 13, 14]. In contrast, the example-based condition was
typified by students who immediately began to build their projects
and overlooked the importance of thinking about and planning
their structure (see IMPLEMENT in Figure 1). Furthermore, the
first analysis also found that success correlates with students
beginning with planning. Hence, the principle-based condition
was associated with increased planning, which may have
facilitated their improved performance.
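A check of this kind of association can be sketched as follows. The scores and start-state labels are entirely illustrative, and since the paper does not specify its statistical test here, a Mann-Whitney U test is used as one reasonable nonparametric choice.

```python
# Sketch of testing whether dyads that began the task in a planning
# state achieved higher success scores. All data below is illustrative;
# the Mann-Whitney U test is an assumed, not reported, choice.
from scipy.stats import mannwhitneyu

# 1 = dyad began in the PREPARE (planning) cluster, 0 = began by building.
started_planning = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
success_score = [8.5, 7.0, 9.0, 4.0, 5.5, 3.0, 6.5, 4.5, 8.0, 5.0]

planners = [s for s, p in zip(success_score, started_planning) if p]
builders = [s for s, p in zip(success_score, started_planning) if not p]

# One-sided test: do planners score higher than builders?
stat, p_value = mannwhitneyu(planners, builders, alternative="greater")
print(f"n_planners={len(planners)}, n_builders={len(builders)}, "
      f"p={p_value:.3f}")
```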
[Figure 1: bar chart comparing the scaled frequency of cluster use (PREPARE, IMPLEMENT, ADJUST, SIGNIFICANTLY ADJUST) across the example-based and principle-based conditions.]
Figure 1 - Scaled Frequency of Cluster Use by Condition - the
y-axis is the count of times used, and the x-axis is the different
clusters, or states of user actions, as derived from clustering
In the fully automated multimodal analysis, our initial results
suggest that students in the example-based condition are much
more likely to transition towards an increase in speech during the
latter half of the activity. This is in contrast to the principle-based
group, which shows no significant changes in speech, gesture or
stress over the course of the activity. In our ongoing work we are
looking to better understand the nature of the multimodal
interactions and what caused the students in the example-based
condition to engage in significantly more dialogue. We have
several initial hypotheses that we will describe in future work. For
example, an initial analysis of student speech during the
intervention phase of the experiment found significant differences
between the two conditions. Namely, students in the
principle-based condition generated more speech during the
intervention phase than students in the example-based condition.
This may have helped the students be better prepared for the
activity, and allowed them to circumvent the talking observed in
the latter half of the experiment for the example-based condition.
However, additional analysis is required to determine a link
between the speech during these two phases of the experiment.
4. CONCLUSION
Taken in concert, these two analyses provided initial explanations
concerning why principle-based reasoning produced higher
quality designs and greater learning gains than example-based
reasoning. Based on the analysis of hand-labeled, process-oriented
data, in conjunction with machine learning, we were able to show
how students in the principle-based reasoning condition were
more likely to begin the task with planning. In contrast, students
in the example-based condition were more likely to start by
building. These findings aligned with previous observations made
in a number of disciplines. In the second analysis, we used a
completely automated multimodal algorithm to construct
generalizable multimodal states and found that students in the
principle-based condition had less variation in their speech,
gesture and skin conductance over the course of the activity. This
difference was particularly noticeable during the second half of
the activity. Both of these findings seem to point to students being
better prepared after participating in the principle-based reasoning
intervention. Thus, we have shown that in addition to producing
differences in learning and success, the two conditions resulted in
different processes. This is important because it provides
researchers with a more fine-grained representation of how the
two treatments differed. Examining the underlying mechanics of
different treatments provides educators and designers with a more
complete set of strategies to adopt and utilize in their teaching and
designing. To this end, beyond simply saying that the conditions
are different, multimodal learning analytics provides us with a
tool that explains how they are different, and, in so doing, starts to
answer questions about why they differ. That said, there remain
a number of important questions and opportunities in studying the
mechanics of successful learning interventions. In our ongoing
research, we intend to examine the findings reported in this paper
more closely and to investigate additional hypotheses that could
explain the noted differences in student outcomes.
5. REFERENCES
[1] Worsley, M. 2012. Multimodal Learning Analytics: Enabling
the Future of Learning through Multimodal Data Analysis
and Interfaces. In Proceedings of the 14th ACM international
conference on Multimodal interaction (ICMI '12). ACM,
New York, NY, USA. 353-356.
[2] Blikstein, P. 2013. Multimodal Learning Analytics. In
Proceedings of the 3rd Annual Learning Analytics and
Knowledge Conference. Leuven, Belgium.
[3] Worsley, M. and Blikstein, P. in press. The Impact of
Principle-Based Reasoning on Hands-on, Project-Based
Learning. In Proceedings of the 2014 International
Conference of the Learning Sciences (ICLS).
[4] Gentner, D. and Holyoak, K. J. 1997. Reasoning and
Learning by Analogy. American Psychologist, 52(1), 32-34.
[5] Chi, M. T. H., Glaser, R., and Rees, E. 1981. Expertise in
problem solving.
[6] Nokes T J, Schunn C D and Chi M. 2010, Problem Solving
and Human Expertise. In: Penelope Peterson, Eva Baker,
Barry McGaw, (Editors), International Encyclopedia of
Education. volume 5, pp. 265-272. Oxford: Elsevier.
[7] Ahmed, S., & Wallace, K. M. 2003. Understanding the
differences between how novice and experienced designers
approach design tasks. 14, 1-11. doi:10.1007/s00163-002-0023-z
[8] Worsley, M. and Blikstein, P. 2013. Toward the
Development of Multimodal Action Based Assessment. In
Proceedings of the Third International Conference on
Learning Analytics and Knowledge (LAK '13). ACM, New
York, NY, USA, 94-101
[9] Worsley, M. and Blikstein, P. accepted. Analyzing
Engineering Design through the Lens of Computation.
Journal of Learning Analytics.
[10] Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.).
2000. How People Learn (pp. 285-348). Washington, DC:
National Academy Press.
[11] Schwartz, D. L., Bransford, J. D., & Sears, D. 2005.
Efficiency and innovation in transfer. Transfer of learning
from a modern multidisciplinary perspective, 1-51.
[12] Hatano, G., & Inagaki, K. 1986. Two courses of expertise.
[13] Cross, N., & Cross, A. C. 1998. Expertise in Engineering
Design 2. An Outstanding Designer, 141–149.
[14] Atman, C. J., Cardella, M. E., Turns, J., & Adams, R. 2005.
Comparing freshman and senior engineering design
processes: an in-depth follow-up study. Design studies,
26(4), 325-357.
Proceedings of the 7th International Conference on Educational Data Mining