Gernoth Grunst (St. Augustin, Germany)
Enabling Systems in Medicine
Summary
The essay introduces the training modules
EchoExplorer and Cardio-Sim that provide an enabling system for echocardiography. The environment consists of practical sequential training units that build up
a course. There are various forms of interactivity. A common feature of all modules
is the combined use of digitized 2D or 3D
ultrasound images with related views of a
Virtual Reality model showing a beating
heart. The exploratory scenarios allow
forms of constructive learning that are to a
large extent self-controlled.
1 Objectives
The EchoExplorer [9] / Cardio-Sim [8] training environment for echocardiography is an illustration of a medical enabling system. Its objective is to support the acquisition of complex medical expertise. The example domain, echocardiography, requires the integrated application of a broad spectrum of cognitive and practical abilities. The range covers diagnostic and anatomical knowledge, visual perception, and controlled handling of the ultrasound device. Multiple contents therefore have to be made accessible in intuitive and related scenarios. The learning environment shall allow the active exploration of meaningful scenarios that present these contents mostly in non-symbolic form. One research focus of the development has been to determine appropriate combinations of near-realistic Virtual Reality models and medical image data that provide self-explaining Augmented Reality scenarios [1, 12].
2 Methodological approach
The developments started with field studies of expert / novice behavior in diagnosis and instruction. The studies applied techniques of immediate and video-based interaction analysis [7]. The analyses allowed insights into pertinent Mental Models of skilled cardiologists as well as missing conceptions or typical misconceptions on the novice side. The details could be derived from verbal utterances as well as from behavior patterns. The main advantage of this approach has been that we were able to identify relevant elements of expert knowledge and skill embedded in the context of their application.
Such situations provide the optimal scenario for the acquisition of realistic expertise. On the other hand, this model of apprenticeship learning is usually incompatible with clinical reality, especially in the initial stages of learning. Therefore the analyses tried to identify important factors of insight-based learning and to specify potential functional equivalents for apprenticeship learning. The main strategy of transfer was to organize interactive scenarios that allow constructive learning [2]. The role of a critical teacher in these learning scenarios should be reflected through various forms of (mainly visual) feedback. In the sense of Donald Schön [11] the idea was that the whole situation should “talk back” to the learner.
Technically, the findings are mapped onto combinations of interactive VR models and 3D image data. The VR elements are intended to impart the identified expert Mental Models of the dynamic anatomy and of the way the heart is accessed via echocardiography. The related films and 3D data sets represent the reality of actual patients as it is documented via echocardiography. The explorable scenes shall allow autonomous inquiries and induce constructive learning [2, 13].
In formative and summative evaluation cycles the adequacy of the analyses and of the technical transfer has been verified [4, 8, 12]. For example, the anatomical detail of the VR models was redesigned after prototype tests with experts from different medical domains.
3 The training environment
The development of the various enabling units started about eight years ago as a cooperation between the German National Research Center for Information Technology (GMD) and Prof. Dr. D.A. Redel (Centre for Pediatrics of the University of Bonn, Department of Cardiology). First attempts to map the complex spatial logic of ultrasound onto simpler multimedia visualizations [3] showed that interactive 3D graphics were indispensable for designing flexible visual explanations of the various standard views in echocardiography. This soon led to the design of a static 3D reference model. Dynamic features and the various forms of interactivity were added after the first prototypes had been used in realistic training sessions and the need for some form of dynamic anatomy became evident. The resulting interactive VR model could then be used in both training modules (the CD-ROM based course EchoExplorer and the simulator Cardio-Sim).
3.1 EchoExplorer
EchoExplorer is an interactive course that introduces students and cardiological novices to echocardiography. The CD-ROM contains three main modules (dynamic anatomy, examinations in echocardiography, and interactive echocardiography). The training module furthermore features a self-assessment component, the “Quiz”.
3.1.1 Dynamic anatomy
The first module of EchoExplorer (dynamic anatomy) displays the beating heart
model as an explorable object. Several visualization modes can be chosen (surface
view, transparent view, highlighted septum, transparent with blood flow).
Fig. 1: The dynamic anatomy heart-model in transparent mode
The learner can rotate the different versions of the 3D model animation with the mouse and thus acquire flexible conceptions of the inner and outer heart anatomy. The pertinent visual cues have to become automated match patterns that are later used in the application of echocardiography. In a further form of interaction with this scene the learner can manually steer the animation of the heart cycle with respect to the ECG curve. This way the timing of ventricular contraction and of the opening of the valves can be explored in their relationship to the ECG.
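The coupling of the animation to the ECG can be thought of as a simple phase mapping from a point on the displayed ECG trace to a frame of the heart-cycle animation. The following Python sketch illustrates this idea only; the frame count, cycle length, and function names are assumptions, not details of EchoExplorer.

```python
# Illustrative sketch: map a position on the displayed ECG trace to the
# corresponding frame of the beating-heart animation. The frame count and
# cycle length are hypothetical, not taken from EchoExplorer.

N_FRAMES = 50          # frames in one animated heart cycle (assumption)
CYCLE_SECONDS = 1.0    # duration of one heart cycle shown with the ECG (assumption)

def ecg_time_to_frame(t_seconds: float) -> int:
    """Return the animation frame that corresponds to a time point on the ECG."""
    phase = (t_seconds % CYCLE_SECONDS) / CYCLE_SECONDS   # 0.0 .. <1.0 within the cycle
    return int(phase * N_FRAMES)

# Example: scrubbing to 0.35 s after the R wave selects a frame during
# early systole (ventricular contraction).
print(ecg_time_to_frame(0.35))
```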
3.1.2 Examinations in echocardiography
In the next stage the various standard views in echocardiography are introduced and related to the heart model. The learner can call up verbally commented animations showing the appropriate positioning of the transducer and the fine adjustment of these views.
Fig. 2: The echocardiography module for standard positions, thorax view
The animation starts as a demonstration of the correct transducer position with respect to the pertinent intercostal space. The reference object is a Virtual Reality model of the thorax (bone structure). The imaging segment is visualized as a transparent triangle that slices the heart. The animation continues as a “dive” into the inner thorax, showing the learner in detail where the transparent heart is cut by the ultrasound segment. Then the scene is rotated such that the transducer head moves to the middle position at the top of the window. This reflects the transducer-related imaging logic of real ultrasound and can be used as a module for the training of pertinent mental rotations [6]. The heart model is then reduced to the structures that are cut by the ultrasound segment. The resulting substructure of the beating heart model is a graphic match pattern for the actual ultrasound image in this position. The animation then blends into an exactly fitting example video of a real ultrasound film.
Fig. 3: levels of visual explanation: heart model with plane, slice, 2D US image
Fig. 4: the US image level with highlighted substructures
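Reducing the heart model to the structures cut by the imaging segment amounts, in geometric terms, to a plane/geometry intersection test. The sketch below illustrates this idea under strong simplifications (invented vertex data, no restriction to the fan-shaped imaging sector); it is not the EchoExplorer implementation.

```python
import numpy as np

# Illustrative sketch: decide which substructures of the heart model are cut
# by the ultrasound plane, so only those are kept in the sliced view.
# Vertex lists and the plane definition are invented for the example; the
# real module additionally limits the cut to the fan-shaped imaging sector.

def is_cut_by_plane(vertices, plane_point, plane_normal):
    """A structure is cut if its vertices lie on both sides of the plane."""
    signed = (np.asarray(vertices) - plane_point) @ plane_normal
    return signed.min() < 0.0 < signed.max()

structures = {
    "left_ventricle": np.array([[0.0, -1.0, 0.0], [0.0, 1.0, 0.0], [1.0, 0.0, 0.5]]),
    "right_atrium":   np.array([[3.0, 2.0, 0.0], [3.5, 2.5, 0.2], [4.0, 2.0, 0.1]]),
}
plane_point  = np.array([0.0, 0.0, 0.0])   # a point on the imaging plane
plane_normal = np.array([0.0, 1.0, 0.0])   # plane normal of the chosen view

visible = [name for name, verts in structures.items()
           if is_cut_by_plane(verts, plane_point, plane_normal)]
print(visible)   # -> ['left_ventricle']
```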
Further descriptions are related to substructures of the heart that can be seen in the
actually covered region. In order to keep the perception focus on relevant structural
and dynamic features the explanations apply a combination of graphical highlighting
and verbal comments. The positioning process is thus decomposed into a kind of two-step approach. First the transducer is roughly placed at pertinent positioning points. This action is guided by external anatomical landmarks of the chest and controlled by general, position-dependent gestalts in the ultrasound image, such as the typical pattern of a four-chamber view. Then the fine-tuning is based on control patterns for relevant substructures like the papillary muscles or the valves.
The standard positions are related to certain subregions of the heart and to the necessary examinations of anatomical structures that can be accessed there via ultrasound. The selection of these areas is determined by the diagnostic task. But the choice of alternative views also has to reflect the ultrasound imaging features of the patient. For adipose patients, for example, some standard views do not produce useful images. Therefore cardiologists have to develop flexible strategies of adjustment, i.e. intuitively switch to alternative standard positions that produce better image results for a specific patient. This ability is based on a graphic understanding of the heart and on flexible conceptions of the ultrasound examination process.
The module offers only restricted forms of interactivity. It merely allows the selection, play, stop, and replay of animation sequences for particular standard positions. The rationale is to provide the learner with an introductory overview.
3.1.3 Interactive echocardiography
The module “Interactive Echocardiography” lets the user explore the learning scenario more actively. There is a choice between sets of ultrasound examples that represent healthy and pathological cases. For each case there are examples of standard views in the various ultrasound modes such as 2D, M-mode, and qualitative and quantitative Doppler. The interface presents the ultrasound films in one window with related graphical explanations in an adjacent frame. The illustrations support the interpretation of the ultrasound films through virtual models on three levels. They are associated with three conceptual points of view.
The first level (thorax) shows the scenario from the outside. This Virtual Reality
scene includes the transducer in relation to the appropriate intercostal space of the
thorax and thus links the conception of the ultrasound image to the perception of the
patient under examination.
Fig. 5: scenario of the combined images and VR models for interactive exploration
The second level (3D heart anatomy) shows the transparent 3D heart model and the enclosed visualization of the ultrasound segment. The user can turn this combined 3D object and thus inspect which substructures and areas of the heart are cut in the respective standard position. Here the ultrasound film is linked to the anatomical imagination of the heart.
A click switches to the third level (sliced heart structures). As in the training module for standard positions, animations show the rotation of the 3D model such that the transducer head moves to the middle position at the top of the window. The heart model is then reduced to the actually sliced heart structures. Our evaluations showed that students used this feature extensively in order to exercise the spatial mapping of a “front view” of the heart model onto this rotated structural pattern. The pattern is aligned with the real ultrasound film and can be synchronized with it. The graphic dynamic gestalt of the anatomic model slice provides intuitive visual explanations for the corresponding structures in the ultrasound.
The learner can now continue to build up a more detailed understanding by addressing mouse actions directly to the ultrasound window. By touching certain areas with the mouse the user activates descriptions in the form of standard abbreviations. Mouse clicks start detailed verbal comments on the physiological substructures and their appearance in ultrasound. We expect the learner to benefit more from the verbal explanations at this stage: he should now be able to relate the words not only to image details but also to a focussed spatial and anatomical understanding built up through the exploration of visual explanations on the levels “thorax”, “3D heart anatomy”, and “sliced heart structures”.
3.1.4 Self-assessment
A very important element of any training is the assessment of achieved learning levels. In (ideal) apprenticeship situations the teacher accomplishes this duty. Interactive multimedia or VR training environments that intend to cover this objective instead have to provide instruments of self-assessment. For this purpose EchoExplorer features the “Quiz” module. The learner can choose between three test levels with specific feedback. On level A he is presented with randomly selected model slices and requested to choose the corresponding standard position from a selection menu.
Fig. 6: the self-assessment module in test mode A
If the learner is unable to give a correct answer he can call up a “tip” that facilitates the finding of an appropriate assignment. This module exploits the three levels of
visual explanation that have been introduced in the “Interactive Echocardiography” unit.
Backward animations show a reverse rotation that ends with a front view of the transparent 3D-heart model and the ultrasound plane slicing it. This view lets the learner
recognize the position and orientation of the transducer and thus narrow the possible
range of standard views. This kind of support follows a specific elicitation strategy
[10]. Such narrowing of a problem space is analytically shown to be one of the most
effective ways to induce active mental processing, insights, and thereby learning. Thus
the self-assessment module is not just an isolated test layer but closely related to the
other learning scenarios. If test learners recognized gaps of knowledge in the “Quiz”
they typically went back to the related training units and explored them in a more
focussed manner.
Level B shows real ultrasound films that have to be assigned to the pertinent
standard positions. The “tip” is realized by the presentation of the related model slice
view. On level C the learner identifies substructures in a real ultrasound film. The
“tip” support in this module is still insufficient. It only marks potentially relevant
areas.
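One conceivable way to organize such test items is sketched below. The level labels A, B, and C follow the text; the field names, media file names, and tip texts are purely illustrative assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch of a "Quiz" item as described in the text: a stimulus
# (model slice, ultrasound film, or substructure marker), the expected answer,
# and a level-specific "tip". Field names and example content are assumptions.

@dataclass
class QuizItem:
    level: str        # "A": model slice -> standard position
                      # "B": real ultrasound film -> standard position
                      # "C": identify a substructure in a real film
    stimulus: str     # media shown to the learner
    answer: str       # correct assignment
    tip: str          # level-specific support (e.g. reverse rotation for A)

items = [
    QuizItem("A", "model_slice_017.avi", "apical four-chamber view",
             "play reverse rotation back to the front view of the 3D model"),
    QuizItem("B", "patient_042_psax.avi", "parasternal short axis",
             "show the related model slice view"),
]

def check(item: QuizItem, learner_answer: str) -> str:
    return "correct" if learner_answer == item.answer else f"tip: {item.tip}"

print(check(items[0], "subcostal view"))
```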
3.2 Cardio-Sim
In the Cardio-Sim (CS) unit the combined VR models and 3D data sets allow interactive explorations that build up visual expectation-patterns as they are required in
the application of ultrasound in real heart examinations. For this purpose a virtual
ultrasound transducer is used as interaction device. The transducer dummy is placed
on pertinent points on a human model torso.
The position, rotation, and tilt of the device are continuously tracked and mapped onto a virtual transducer model on the screen. The spatial information is also used to slice the associated 2D views out of the 3D model. Transparent presentations of the chest and of the inner and outer structures of the heart provide visual cues for sweeps and rotations of the ultrasound plane. Reference planes of the standard views can be used for guidance. The hand-eye coupling and a visual understanding of the scenes are trained and merge into a Mental Model of the heart. This way the inspection of the anatomic model is closely linked to the handling of the imaging device.
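The step from the tracked transducer pose to the displayed image can be pictured as resampling a 2D slice out of the 3D data along the plane that the pose defines. The following sketch is only an illustration of that idea, with a synthetic volume, a simplified pose (origin plus two in-plane axes), and nearest-neighbour sampling instead of the simulator's actual reconstruction.

```python
import numpy as np

# Illustrative sketch: resample a 2D slice out of a 3D volume along the plane
# defined by a tracked transducer pose. The volume, pose, and sampling grid
# are synthetic; the real simulator slices registered 3D/4D ultrasound data.

def extract_slice(volume, origin, u_axis, v_axis, size=(64, 64), spacing=1.0):
    """Nearest-neighbour resampling of `volume` on the plane origin + s*u + t*v."""
    h, w = size
    s = (np.arange(h) - h / 2) * spacing
    t = (np.arange(w) - w / 2) * spacing
    # World coordinates of every pixel of the slice
    coords = origin + s[:, None, None] * u_axis + t[None, :, None] * v_axis
    idx = np.round(coords).astype(int)
    # Clip to the volume bounds and look up the voxels
    for axis, dim in enumerate(volume.shape):
        idx[..., axis] = np.clip(idx[..., axis], 0, dim - 1)
    return volume[idx[..., 0], idx[..., 1], idx[..., 2]]

volume = np.random.rand(128, 128, 128)        # stand-in for a 3D data set
origin = np.array([64.0, 64.0, 64.0])         # tracked transducer position
u_axis = np.array([1.0, 0.0, 0.0])            # in-plane axes derived from
v_axis = np.array([0.0, 0.7071, 0.7071])      # the tracked rotation and tilt
print(extract_slice(volume, origin, u_axis, v_axis).shape)   # (64, 64)
```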
The next step from model learning to real diagnoses is the integration of real 3D or rather 4D (3D + time) ultrasound image data. These data sets are registered, i.e. geometrically aligned, with the anatomic model. The trainee can exploit the graphic orientation provided by the model in order to become acquainted with the interpretation, tuning, and controlled adjustment of ultrasound images. The VR elements of the scene provide continuous visual feedback, informing the trainee where the actual plane slices the heart and which anatomic structures should be seen. After extensive training in this “training wheel” mode the novice is able to switch off the orientation models and rely exclusively on the ultrasound image as visual feedback.
Fig. 7: the Cardio-Sim simulator setting
Fig. 8: the merged VR model / 3D US data set feedback mode of Cardio-Sim
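The registration mentioned above can be read as a rigid transform that maps points from the ultrasound data set into the coordinate frame of the anatomic VR model, so that both can be rendered in one scene. The sketch below assumes such a transform is already known from a calibration or registration procedure; the rotation, translation, and landmark values are invented for the example.

```python
import numpy as np

# Illustrative sketch of the registration step: a rigid transform (rotation R
# and translation t, assumed to be known from a registration procedure) maps
# points from the ultrasound data set into the coordinate frame of the
# anatomic VR model. All numeric values are hypothetical.

def to_model_frame(points_us, R, t):
    """Apply the rigid registration transform to ultrasound-space points."""
    return points_us @ R.T + t

# Hypothetical registration result: 90 degree rotation about z plus an offset.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([10.0, -5.0, 2.0])

landmarks_us = np.array([[12.0, 3.0, 7.0],    # e.g. apex as seen in the data set
                         [20.0, 8.0, 5.0]])   # e.g. mitral annulus
print(to_model_frame(landmarks_us, R, t))
```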
One of the cognitively demanding requirements of the visually controlled application of cardiac ultrasound is the necessity to continuously adapt to the transducer-relative images. This requires mental rotations [6] that reflect the changing points of view in the various standard positions. Controlled sweeps in each of these diagnostically relevant positions need fluent adaptations to the anatomy of the heart. If a visual conception of this 3D structure is missing, the diagnostician is unable to adjust the transducer. If he can rely only on one inflexible standard conception of the heart anatomy, he consciously has to rotate and map the perceived structures onto this Mental Model. Our field studies showed that skilled cardiologists have built up dynamic Mental Models of the heart that are not tied to fixed conceptual points of view. This allows immediate interpretations and thus a fluent eye-hand coupling. The exploration of the simulation environment allows the novice to actively build up this form of a dynamic Mental Model.
The kind and degree of immersiveness [13] realized in this training module reflects the way the cardiologist will experience diagnostic reality. A “fly-through” of the heart that would allow movement inside a structured scenario is in this sense less appropriate than a local constellation that the trainee can manipulate from the outside. The skills that have to be activated in real diagnoses reflect the natural spatial relationship between the cardiologist and the patient.
4 Work in progress
Adaptivity is a key issue in enabling systems that shall be used for learning and orientation on demand. This requires integrated concepts of knowledge-based evaluation combined with interactive 3D graphics and animation. The Cardio-Sim environment shall in this sense be enhanced by adaptive modules that take into account the activities of the trainee. The most advanced form of analysis and feedback is based on adaptive evaluations of motion patterns [5]. These recognizable physical events have to be mapped onto cognitive orientation demands as they change during actual examinations. Furthermore, the examination steps realized this way are related to the diagnostic context. Based on the evaluation of behavior protocols the system will provide context-sensitive help animations that show the specific deviations from appropriate diagnostic steps and standard positions.
5 Generalization and lessons learned
All developments of enabling systems require a detailed understanding of the competence and abilities that experts apply in problem solving. Furthermore, it is necessary to have a clear conception of the way in which various layers of expertise build on each other. This defines the spectrum and structure of training environments that attempt to cover real-world expertise. Closely related to the contents of such enabling systems, the designer has to be aware of the various cognitive modes that are involved in certain stages of effective problem solving. In the medical domain symbolic, visual, motor, and dynamic abilities, such as mental rotation or the combination of inferences and pattern matches, must be acquired in a coherent and situated form. The presentation media and interaction modes of enabling systems have to reflect these different forms of cognition.
Certainly, individual differences in cognitive and learning style affect learner-driven forms of interaction with the system. On the other hand, enabling systems have to guarantee that basic contents are picked up by every trainee. Therefore introductory and more restrictive modules have to be included. Training modes that are to a higher degree learner-driven should support the active mapping of these contents onto realistic tasks. The most advanced kind of such environments are medical simulators that allow constructive learning. Even for advanced interactive learning environments there is no warranty that the intended results are achieved. Therefore various forms of assessment should be rendered possible. Typically, controlled test tasks provide an authentic way to assess achieved levels of expertise. The value of such assessments is even higher if the learner himself can check the results. Adapted forms of correction and elicitation can be added to the self-assessment modules. This way the trainee gets focussed forms of feedback and support in the learning process.
References
[1] Berlage, Th. (1997): Augmented reality for diagnosis based on ultrasound images. In: Troccaz, J., E. Grimson, and R. Mösges (eds.): Proceedings CVRMed-MRCAS '97 (Lecture Notes in Computer Science No. 1205). Berlin: Springer, pp. 253-262.
[2] Duffy, T.M., J. Lowyck, & D.H. Jonassen (eds.) (1993): Designing Environments for Constructive Learning. New York.
[3] Fox, T. (1993): Kognitiv ergonomische Benutzerschnittstellen - Entwicklung
interaktiver 3D-Visualisierungen und multimedialer Simulationen: Das Tutor-System
COCARD zur Einführung in Ultraschall-Untersuchungen des Herzens. Sankt Augustin: GMD-Studie Nr.218.
[4] Grunst, G., T. Fox, K.-J. Quast, & D.A. Redel (1995): Szenische Enablingsysteme – Trainingsumgebungen in der Echokardiographie. In: Glowalla, U., E. Engelmann, A. de Kemp, G. Rossbach, & E. Schoop (eds.): Auffahrt zum Information Highway. Kongressband Deutscher Multimedia Kongress '95, Heidelberg, 11.-13.6.1995. Berlin, Heidelberg: Springer, S. 174-178.
[5] Grunst, G. & S. Trochim (2000): Enhanced Reality Trainingssysteme in der Medizin. Künstliche Intelligenz, Nr. 2, S. 28-29.
[6] Metzler, J. & R.N. Shepard (1974): Transformational Studies of the Internal
Representations of Three Dimensional Objects. In: Solso, R.L. (ed.): Theories of
Cognitive Psychology: The Loyola Symposium. Hillsdale, NJ: Lawrence Erlbaum
Associates.
[7] O'Malley, C.E., S.W. Draper, and M.S. Riley (1985): Constructive Interaction: A Method for Studying Human-Computer-Human Interaction. Proceedings of INTERACT '84, London, pp. 269-274.
[8] Quast, K. (1997): Computerbasiertes Lernen in 3D-graphischen Szenen – Entwurf, Realisierung und Evaluation einer Anwendung für die kardiologische Ultraschalldiagnostik (Berichte der GMD Nr. 280). München: Oldenbourg.
[9] Redel, D.A. & F. Hoffmann (1998): EchoExplorer. Interaktive CD-ROM. München: Urban & Schwarzenberg.
[10] Rehbein, J. (1980): Hervorlocken, Verbessern, Aneignen. Diskursanalytische
Studien des Fremdsprachenunterrichts. Bochum.
[11] Schön, D.A. (1990): Educating the Reflective Practitioner. San Francisco, Oxford: Jossey-Bass Publishers.
[12] Weidenbach, M., C. Wick, S. Pieper, K.J. Quast, T. Fox, G. Grunst, and D.A. Redel (2000): Augmented Reality Simulator for Training in Two-Dimensional Echocardiography. Computers and Biomedical Research 33, 11-22, Academic Press.
[13] Winn, W.D. (1994): A Conceptual Basis for Educational Applications of Virtual Reality. (Technical report No. HITL R-94-1).