Lecture: Visual Salience and Attention, W5
Visual Attention and “Things that Pop Out” + Human Memory
Ware, Chapter 5 + Dix et al., Chapter 1
Last Time
• Considered Ware’s (and others’) ideas about a “science of visualization”
– What it is… and is it even possible to have a
“science of visualization?”
• Ware argues it is possible because there are
“sensory symbols”, i.e., not all symbols are
arbitrary
• Sensory vs. arbitrary symbols
Sensory vs. Arbitrary Symbols
• At the core of the issue of the efficacy of visualization in understanding is:
– How “natural” vs. “learned” the elements of visual representations are
• Sensory symbols:
– “Symbols and aspects of visualization
that derive their expressive power
from their ability to use the perceptual
processing power of the brain without
learning”
• Arbitrary symbols:
– “Aspects of representation that must
be learned, because the
representations have no perceptual
basis”
Record of a hunt
Arbitrary Symbols
• “Aspects of representation that must
be learned, because the
representations have no perceptual
basis”
• Derive power/utility from (learned)
culture, so dependent on cultural
milieu
– E.g., the ink of the characters “dog” on
paper
• Which obviously has no chance to be
perceptual, i.e., is completely a code
– vs. a picture of a dog
• Which most likely has some unlearned
correspondence with the real animal
• And there are those who argue that
all pictorial representations are arbitrary
Record of a hunt
Sensory Symbols
• “Symbols and aspects of visualization
that derive their expressive power from
their ability to use the perceptual
processing power of the brain without
learning”
• Effective because well matched to
early stages of perceptual processing
– Human visual system has evolved to
detect forms and relationships in world
– HVS not a fully general purpose system
• Not tabula rasa
• Tend to be stable across individuals,
cultures, and time
– E.g., cave drawing still conveys
meaning across millennia
Record of a hunt
Theory of Sensory Languages/Symbols
• Based on idea that human visual system
evolved as an instrument to perceive
physical world
– In contrast to view that visual system is
“universal machine”, “undifferentiated neural
net” that can configure for any world
• Brain tissue appears to be
undifferentiated, but in fact morphology
has specific neural pathways
– Anatomically the same pathways among primates
• And through experimentation some functions of
some areas are known, as shown on next slide
– “collection of highly specialized parallel
processing machines with high bandwidth
interconnections”
• System is designed (better, of course,
evolved) to extract information from the
(particular) world we live in
Overview
• Understanding differences among visual
elements is key to understanding visualization
– Here, “Visual Attention and Information that Pops Out”
• Things that “pop out” useful in “sensorial” way
• Consider “human information processing” – Dix
– Earlier, Ch. 1, saw 3 stage model of perceptual
processing
– Now, will consider “human memory” – 3-stage model
• Sensory (iconic) memory, short term memory, and
long term memory
• Description of information flow through memories
• Attention plays role
– “Searchlight model of attention” focusing on the visual
• See exs. of “things that pop out” – and why
A Model of Perceptual Processing – Ware, Ch. 2
What we do is design information displays!
• An information processing model
– “Information” is transformed and processed
• Physical light does excite neurons, but at this “level of analysis” consider information
– Gives account to examine aspects important to visualization
• Here, clearly, many neural subsystems and the mapping of neural to information processing is pragmatic
– In spirit of visualization as an evolving discipline, yet to develop its theories, laws, …
• Stage 1: Parallel processing – extract low-level properties of the visual scene
• Stage 2: Pattern perception
• Stage 3: Sequential goal-directed processing
Stage 1: Parallel Processing to Extract Low-level
Properties of Visual Scene
(1/2)
• (Very first) neurons fire
– Sensation
• Visual information 1st processed by:
– large array of neurons in eye
– primary visual cortex at back of brain
• Individual neurons and sets selectively
tuned to certain kinds of information
– e.g., orientations of edges or color of light
– Evoked potential experiments
• In each subarea large arrays of
neurons work in parallel
– extracting particular features of
environment (stimulus)
Stage 1: Parallel Processing to Extract Low-level
Properties of Visual Scene
(2/2)
• At early stages, parallel processing
proceeds involuntarily
– largely independent of what we choose to
attend to (though not where we look)
• Rapid
– if we want people to understand information
fast, we should present it so it is easily
detected by these large, fast
computational systems in the brain
• Pre-attentive processing
• Stage 1 processing is:
– Rapid and parallel
– Entails extraction of features, orientation,
color, texture, and movement patterns
– “Transitory”
• Only briefly held in iconic store
– Bottom up, data-driven
Now … Pattern Perception
What we do is design information displays!
• An information processing (the dominant paradigm) model
– “Information” is transformed and processed
• Physical light does excite neurons, but at this “level of analysis” consider information
– Gives account to examine aspects important to visualization
– In spirit of visualization as an evolving discipline, yet to develop its theories, laws, …
• Here, clearly, many neural subsystems and the mapping of neural to information processing is pragmatic
• Stage 1: Parallel processing to extract low-level properties of the visual scene
• Stage 2: Pattern perception
– Today, focus on first elements of this stage, things that “pop out”
• Stage 3: Sequential goal-directed processing
Now … Stage 2: Pattern Perception (summary)
• Rapid active processes
• Specialized for object recognition
– Visual attention and memory
• E.g., for recognition must match features with memory
– Task being performed will influence what is perceived
– Bottom-up nature of Stage 1, influenced by top-down
nature of Stage 3
• Specialized for interacting with environment
• “Two-visual-system hypothesis”
– One system for locomotion and eye–hand coordination
• The “action system”
– One system for symbolic object manipulation
• The “what system”
• Characteristics
– Slower serial processing (vs. Stage 1)
– Involvement of both working (vs. iconic) and long-term
memory
– Both bottom-up and top-down
• More emphasis on arbitrary aspects of symbols than Stage 1
• Top-down processing
– Different pathways for object recognition and visually
guided motion
Pattern Perception Detail – 1st Part
• “Things that pop out”
– Divide visual field into regions and simple patterns
• Continuous contours
• Regions of same color
• Regions of same texture
• “Active”, but not conscious processes
– E.g., saccades (quickly)
• Involvement of both working (vs. iconic)
and long-term memory
– Both bottom-up and top-down
• More emphasis on arbitrary (learned) aspects of
symbols than Stage 1
• Top-down processing
– Different pathways for object recognition and visually
guided motion
• More later, but first, will examine classic
model of human information processing
to provide context for preattentive
processing
Modeling Humans
• Any theory or model is an abstraction
– E.g., Ware’s model focuses on visualization and perception
• Utility of human model lies in how well it helps understanding …
– Here, of visualization process
– Model originally derived for understanding human computer interaction generally
• Card, Moran, and Newell (1983) Model Human Processor for general
– “Classic” example of cognitive architecture with focus on humans interacting with
computers
– Considers: Perceptual system, motor system, cognitive system
– Each has own processor and memory
– Principles of operation dictate system behavior under certain conditions
• A very simple model
• Will also draw from Dix et al. reading, which uses similar information
processing division of elements
Model Human Processor + Attention
• First, a well known and “useful” big picture - Card et al. ’83,
plus attention
– Senses/input → f(attention, processing) → motor/output
– Notion of “processors”
• Purely an engineering abstraction
• Detail next
Model Human Processor + Attention
• Sensory store
– Rapid-decay “buffer” to hold sensory
input for later processing
• Perceptual processor
– Recognizes symbols, phonemes
– Aided by LTM, cf. Ware
• Cognitive processor
– Uses recognized symbols
– Makes comparisons and decisions
– Problem solving
– Interacts with LTM and WM
• Motor processor
– Input from cognitive processor for action
– Instructs muscles
– Feedback
• Results of muscle actions sensed by the senses
• Attention
– Allocation of resources
Model Human Processor + Attention
Similar to Ware Model for Visualization – Compare Both
Model Human Processor – Original, 1/2
• Card et al. ’83
– Supplementary reading
• An architecture with parameters for
cognitive engineering …
– Will see visual image store, etc. tonight
• E.g., memory properties
– Decay time: how long memory lasts
– Size: number of things stored
– Encoding: type of things stored
Model Human Processor – Original, 2/2
• Memory properties
– d: Decay time – how long memory lasts
– m: Size – number of things stored
– k: Encoding – type of things stored
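The d/m/k parameters lend themselves to a small data model. A minimal sketch; the store names and parameter values below are illustrative, in the spirit of Card et al. ’83, not exact figures:

```python
# Sketch of the Model Human Processor's memory parameters:
# d = decay time, m = size (capacity), k = encoding.
from dataclasses import dataclass

@dataclass
class MemoryStore:
    name: str
    decay_s: float     # d: how long an item lasts, in seconds
    size_items: float  # m: how many items can be held
    encoding: str      # k: type of code stored

# Illustrative values; exact numbers vary across studies.
visual_image_store = MemoryStore("visual image store", 0.2, 17, "physical")
working_memory = MemoryStore("working memory", 7.0, 7, "acoustic/visual")
long_term_memory = MemoryStore("long-term memory", float("inf"),
                               float("inf"), "semantic")

for s in (visual_image_store, working_memory, long_term_memory):
    print(f"{s.name}: d={s.decay_s}s, m={s.size_items}, k={s.encoding}")
```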
Also, the 3 Stage Model of Memory
• Experimental literature considers a slightly different
perspective
– E.g., Dix et al.
– Focuses on memory types
– “Parameters”
• Again, similarity
Overview of 3-Stage Model of Memory
• After Dix, will look at elements of human information processing from a
slightly different orientation than “engineering abstraction”
• A bit more fine-grained analysis, following from psychological studies
– But, it is these psychological studies from which the “engineering
abstraction” is derived
• 3 stage model of human memory
– Iconic buffer, STM, LTM
• Models of LTM
• Reasoning
• Problem solving
3-Stage Model of Human Memory
• Sensory (here, iconic) memory – “very” short-term memory
– lasts 1–2 seconds, infinite capacity
• Short-term memory (working memory)
– lasts ~18 seconds, holds 7 ± 2 items
• Long-term memory
– infinite capacity; short of damage, is permanent
– Recall vs. Recognition (Remember vs. Know)
• Retrieval cues
• Will demonstrate later in class …
http://www.if.uidaho.edu/~marbjm/class%202.pdf
Also, “Executive” - Attention
• Central “executive” controls tasking
– Pays, or allocates, attention, cf. Ware’s “searchlight model” of attention
– Bandwidth of attention is limited
• Tasks that require the same resources interfere with one another
• Attention is both a low-level and high-level property of vision
http://www.if.uidaho.edu/~marbjm/class%202.pdf
Sensory Memory: “Very” Short Term Memory
• Buffers for stimuli received through senses
– iconic memory: visual stimuli
– echoic memory: aural stimuli
– haptic memory: tactile stimuli
• Examples
– “sparkler” trail
– stereo sound
• Continuously overwritten – demo follows
A Test – of Visual Iconic Memory
• Recall, information decays quickly from sensory, or iconic memory
• Task
– Will present figure briefly
– Try to remember as many as you can
– Write them down
The Phenomenon
• After presentation, did you continue to “see” items?
– Some purely physiological based “seeing”:
• Afterimage
• Bleaching of pigments
• “bright, or colored, stuff”
– But also, there is a more “memory-based” image (processed further
downstream in memory system)
• Iconic memory
• “dark, or veridical, stuff”
• Reading from the iconic buffer
• 1st of the three stages
Attention: Ware’s Searchlight Model
• Model focuses on elements of attention in perception
• E.g., when “tried to remember” the individual elements
[Diagram: Useful Visual Field of View – Visual Search or Monitoring Strategy – Eye Movement Control]
Attention: Ware’s Searchlight Model
• “Searchlight metaphor”, Ware :
– “Consider the eyeball as an
information-gathering searchlight,
sweeping the visual world under the
guidance of the cognitive centers that
control our attention.”
• Some questions:
– How to attract its attention?
– How to enable it to attend to individual
details?
– How to enable it to perceive emergent
patterns?
– How to do all this in a fraction of a
second?
• Attention is both a low-level and high-level property of vision
Eye Movements: How Searchlight Seeks
• Saccades
– Ballistic movements between fixation
points
– Both:
• Dwells 200-600ms
• Sweeps 20-100ms
– We don’t see much during the sweep
– Also, eyes converge/diverge, refocus when
object moves in Z
– BTW - Saccades are required for visual
system to work
• Else all “washes out”
• Smooth pursuit
– Lock on to object moving in field of view
– Can move head and body while doing
– Eyes converge/diverge, refocus as object
moves in Z
Eye Movement Control Loop
• FYI this semester: … and visual search
• Details follow
How Large is the Searchlight/Attention
“Useful Field of View”
• Ware’s notion of “Useful Field of View”
• When reading text, size of fovea
– ~one word at a time
• When looking for patterns, can be
much larger
• Is changing and “adaptive”
– Varies with target density to maintain a
constant number of targets in attended
region
– Scaling down the display doesn’t help fit
more
• UFoV scales down as cognitive load
(or stress) increases!
Attracting the Searchlight/Attention, 1/2
• Four requirements for “interrupt”
(or attention allocation)
• Easily perceived even if outside
attention focus
• Can be ignored, but continually
reminds
– Keeps “popping out”
• Not so irritating that it makes
use unpleasant
• Be able to display various levels
of urgency
Attracting the Searchlight/Attention, 2/2
• What doesn’t work:
– Small targets in periphery
– Changes in color outside fovea
– Things happening during a saccade
– Single change in icon appearance
(e.g., flag up)
• Why?
• What works:
– Auditory cues are very well suited
– Motion: UFOV >> static UFOV
• At least 40 vs. 4 degrees, maybe
whole field
• Motion is powerful
– Blinking (slightly irritating) or moving
targets
– Urgency coded to motion velocity
Again, Human Memory Stages
• Sensory (here, iconic) memory
– lasts 1-2 seconds, infinite capacity
• Short-term memory (Working memory)
– lasts ~18 seconds, holds 7 ± 2 items
• Long-term memory
– infinite capacity; short of damage is permanent
– Recall vs. Recognition (Remember vs. Know)
• Retrieval cues
http://www.if.uidaho.edu/~marbjm/class%202.pdf
Reading from the Iconic Buffer, 1/2
• Typically can list 3–7 items named
• Short-lived visual, or iconic, buffer
– holds the image for a second or two
• Read images and place in STM
– 3-stage model
[Figure: Set of miscellaneous symbols]
• Can get about 5–7 items until run out of short-term
(working) memory capacity
• Limitation of 5–7 comes from:
– Decay of iconic memory
– Rate at which can read from visual buffer
– Capacity of working memory
• In each fixation between saccadic eye
movements, an image of the world is captured
Reading from the Iconic Buffer, 2/2
• Again, limitation of ~7 comes from:
– Decay of iconic memory
– Rate at which can read from visual buffer
– Capacity of working memory
[Figure: Set of miscellaneous symbols]
• From each image,
– Brain must identify objects,
– Match them with objects previously perceived, and
– Take information into working memory for symbolic
analysis
• Searchlight model of attention:
– Visual information is acquired by pointing the fovea at
regions of the visual field that are interesting
– Then using a scanning process in which objects are
read from an image buffer for more extensive
processing
Short-term memory (STM)
•
“Scratch-pad” (or buffer) for temporary recall
– rapid access ~ 70ms
– rapid decay ~ 200ms
– limited capacity - 7± 2 chunks
• Requires “rehearsing” or otherwise holding information in this store
so that it can/will be transferred to LTM
• Chunking, recoding, etc.
– affects amount of information retained, entering LTM
Examples - Chunking
HEC ATR ANU PTH ETR EET
9563813453
0121 414 2626
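The phone-number example can be sketched in code; `chunk` is a hypothetical helper, and chunk size 4 is an arbitrary choice:

```python
# Chunking: grouping raw items so they occupy fewer of STM's 7±2 slots.
def chunk(s: str, size: int) -> list:
    """Split a string into fixed-size chunks."""
    return [s[i:i + size] for i in range(0, len(s), size)]

digits = "01214142626"    # 11 raw digits: too many for 7±2 slots
print(chunk(digits, 4))   # -> ['0121', '4142', '626']: 3 chunks fit easily
```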
Long-term Memory (LTM)
•
Repository for all our knowledge
– slow access ~ 1/10 second
– slow decay, if any
– huge or unlimited capacity
•
Two types:
– Episodic (episodes): Serial memory of events
– Semantic (“meanings”): Structured memory of facts, concepts, skills
• Semantic LTM derived from episodic LTM
LTM – Models of Semantic Memory
•
Semantic memory structure
– Provides access to information
– Represents relationships between bits of information
– Supports inference
•
Many models, theories, accounts, schemata proposed
•
Semantic network model (example next slide):
– Inheritance – child nodes inherit properties of parent nodes
– Relationships between bits of information explicit
– Supports inference through inheritance
•
Other Models (examples follow):
– Scripts, frames, production rules
LTM - semantic network
Models of LTM – Frames, Schemata
•
Information organized in “memorial data structures”
•
Slots in structure instantiated with values for instance of data
•
Type–subtype relationships
DOG
  Fixed:    legs: 4
  Default:  diet: carnivorous
            sound: bark
  Variable: size
            colour

COLLIE
  Fixed:    breed of: DOG
            type: sheepdog
  Default:  size: 65 cm
  Variable: colour
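The DOG/COLLIE frames can be sketched with plain dicts; `ChainMap` stands in for the type–subtype slot lookup of a real frame system:

```python
# Frame-style memory structures: a COLLIE frame inherits slots
# (fixed and default values) from its DOG parent frame.
from collections import ChainMap

DOG = {"legs": 4, "diet": "carnivorous", "sound": "bark"}
COLLIE = ChainMap({"type": "sheepdog", "size": "65 cm"}, DOG)

print(COLLIE["size"])   # own default slot
print(COLLIE["legs"])   # inherited from DOG
```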
Models of LTM - Scripts
• Model of stereotypical information required to interpret situation
• Script has elements that can be instantiated with values for context
Script for a visit to the vet
Entry conditions: dog ill
vet open
owner has money
Result:
dog better
owner poorer
vet richer
Props:
examination table
medicine
instruments
Roles:
vet examines
diagnoses
treats
owner brings dog in
pays
takes dog out
Scenes:
arriving at reception
waiting in room
examination
paying
Tracks:
dog needs medicine
dog needs operation
LTM - Storage of information
•
LTM much studied in psychology:
•
Rehearsal
– information moves from STM to LTM
•
Total time hypothesis
– amount retained proportional to rehearsal time
•
Distribution of practice effect
– optimized by spreading learning over time
•
Structure, meaning and familiarity
– such information is easier to remember
LTM - Forgetting
•
Decay
– information is lost gradually but very slowly
•
Interference
– new information replaces old: retroactive interference
– old may interfere with new: proactive inhibition
•
So, may not forget at all, memory is selective …!
•
Also, affected by emotion – can subconsciously “choose” to forget
LTM - Retrieval
•
Recall
– information reproduced from memory can be assisted by cues, e.g. categories,
imagery
•
Recognition
– information gives knowledge that it has been seen before
– less complex than recall - information is cue
• ... and much more, e.g., eyewitness testimony
Back to Ware’s Preattentive Processing
Pre-attentive Processing
• Can do certain things to visual elements to increase likelihood of
identification after even brief exposure
• Certain simple shapes or colors “pop out” from surroundings
– Due to “pre-attentive” processing
• i.e., occurs before mechanisms of conscious processing occur
• Pre-attentive processing determines what objects are made
available for attention (allocation of processing resources)
• Understanding of what is processed pre-attentively is a direct and
important contribution of vision science to data visualization
– See Healey at http://www.csc.ncsu.edu/faculty/healey/PP/index.html
• Recall, “count the number of 1’s in the tables that follow”
34160542300740587058588458
34712447745473444494409458
94309439895093849045071090
03483294383094809383494830
39322903481907400042233839
34160542300740587058588458
34712447745473444494409458
94309439895093849045071090
03483294383094809383494830
39322903481907400042233839
34160542300740587058588458
34712447745473444494409458
94309439895093849045071090
03483294383094809383494830
39322903481907400042233839
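For reference, the same task that forces a slow serial search on a viewer is trivial for a program (the rows below are the table from the slide):

```python
# Count the 1's in the slide's digit table: a serial search for a human
# unless the 1's are made preattentively distinct (e.g., colored blue).
rows = [
    "34160542300740587058588458",
    "34712447745473444494409458",
    "94309439895093849045071090",
    "03483294383094809383494830",
    "39322903481907400042233839",
]
ones = sum(row.count("1") for row in rows)
print(ones)  # -> 4
```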
Experimental Data
• Generic pre-attentive processing task
– Find target within a set of distractors
• E.g., as with blue (preattentive, color) 1’s on a black ground
– Vary whether targets are of a type preattentively processed
• Circles – times to detect preattentively distinct items
• X’s – times for other symbols
• Number of irrelevant items varies
• Pre-attentive symbols: <= 10 msec / item
– Distractors have little effect
– Primitive features extracted early in visual processing
• Other symbols: 200 ms+ per 3 items
– Suggests serial search of set
• Preattentively processed elements are
“information that pops out”
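The circles-vs-X’s pattern can be sketched as a toy linear model; the base time and slopes here are assumptions for illustration, not Ware’s measured values:

```python
# Toy model of visual search time: preattentive targets give a nearly
# flat slope (< ~10 ms/item); other symbols suggest serial search
# (on the order of 200 ms per 3 items, ~70 ms/item).
def response_time_ms(n_distractors, preattentive, base_ms=450.0):
    slope = 5.0 if preattentive else 70.0  # assumed ms per distractor
    return base_ms + slope * n_distractors

for n in (5, 20, 40):
    print(n, response_time_ms(n, True), response_time_ms(n, False))
```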
Laws of Pre-attentive Display
• For something to “pop out”, or,
– be attended to pre-attentively, or,
• without conscious control,
– and within 10 msecs …
• Must stand out on some simple dimension:
– Color
– Simple shape – orientation, size
– Motion
– Depth
• Suggests “feature”, as in “visual primitive feature”, level processing
– Lessons from low-level vision
– Applications in icon design
– Use of texture (see Ware)
– Glyph design
• Examples follow …
Examples of Pre-attentive Elements
• Most of these
preattentively
processed in primary
visual cortex
• Not preattentive:
– Compound (juncture)
“Catalog” of Preattentive Features
• Form
– Line orientation
– Line length
– Line width
– Line collinearity
– Size
– Curvature
– Spatial grouping
– Blur
– Added marks
– Numerosity
• Color
– Hue
– Intensity
• Motion
– Flicker
– Direction of motion
• Spatial Position
– 2D position
– Stereoscopic depth
– Concave/convex shape from
shading
Conjunction of Features Not Necessarily
Preattentively Processed
• Groups of, e.g., large and green items do not “pop out”
• Color and size do not lead to preattentive processing
• Rather, search becomes a serial task
• As with conjunction of lines
Summary
Laws of Pre-attentive Display
• Must stand out on some simple dimension
– Color
– Simple shape = orientation, size
– Motion
– Depth
• Lessons for highlighting – one of each
Glyphs and Multivariate Discrete Data
• Representing data with 1, 2, or 3
attributes is easy
• Representing higher-dimensional,
multivariate, data is hard
– Often discrete values, e.g., gender,
occupation, education
– How to display visually?
• A glyph is a single graphical object that
represents a multivariate data object
• Composite glyphs
– VTK facilities
– All preattentive attributes are useful
• Integral and separable dimensions
– Issues of perceptual independence of
display dimensions
– E.g., does color coding interfere with
discriminating size?
[Figure: Glyphs with five attributes]
Example: Glyphs with Five Attributes
• What are the
dimensions?
• How easy is it to
distinguish?
Integral and Separable Dimensions
• Perceptual independence of display dimensions
• Integral display dimensions
– Two or more attributes of a visual object are perceived holistically
• E.g., perception of rectangular shape
– Perceived as a combination of rectangle’s length and height
• E.g., perception of yellow light as combination of green and red light
• Separable display dimensions
– Tend to make judgments about each graphical dimension separately
• E.g., ball size and color
– Analytic (vs. holistic) processing
• Will consider three experimental methods of determining integrality vs. separability
Integral-Separable Dimension Pairs
• In fact continuum of
integrality-separability
– Always some
interference between
some data values
presented using
different graphical
elements of a single
graphical object
General Problem
of Multidimensional Discrete Data
• Given a set of entities, each of which has values on a number of
attribute dimensions, how might those entities be represented
visually?
– And to what purpose?
• data exploration, decision, …
• Examples:
– 1000 beetles, each measured on 30 anatomical characteristics
• Classification, relation to ecological niche
– 500 stocks, each described by 20 financial variables
• Selection for gain
• Glyph display:
– Each entity represented by a graphical object
– Data attributes are mapped to graphical attributes of each glyph
• A mapping of data dimension to graphical attributes of glyph
• In fact limited set of visual attributes available
– From pre-attentive processing, integral-separable dimensions
• Chart on next slide summarizes
More later …
End
Chernoff Faces
Data attributes are mapped to graphical attributes of each glyph
• Use elements of face for attributes
• Chernoff, 1973
• http://www.epcc.ed.ac.uk/computing/training/document_archive/SciViscourse/SciVis.book_47.html
Glyph Design
Data attributes are mapped to graphical attributes of each glyph
• Summary of graphical attributes
for glyph design >
• Many not independent
– E.g., must use color to make texture show up
– Blink interferes with motion coding
• Eight dimensions likely maximum
• Most differentiated using:
– Color
– Shape
– Spatial position
– Motion
• Also, how many resolvable steps
on each dimension?
– E.g., only 8 rapidly resolvable colors
– E.g., number of orientation steps about 4, etc.
• Estimate 2 bits (4 alternatives) per each
of 8 dimensions
– Gives 2^16 = 65,536 combinations (16 bits)
– But conjunction searching, etc., limit to much less
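The back-of-envelope estimate above, worked out:

```python
import math

# 2 bits = 4 resolvable alternatives on each of 8 glyph dimensions.
alternatives_per_dim = 4
dims = 8
combinations = alternatives_per_dim ** dims
bits = math.log2(combinations)
print(combinations, bits)  # -> 65536 16.0
```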
Stars, Whiskers, and Other Glyphs
• Whisker plot
– Each data value represented by a line
segment from a central point
– Length of line denotes value of attribute
• Star: same, with line ends connected
• Exvis package
– Line angles, orientations, widths
• Large number of glyphs becomes
perceived as texture field
– Limitations occur
• Better to use small number of
orientations, e.g., 3 for rapid
classification
[Figure: Whisker or fan plot, star, Exvis stick]
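A whisker/star glyph reduces to simple geometry; this sketch (the function name is ours) computes the segment endpoints, which a star plot would then connect:

```python
import math

def star_vertices(values, cx=0.0, cy=0.0):
    """Endpoint of one whisker per attribute, evenly spaced in angle;
    segment length encodes the attribute's value."""
    n = len(values)
    return [(cx + v * math.cos(2 * math.pi * i / n),
             cy + v * math.sin(2 * math.pi * i / n))
            for i, v in enumerate(values)]

pts = star_vertices([1.0, 0.5, 0.8, 0.3])  # 4 attributes -> 4 whiskers
print(pts)
```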
Star Plot Glyph
Conclusion
• Things do “pop out”
– How might one test a set of symbols to determine if some do pop
out and others don’t?
– Why do things pop out?
• How does the 3-stage model of memory help understand the mechanism?
• How does the searchlight model of attention help understand the mechanism?
– How does one use them in designing visual representations of
data?
• What are glyphs?
– What are the lessons from pre-attentive processing to be applied
in glyph design?