QUERY PROCESSING IN MULTIMEDIA DATABASES
Türker YILMAZ
99050080
(TERM PAPER)
Abstract. In this term paper, articles related to query processing in multimedia
databases are surveyed together, because the approaches they present are
complementary and are best treated as a whole rather than in isolation. The articles
focus on object-oriented approaches to multimedia database design, spatial approaches
to image retrieval, content-based queries, and solutions to modeling problems and
query-result display difficulties. All of these approaches are covered, and mathematical
details are given only when necessary, in order to avoid distracting from the main subject.
1. INTRODUCTION
Multimedia Data is any unstructured piece of information stored in the
Multimedia Database. A Multimedia Database differs from a conventional database in
that its content may consist of pictures, sound clips, movies, documents, applets and
text (in PostScript, DVI, PDF, etc.).
Large image databases are commonly employed in applications such as criminal
records, customs, plant root databases and voters' registration databases.
Most research in Multimedia Database design has focused on a particular
kind of multimedia data, and most of it is concerned with what relations are needed to
adequately store the context of a particular Multimedia Database.
Another direction of research on Multimedia Databases focuses on data
structures and algorithms for storing and processing multimedia content.
The related articles about query processing in multimedia databases specifically
focus on the following subjects:
a. An object oriented approach to multimedia database design.
b. Spatial approaches to image retrieval methods.
c. Content based queries, including automated feature extraction methods
supported with the alphanumeric query modules.
d. Solutions to modeling problems and query display difficulties.
2. A GENERAL APPROACH:
2.1. MULTIMEDIA MODEL CLASSIFICATIONS
Multimedia Description Model: It provides the linguistic mechanism for
identifying the huge amount of conceptual entities stored in raw objects.
Multimedia Presentation Model (MPM): Describes the temporal and spatial
relationships among differently structured multimedia data.
Multimedia Interpretation Model: There are two levels of representation
considered in the interpretation model: The feature level and the concept level. The
feature level manages recognizable measurable aspects of description level objects.
Each description level object is indexed using its features. The concept level describes
the semantic content of the description level objects. Each relevant concept is mapped
into the description level objects that match the concept.
In similarity retrieval, the system is expected to return suitable candidates according
to subjective measures. Therefore the image database system should be object oriented.
A Canonical media object (CO) is a higher level view of a raw object and
corresponds to the entire raw object, whereas a media object (MO) represents a relevant
portion of a canonical media object.
Examples of MO’s are regions of images, sequences of regions of video
frames, video shots and, words or paragraphs in text documents.
Operations defined on MO’s are the usual editing primitives like creation,
modification, access etc.
2.2. ANALYSIS AND RETRIEVAL PROCESS
One of the aims of interpreting a set of persistent Multimedia data is to make
explicit the structure and content present in the Multimedia data in order to support
their retrieval.
Therefore the Multimedia Interpretation Model (MIM) should allow the representation of
the semantic content of Multimedia objects.
The values of the features defined in the MIM are calculated for the objects of the
description model, and queries are performed using these features and their semantic
descriptions as arguments.
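As a rough illustration of this idea (the object names, features and threshold below are invented for the example, not taken from the articles), description-level objects can be seen as carrying both feature values and recognized concepts, and a query can be answered by filtering on both:

# Minimal sketch: description-level objects carry feature values (feature level)
# and recognized concepts (concept level); a query filters on both.
# All names and thresholds here are illustrative assumptions.

objects = [
    {"id": "img1", "features": {"avg_height_m": 250.0}, "concepts": {"skyscraper", "building"}},
    {"id": "img2", "features": {"avg_height_m": 40.0},  "concepts": {"church", "building"}},
]

def query(objects, required_concept, feature, threshold):
    # Keep objects that were interpreted as the concept and whose
    # feature value exceeds the threshold.
    return [o["id"] for o in objects
            if required_concept in o["concepts"]
            and o["features"].get(feature, float("-inf")) > threshold]

print(query(objects, "skyscraper", "avg_height_m", 200.0))   # ['img1']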
A block diagram of the analysis and retrieval process is given in the corresponding article.
3. MULTIMEDIA DATABASE GENERATION
There are three main phases:
3.1. DATABASE POPULATION:
Only the type of the data is known, and the data is stored completely. Relevant objects
are identified by interacting with the user. Features are extracted, and the concepts
associated with the relevant objects are recognized.
Feature extraction from a picture can never be complete, because there is always a
possibility that we may need some information in the future that was not extracted.
Manual indexing and feature extraction could fail to take into account some particular
feature, which may be relevant in view of some future search requests. Due to this,
automatic indexing of an image based on the extracted features is required.
Since self-organizing image retrieval systems are not restricted by their human
designer’s limitations in understanding the complex nature of visual perception, they
are open-ended and allow learning and improving continuously.
But this leads to a problem known as “the curse of dimensionality”:
more features do not necessarily imply better classification.
This leads to the conclusion that abstracting information from a multimedia
artifact is not enough and we also should have some provision to be able to specify
operations on artifacts that can be executed at run time.
3.2. ACCESS STRUCTURE GENERATION:
Using the feature and concept values, the system creates appropriate access
structures that will speed up the subsequent query processing.
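A minimal sketch of one such access structure, assuming concept recognition has already been carried out during population (the object identifiers and concepts below are illustrative): an inverted index from concepts to object identifiers replaces a full scan at query time.

from collections import defaultdict

# Illustrative data: object id -> set of recognized concepts.
recognized = {
    "img1": {"skyscraper", "building"},
    "img2": {"church", "building"},
    "img3": {"bell tower", "building"},
}

# Build the access structure once, after database population.
index = defaultdict(set)
for obj_id, concepts in recognized.items():
    for concept in concepts:
        index[concept].add(obj_id)

# At query time a concept lookup is a dictionary access, not a full scan.
print(sorted(index["building"]))     # ['img1', 'img2', 'img3']
print(sorted(index["skyscraper"]))   # ['img1']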
3.3. QUERY FORMULATION AND EXECUTION:
The user formulates the query by interacting with the graphical interface
provided by Query formulation tool.
Concepts associated with the objects of the description model can be
recognised either during database population or at retrieval time. The first solution
requires a pre-analysis, whereas the second requires a run-time recognition
method. The first solution slows down the insertion process, whereas the second
allows faster insertion but slower execution of conceptual-level queries.
4. QUERYING THE MULTIMEDIA DATABASE
There are basically two modes for visiting a Multimedia Database.
 Browsing: the user has only a vague idea of what he or she is looking for.
 Content Based Retrieval: Where a request is specified and retrieval of objects
satisfying the queries is expected.
Content Based Retrieval in Multimedia environments generally takes the form
of similarity queries, which are needed when:
-an exact comparison is not possible,
-retrieved objects need to be ranked, so that the set of retrieved objects can be
restricted and qualifying objects are shown to the user in decreasing order of
relevance.
4.1. QUERY RESTRICTIONS
A query may contain the following types of restrictions:
 Feature and Concepts: The user may express restrictions on the values of the
object’s features and on the values of concepts.
 Object Structure: The query formulation tool will allow the user to make
restrictions on the structure of the Multimedia objects to be retrieved.
 Spatio Temporal Relationships: User should have the possibility to formulate
restrictions on the spatial and temporal relationships of the objects to be retrieved.
 Uncertainty: for example, users may not be certain of the color of an object. The query
formulation tool will allow this uncertainty to be expressed.
4.2. THE MULTIMEDIA QUERY LANGUAGE
If the user specifies a certain concept in the query, the answer set may also
contain objects that do not contain that concept but contain other related concepts,
where relatedness is defined through a relationship between concepts.
Queries are evaluated using expressions such as <Condition>,
<precise-comparison> and <imprecise-comparison>.
Weights are included in these expressions in order to provide ranked
retrieval of Multimedia data.
Selectors are needed to cope with features, recognition degrees and structure,
in addition to traditional selectors for accessing fields of structured values and for
evaluating the methods of objects.
Example:
Consider a schema (given as a diagram in the original article) in which the classes
containing canonical objects are MPEG, MJPEG, Frames, JPEG and GIF.
Looking at the conceptual-level schema, skyscrapers, churches and bell towers are
subclasses of the BUILDING class.
Example:
Let us suppose that the user needs to “retrieve all images of all skyscrapers that are
higher than 200m”
This can be done with:

SELECT I
FROM I in images
WHERE I match any
    (SELECT SS
     FROM SS in SKYSCRAPERS
     WHERE SS.height > 200)
5. SPATIAL APPROACHES TO FEATURE IMAGE QUERY:
Image queries can be performed by regions and their spatial and feature
attributes. To provide this the proposed system integrates content based and spatial
query methods in order to enable searching for images by arrangements of regions.
The objective of content based visual query (CBVQ) is to retrieve the images that are
most similar to the user’s query image by performing a similarity search.
In spatial image query (SaFe) the images are matched based upon the relative
locations of symbols. For example, a relative spatial query may ask for images in which
symbol A is to the left of symbol B.
5.1. HOW DOES IT WORK?
In the integrated SaFe query system, regions and their feature and spatial
attributes are first extracted from the images; the overall match score between images
is then computed by summing the weighted distances between the best-matching regions
in terms of spatial locations, sizes and features.
5.2. THE SaFe SYSTEM
In the SaFe system:
-Each object is assigned a minimum bounding rectangle,
-Distances between objects are computed,
-The user assigns a relative weighting α_x to each region attribute x. For example, the user may
weight the size attribute more strongly than the color feature value and the location in
one query.
-The overall single-region query distance between query region q and target region t is given by

d_{q,t} = \alpha_s d_{q,t}^{s} + \alpha_a d_{q,t}^{a} + \alpha_m d_{q,t}^{m} + \alpha_f d_{q,t}^{f}

i.e. the weighted sum of the per-attribute distances between the best-matching regions.
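A minimal sketch of this weighted combination (the attribute names, weights and distance values below are illustrative assumptions; the actual per-attribute distances come from the SaFe feature and spatial extraction stages):

# Sketch of the overall single-region query distance: a weighted sum of the
# per-attribute distances (e.g. spatial location, size and feature distances).
# Weights and distance values are illustrative placeholders.

def region_distance(per_attribute_distances, weights):
    return sum(weights[a] * per_attribute_distances[a] for a in per_attribute_distances)

weights = {"spatial": 1.0, "area": 0.5, "dimensions": 0.5, "feature": 2.0}
d_qt = {"spatial": 0.10, "area": 0.03, "dimensions": 0.05, "feature": 0.20}
print(round(region_distance(d_qt, weights), 3))   # 0.54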
5.3. STRATEGIES FOR SPATIAL IMAGE QUERIES
The overall image query strategy consists of joining the results of the queries
on the individual regions in the query image.
There are two strategies for image queries:
-Parallel attribute query strategy, which processes the query by first computing
parallel queries on each of the region attributes.
-Pipeline attribute query strategy, which avoids the computation of the
attribute JOIN required in the parallel strategy, by using an indexing structure.
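The sketch below illustrates only the parallel strategy, under the assumption that each attribute query returns a map from candidate region identifiers to distances; the partial results are then joined on the region identifier and the weighted distances are summed (all data and weights are illustrative).

# Parallel attribute query strategy (sketch): run one query per region
# attribute, then JOIN the partial results on the candidate region id and
# sum the weighted distances.

attribute_results = {
    "spatial": {"r1": 0.1, "r2": 0.4, "r3": 0.2},
    "feature": {"r1": 0.3, "r2": 0.1, "r3": 0.5},
}
weights = {"spatial": 1.0, "feature": 2.0}

# JOIN: keep only candidates returned by every attribute query.
common = set.intersection(*(set(res) for res in attribute_results.values()))
overall = {rid: sum(weights[a] * attribute_results[a][rid] for a in attribute_results)
           for rid in common}

# Candidates ranked by increasing overall distance.
for rid, dist in sorted(overall.items(), key=lambda kv: kv[1]):
    print(rid, round(dist, 2))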
5.4. FEATURE QUERY
In order to provide color image retrieval, a query-by-color method is used. To
support it, an automated color region extraction system named the “single-color
quadratic back-projection system” (SCQBP) is proposed.
The system first generates a color histogram h for each image. For each color m
such that h[m] ≥ r, where r is a threshold, a binary color set c_m is generated such that
c_m[m] = 1 and c_m[i] = 0 for all i ≠ m. Then each binary set c_m is back-projected onto
the image using

B[x,y] = \max_{j = 0, \ldots, M-1} \left( A_{j,k} \, c_j \right)

where A is a color-similarity matrix and k is the quantized color of pixel (x, y).
The detected regions are extracted and added to the region table.
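A rough sketch of the back-projection step, assuming M quantized colors, a color-similarity matrix A and a binary color set selecting a single dominant color (the image, matrix and color values are illustrative; the real system additionally thresholds B and extracts connected regions into the region table):

import numpy as np

# Sketch of single-color back-projection. I[x, y] holds the quantized color
# index of each pixel; A is an M x M color-similarity matrix; c is the binary
# color set c_m selecting dominant color m. All data below is illustrative.

M = 4
A = np.eye(M) + 0.2 * (np.ones((M, M)) - np.eye(M))   # crude color similarity
I = np.array([[0, 0, 1],
              [2, 0, 3]])                              # quantized color indices

m = 0                                                  # dominant color selected from histogram
c = np.zeros(M)
c[m] = 1.0                                             # binary color set c_m

# B[x, y] = max_j (A[j, k] * c[j]) with k the color of pixel (x, y):
B = np.max(A[:, I] * c[:, None, None], axis=0)
print(B)   # high response where the pixel color matches (or resembles) color m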
Related image retrieval techniques such as
-Synthetic color region image retrieval
-Color photographic image retrieval
are proposed and implemented. The implementation can be found on the Web at
http://disney.ctr.columbia.edu/safe
6. CONTENT BASED QUERIES and ALPHANUMERIC QUERY
SUPPORT
The proposed system offers support both for alphanumeric queries, based on
alphanumeric data attached to the image file, and for content-based queries using
image examples, all accessible from within a user-friendly GUI.
6.1. RELATED DEFICIENCIES
In existing systems, queries are typically posed using SQL-like languages.
Such methods do not utilize the image content for retrieval.
The inherent problem of noise in images prohibits searching for exact matches
in many cases.
The proposed system implements an image retrieval method using the Self-Organizing
Hierarchical Optimal Subspace Learning and Inference Framework for Object
recognition (SHOSLIF-O).
The system incorporates three major modules:
the SHOSLIF-O module, the alphanumeric query module and the GUI module.
The SHOSLIF-O module analyses all images in the database and builds a
hierarchical structure for efficient search providing the query-by-image content
capability of the system.
Alphanumeric database fields can be defined by the user in the definition
phase and a flat file imported by the user can act as a database provided it matches the
field count given in the definition phase.
In the query phase, the user can enter a text query and the alphanumeric
database modules search the database and come out with the image files that satisfy
the given conditions.
The automated hierarchical discriminant analysis method proposed in the
paper recursively decomposes a huge, high-dimensional, non-linear problem into a
collection of smaller and simpler problems using a tree structure. At the leaf nodes the
problem is locally approximated by decision boundaries.
In this method, each node of the hierarchical tree finds a set of most
discriminating features for the sample population it receives and further divides them
using these automatically selected features.
In this work a pattern recognition technique – Karhunen-Loeve projection – is
combined with multidimensional discriminant analysis to derive the most discriminant
features from the samples. These feature sets are used to build a network that allows
O(log n) complexity for retrieving the appropriate class from a query image.
Leaf nodes in this tree represent the smallest cells defined by the training set;
as the processing moves down from the root node of the tree, the space tessellation
tree subdivides the training samples into recursively smaller subproblems.
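A minimal sketch of such a space tessellation tree and its logarithmic-time search, with one simplification: instead of projecting onto automatically derived MEF/MDF subspaces at each node, each node here just splits its samples around two sample centers (all data is synthetic and illustrative).

import numpy as np

# Sketch of hierarchical space tessellation and near-logarithmic retrieval.
# Real SHOSLIF nodes select MEF/MDF subspaces; splitting around two sample
# centers is an illustrative stand-in for that step.

class Node:
    def __init__(self, samples, leaf_size=2):
        self.samples = samples
        self.children = []
        if len(samples) > leaf_size:
            self.centers = samples[np.random.choice(len(samples), 2, replace=False)]
            side = np.argmin(np.linalg.norm(samples[:, None] - self.centers, axis=2), axis=1)
            if 0 < side.sum() < len(samples):          # both cells non-empty: keep subdividing
                self.children = [Node(samples[side == s], leaf_size) for s in (0, 1)]

    def search(self, query):
        if not self.children:                          # leaf cell: scan the few remaining samples
            d = np.linalg.norm(self.samples - query, axis=1)
            return self.samples[np.argmin(d)]
        best = int(np.argmin(np.linalg.norm(self.centers - query, axis=1)))
        return self.children[best].search(query)       # descend one branch per level

rng = np.random.default_rng(0)
data = rng.normal(size=(64, 8))                        # 64 training "feature vectors"
tree = Node(data)
print(tree.search(data[5]))                            # retrieves the stored vector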
In the resulting tree, each node represents a fovea image extracted from the main image.
The fovea image is produced using a visual attention mechanism, which finds
areas of interest using a scanning technique in the learning phase (construction of the
tree).
6.1.1. MEFs (Most Expressive Features)
Each input image can be treated as a high dimensional feature vector by
concatenating the rows of the subimage together, using each pixel as a single feature.
Principal component analysis is performed via the Karhunen-Loeve projection, and the
resulting components are called Most Expressive Features (MEFs) because they best
express the training-set population in the sense of a linear transform.
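A minimal sketch of MEF computation, assuming tiny synthetic images and using a plain eigen-decomposition of the sample covariance (the image size, number of images and number of retained components are arbitrary choices for the example):

import numpy as np

# Sketch of Most Expressive Features: flatten each (tiny) image into a vector,
# then apply the Karhunen-Loeve projection (PCA) and keep the leading
# eigenvectors of the sample covariance.

rng = np.random.default_rng(1)
images = rng.random((20, 8, 8))                 # 20 images of 8x8 pixels
X = images.reshape(len(images), -1)             # each row: one image as a feature vector

mean = X.mean(axis=0)
cov = np.cov(X - mean, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
mef_basis = eigvecs[:, ::-1][:, :5]             # 5 most expressive directions

mef_features = (X - mean) @ mef_basis           # each image described by 5 MEFs
print(mef_features.shape)                       # (20, 5)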
6.1.2. MDFs (Most Discriminating Features)
When the MEF projection cannot separate classes in the population, projecting
onto a vector Z produces values that are optimized for separating these classes;
it is the discriminant analysis procedure that produces the Z vector. The features
produced by this procedure are called the Most Discriminating Features (MDFs)
because they optimally discriminate among the training-set classes in the sense of
a linear transform.
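A corresponding sketch of MDF computation via multidimensional discriminant analysis, assuming labelled feature vectors (for instance the MEF vectors of the previous sketch); the data, labels and dimensionality below are illustrative:

import numpy as np

# Sketch of Most Discriminating Features: find directions that maximize
# between-class scatter relative to within-class scatter.

def mdf_basis(X, labels, k=1):
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))                       # within-class scatter
    Sb = np.zeros((d, d))                       # between-class scatter
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    # Directions of Sw^-1 Sb with the largest eigenvalues discriminate best.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:k]].real

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])
labels = np.array([0] * 10 + [1] * 10)
Z = X @ mdf_basis(X, labels)                    # projection separating the two classes
print(Z[:3].ravel(), Z[-3:].ravel())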
Sample MEF and MDF projections are illustrated in the original article.
When building the tree, the system can proceed in a supervised or
unsupervised learning mode.
In supervised learning mode, the user supplies a hierarchical set of labels for
each training image.
In unsupervised learning mode, the machine learns its own best estimates for
the appropriate classifications of the training samples.
The image space is partitioned by a Voronoi tessellation applied recursively at each
level of the tree, which is called a hierarchical Voronoi tessellation.
In this tree-building process, the training samples are added to the
tree in a series of batches. Larger batches produce more efficient image retrieval trees,
but they take longer to develop. Smaller batches can be processed more quickly, but
they may yield a tree that is different from the one obtained when all the training images
are given as a single batch. This is a trade-off between learning time and retrieval-time
complexity.
6.2. KEYWORD-BASED QUERY SUPPORT
In large image databases, alphanumeric data associated with an image is
entered in an alphanumeric database. For example, in a criminal database, textual
information such as the criminal's name, height and weight is also provided. It is
important for the user to be able to access this textual information so as to provide
more clues, allowing the system to narrow down the search.
This system uses a relational database structure for storage and retrieval of
images and associated data.
6.3. GENERAL FEATURES OF THE PROPOSED SYSTEM
The content-based query-by-example module and the alphanumeric retrieval module
are integrated in a way that is invisible to the user, enhanced by the graphical user
interface. The appearance-based and text-based modes interact in the following ways:
1-The matched items of appearance-based retrieval have pointers to the
associated text.
2-One can also start searching with a key field and retrieve images.
3-One can use alphanumeric search to find all the matched persons and their
face images. Then the user can use those images to find people who look similar to
those matched.
The GUI of the proposed system is shown in the original article.
6.4. RESULTS OF THIS APPROACH
Many computer vision researchers are experimenting with the accuracy of face
and gesture recognition. This approach (SHOSLIF) always retrieves the query image
from the database as its first choice, and the second image retrieved was an image of
the correct individual in 98% of the test probes.
7. PROPOSALS FOR EASING THE QUERY PROBLEMS AND
RESULT DISPLAY
Consider an example: a movie database can be created using the following attributes:
MOVIE (Title, Year, Producer, Director, Length, Movie_type, Prod_studio)
But with this definition we cannot easily perform operations on movies, such as
extracting the opening sequence or removing all occurrences of the McDonald's arch
from the movie. The conclusion is that we have to store the movie itself.
Here another data type called “CORE” is proposed in order to refer to the
digitized item directly, without causing any confusion with special attribute
names.
The new definition is:
MOVIE (Title, Year, Producer, Director, Length, Movie_type, Prod_studio, CORE)
By using CORE attributes to refer to the raw data item, the user is able to pose
queries even if he/she is not familiar with the database schema.
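As a rough illustration of the idea, not the paper's implementation, the CORE attribute can be thought of as a binary column stored next to the descriptive attributes, so the raw digitized item is reachable from the same relation (the schema and values below are illustrative placeholders, using SQLite for concreteness):

import sqlite3

# Sketch: the CORE attribute stored as a BLOB next to the descriptive
# attributes, so queries can reach the raw digitized movie directly.

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE MOVIE (
    title TEXT, year INTEGER, producer TEXT, director TEXT,
    length INTEGER, movie_type TEXT, prod_studio TEXT,
    core BLOB)""")

raw_movie = b"\x00\x01..."             # stand-in for the digitized movie itself
con.execute("INSERT INTO MOVIE VALUES (?,?,?,?,?,?,?,?)",
            ("Example Movie", 1999, "Producer X", "Director Y",
             120, "drama", "Studio Z", raw_movie))

core, = con.execute("SELECT core FROM MOVIE WHERE title = ?",
                    ("Example Movie",)).fetchone()
print(len(core))                        # the raw item is available for further operations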
An example of modeling the WWW as a Multimedia Database using CORE
entity-relationship diagrams is given in the paper; I will not go into details but will
say a few words about it.
After creating the CORE ER diagram (CER), the table definitions are:
HTMLDoc (h_url, title, type, length, lastmodify, CORE)
Links (l_url, label)
Include (h_url, l_url)
After defining the following methods:
 contains (HTMLDoc.title, string)
 reach_by (HTMLDoc.url, url_to, by_n, by_type)
 mentions (HTMLDoc, string)
 linktype (HTMLDoc, url)
Many queries can be performed on this Multimedia Database like:
“Starting from the Computer Science home page, find all documents that are linked
through paths of lengths two or less containing only local links. Keep only the
documents containing the string “database” in their title.”
SELECT Links.l_url
FROM HTMLDoc, Links, Include
WHERE substring(“database”, HTMLDoc.title)
AND HTMLDoc.h_url = Include.h_url
AND Links.l_url = Include.l_url
AND reach_by(“http://cs.bilkent.edu.tr”, Links.l_url, 2, local)
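A minimal sketch of what a reach_by-style check could compute, assuming an in-memory link table and taking "local" to mean same-host links (the URLs and link table are illustrative; only the reachability part of the query is shown):

from urllib.parse import urlparse

# Sketch of a reach_by-style check: is target reachable from start through
# paths of length <= max_len using only local (same-host) links?

links = {
    "http://cs.bilkent.edu.tr": ["http://cs.bilkent.edu.tr/courses",
                                 "http://www.bilkent.edu.tr"],
    "http://cs.bilkent.edu.tr/courses": ["http://cs.bilkent.edu.tr/courses/cs532"],
}

def reach_by(start, target, max_len, link_type="local"):
    frontier, host = {start}, urlparse(start).netloc
    for _ in range(max_len):
        frontier = {dst for src in frontier for dst in links.get(src, [])
                    if link_type != "local" or urlparse(dst).netloc == host}
        if target in frontier:
            return True
    return False

print(reach_by("http://cs.bilkent.edu.tr",
               "http://cs.bilkent.edu.tr/courses/cs532", 2))   # True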
WebSQL, which was proposed before this paper, is suitable for this purpose.
By adding two more methods,
-displayDoc (HTMLDoc)
-displayObj (WebObject, properties.position, properties.size, properties.props)
additional queries can be performed, such as:
“List all documents that have a video clip or picture labelled ‘Atatürk’.”
SELECT HTMLDoc.h_url
FROM HTMLDoc, WebObject, Include
WHERE HTMLDoc.h_url = Include.h_url
AND WebObject.w_url = Include.w_url
AND (WebObject.objectType = “IMAGE” OR WebObject.objectType = “VIDEO”)
AND WebObject.label = “Atatürk”
To support such queries the CER diagram is extended with a WebObject entity
(the extended diagram is given in the paper). But what about displaying the answers?
8. A RESULT DISPLAY PROPOSAL
In conventional databases the answer is presented either as a table or with the
help of forms. It is believed that, since Multimedia Databases have web-like front
ends, there should be some way for users to specify how results are displayed.
Here another reserved word, “DISPLAY”, is proposed for this purpose, even if it is
not mentioned in the queries.
This has been implemented as SQL+D, which allows us to specify how the answer
to a query posed to a multimedia database should be displayed.
Example: Consider a database for a video rental store containing movie titles and
other general information of the movies, plus a movie clip and a picture of the
promotional poster. Also available is a list of the actors in a movie, and other
information about the actors, including their picture. The Schema looks as follows:
MOVIE (Available, title, director, producer, date, classification, rating, CORE, poster)
MOVIE_ACTORS (title, name, role)
ACTORS (name, dob, biography, picture)
Here CORE is a video (MPEG or AVI) and poster is an image (in TIF, GIF, JPEG, etc.).
Pose the following query:
“List all actors in ‘Gone with the wind’ with their pictures and biographies.”
SELECT MOVIE_ACTORS.name, ACTORS.biography, ACTORS.picture
FROM MOVIE_ACTORS, ACTORS
WHERE MOVIE_ACTORS.title = ”Gone with the wind” AND
      ACTORS.name = MOVIE_ACTORS.name
DISPLAY PANEL main, PANEL info ON main (east)
WITH MOVIE_ACTORS.name AS list ON main (west),
     ACTORS.picture AS image ON info (north),
     ACTORS.biography AS text ON info (south)
This is a standard SQL query up to and including the WHERE clause; thereafter the
DISPLAY clause is used to specify where the data is to be placed on the screen. Since
the DISPLAY clause operates on the data extracted by the query, only the attribute
names included in the SELECT clause can be used inside the DISPLAY clause. The
resulting screen layout is shown in the original paper.
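As a rough illustration of the kind of layout such a DISPLAY clause describes (this is not SQL+D itself; the widget toolkit and the data are assumptions made for the example), the panels of the query above could be rendered like this:

import tkinter as tk

# Sketch of the panel layout described by the DISPLAY clause: a main panel
# with the actor-name list on the west and an "info" panel on the east that
# stacks the picture (north) above the biography text (south).
# The data shown is illustrative.

root = tk.Tk()
main = tk.Frame(root)
main.pack(fill="both", expand=True)

names = tk.Listbox(main)
names.pack(side="left", fill="y")                  # west: list of actor names
for n in ("Actor A", "Actor B"):
    names.insert("end", n)

info = tk.Frame(main)
info.pack(side="right", fill="both", expand=True)  # east: info panel
picture = tk.Label(info, text="[picture]")
picture.pack(side="top")                           # north: picture placeholder
biography = tk.Text(info, height=5)
biography.pack(side="bottom")                      # south: biography text
biography.insert("1.0", "Biography of the selected actor...")

root.mainloop()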
Many more examples are given in the paper, such as displays with PLAY buttons
(each with an associated trigger) for viewing video clips, or maps showing the location
of the selected facility.
9. CONCLUSION
There are many proposed systems to make query processing in multimedia
databases easier. Although all of them are useful in themselves, some coordination is
needed in order to evaluate the most successful ones and combine the theoretical and
practical issues hidden in them. Object oriented modeling is necessary for multimedia
database design. The query language is derived from traditional query languages and
extended to support:
 Partial match retrieval
 Expressions of conditions on the values of features
 Possibilities to take into account the imprecision of the interpretation of the
content of the Multimedia object.
The use of automated feature extraction methods improves image detection and
query effectiveness. Extensions for displaying query results improve the flexibility of
multimedia database querying. By using spatial image querying mechanisms,
we can improve effectiveness over non-spatial image query mechanisms.
Unfortunately, there is still no good answer for image queries that search for a picture
taken under different lighting and weather conditions, so the problem of distortion
continues to affect the effectiveness of multimedia databases.
REFERENCES:
1. Conceptual Modeling and Querying in Multimedia Databases.
CHITTA BARAL, GRACIELA GONZALEZ, TRAN SON, Multimedia Tools and
Applications, Vol 7, Issue 1-2, 1998, pp 37-66.
2. An Approach to a Content-Based Retrieval of Multimedia Data.
GIUSEPPE AMATO, GIOVANNI MAINETTO, PASQUALE SAVINO, Multimedia
Tools and Applications, Vol 7, Issue 1-2, 1998, pp 9-36.
3. Integrated Spatial and Feature Image Query.
JOHN R. SMITH, SHIH-FU CHANG, Multimedia Systems, Vol 7, Issue 2, 1998, pp
129-140.
4. An Image Database System with Support for Traditional Alphanumeric Queries
and Content-Based Queries by Example.
DANIEL L. SWETS, YOGESH PATHAK, JOHN J. WENG, Multimedia Tools and
Applications, Vol 7, Issue 3, 1998, pp 181-212.