Geo-Spatial Matching for Image Retrieval
Abstract
Every day, the average computer user faces a growing flow of multimedia information, particularly via the Internet. But this ocean of information would be useless without the ability to manipulate, classify, archive and access it quickly and selectively. While text indexing is ubiquitous, it is often limited, tedious and subjective for describing image content.
One of the main problems is the difficulty of locating a desired image in a large and varied collection. While it is perfectly feasible to identify the desired image in a small collection simply by browsing, more effective techniques are needed for collections containing thousands of items.
CONVENTIONAL TECHNIQUES (TEXT ANNOTATION):
To date, image and video storage and retrieval systems have typically relied on human-supplied textual annotations to enable indexing and searching. Text-based indexes for large image and video archives are time-consuming to create: each image and video scene must be analyzed manually by a domain expert so that its contents can be described textually. Language-based descriptions, however, can never capture the visual content sufficiently. For example, a description of the overall semantic content of an image does not include an enumeration of all the objects and their characteristics, which may be of interest later.
A content mismatch occurs when the information that the domain expert ascertains from an image differs from the information that the user is interested in. A content mismatch is catastrophic in the sense that little can be done to approximate or recover the omitted annotations. In addition, a language mismatch can occur when the user and the domain expert use different languages or phrases. Because text-based matching provides only hit-or-miss searching, when the user does not specify the right keywords the desired images are unreachable without examining the entire collection.
CONTENT BASED RETRIEVAL:
The problems with text-based access to images have prompted increasing interest in the development of image-based solutions, more often referred to as Content Based Image Retrieval (CBIR). CBIR relies on the characterization of primitive features such as color, shape and texture that can be automatically extracted from the images themselves.
Queries to a CBIR system are most often expressed as visual exemplars of the type of image or image attribute being sought. For example, the user may submit a sketch, click on a texture palette, or select a particular shape of interest. The system then identifies those stored images with a high degree of similarity to the requested feature.
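Whatever the chosen feature, retrieval ultimately reduces to ranking stored feature vectors by their distance to the query's vector. A minimal sketch of that idea, assuming every image has already been reduced to a fixed-length feature vector (the function name and the Euclidean metric here are illustrative choices, not any particular system's):

    import numpy as np

    def rank_by_similarity(query_feature, database_features):
        """Rank stored images by distance to the query's feature vector."""
        scored = [
            (image_id, float(np.linalg.norm(query_feature - feature)))
            for image_id, feature in database_features.items()
        ]
        scored.sort(key=lambda pair: pair[1])  # smaller distance = more similar
        return scored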
EXISTING SYSTEMS:
IBM's QBIC system was the first commercial CBIR system and is probably the best known of all CBIR systems. QBIC allows users to retrieve images by color, shape and texture, and provides several query methods: Simple, Multi-feature and Multi-pass. In the Simple method, a query is processed using only one feature. A Multi-feature query involves more than one feature, with all features given equal weight during the search. A Multi-pass query uses the output of a previous query as the basis for further refinement. Users can draw and specify the color and texture patterns desired in target images. In QBIC, color similarity is computed with a quadratic-form metric on k-element color histograms, and average colors are used as filters to improve query efficiency. Its shape function retrieves images by shape area, circularity, eccentricity and major-axis orientation. Its texture function retrieves images by global coarseness, contrast and directionality features.
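QBIC's quadratic-form color distance is d(h1, h2) = (h1 - h2)^T A (h1 - h2), where the k x k matrix A encodes the perceptual similarity between histogram bins, so visually close colors contribute less to the distance than unrelated ones. A minimal sketch, with A left as an input since its construction varies across implementations:

    import numpy as np

    def quadratic_histogram_distance(h1, h2, A):
        """Quadratic-form distance between two k-element color histograms.

        A[i, j] is the similarity between the colors of bins i and j;
        A = identity reduces this to squared Euclidean distance.
        """
        d = h1 - h2
        return float(d @ A @ d)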
The Photobook system, developed at the Massachusetts Institute of Technology, allows retrieving images by color, shape and texture features. The system provides a set of matching algorithms, including Euclidean, Mahalanobis, divergence, vector-space angle, histogram, Fourier peak, and wavelet tree distances as distance metrics. In its most recent version, users can define their own matching algorithms.
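Two of the listed metrics, sketched for concreteness (in practice the inverse covariance matrix for the Mahalanobis distance would be estimated from the feature vectors of the whole collection):

    import numpy as np

    def euclidean_distance(x, y):
        return float(np.linalg.norm(x - y))

    def mahalanobis_distance(x, y, cov_inv):
        """Mahalanobis distance; cov_inv is the inverse covariance of the
        feature distribution, which down-weights correlated dimensions."""
        d = x - y
        return float(np.sqrt(d @ cov_inv @ d))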
The IMatch system allows users to retrieve images by color, texture and shape. IMatch supports several similarity query modes: color similarity, color and shape (quick), color and shape (fuzzy), and color distribution.
PROPOSED SYSTEM:
Google, currently the most widely used image search engine, relies on this textual-annotation style of implementation. With lakhs of images added to the image database, few are annotated with proper descriptions, so many relevant images go unmatched.
The most widely accepted content-based image retrieval techniques use the Quadratic Distance and Integrated Region Matching (IRM) methods. The Quadratic Distance method, though it yields a metric distance, is computationally expensive. Conventional IRM is non-metric and hence gives results that are not optimal. Our system uses a modified IRM method that overcomes the disadvantages of both of the above methods. The color feature is extracted using the commonly adopted histogram technique.
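Our modification is not detailed in this abstract. For reference, conventional IRM matches two segmented images by greedily pairing regions in order of feature distance ("most similar highest priority") and weighting each pairing by region significance. A minimal sketch of that baseline, assuming each image has already been segmented into regions with feature vectors and area-based weights summing to 1:

    import numpy as np

    def irm_distance(regions1, weights1, regions2, weights2):
        """Conventional IRM: greedy 'most similar highest priority' matching.

        Region pairs are visited in order of increasing feature distance,
        each absorbing as much weight as both regions still have available.
        """
        w1, w2 = list(weights1), list(weights2)
        pairs = sorted(
            (float(np.linalg.norm(np.asarray(r1) - np.asarray(r2))), i, j)
            for i, r1 in enumerate(regions1)
            for j, r2 in enumerate(regions2)
        )
        total = 0.0
        for dist, i, j in pairs:
            s = min(w1[i], w2[j])  # significance credit still unassigned
            if s > 0.0:
                total += s * dist
                w1[i] -= s
                w2[j] -= s
        return total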
We also provide an interface where the user can supply a query image as input. The color feature is automatically extracted from the query image and compared against the images in the database, retrieving the matching images.
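Putting the pieces together, a minimal end-to-end sketch of that interface, assuming a plain RGB histogram as the color feature and Euclidean distance for comparison (the file-path interface and the 8-bins-per-channel layout are illustrative assumptions):

    import numpy as np
    from PIL import Image

    def color_histogram(path, bins_per_channel=8):
        """Normalized RGB color histogram (8 x 8 x 8 = 512 bins)."""
        pixels = np.asarray(Image.open(path).convert("RGB")).reshape(-1, 3)
        hist, _ = np.histogramdd(pixels, bins=(bins_per_channel,) * 3,
                                 range=((0, 256),) * 3)
        hist = hist.flatten()
        return hist / hist.sum()

    def query(query_path, database_paths, top_k=10):
        """Return the top_k database images closest to the query in color."""
        q = color_histogram(query_path)
        scored = [(p, float(np.linalg.norm(q - color_histogram(p))))
                  for p in database_paths]
        scored.sort(key=lambda pair: pair[1])
        return scored[:top_k]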