Web Image Re-Ranking Using Query-Specific Semantic
Signatures
ABSTRACT:
Image re-ranking, as an effective way to improve the results of web-based image search, has been adopted by current commercial search engines such as Bing and Google. Given a query keyword, a pool of images is first retrieved based on textual information. By asking the user to select a query image from the pool, the remaining images are re-ranked based on their visual similarities to the query image. A major challenge is that the similarities of visual features do not correlate well with images' semantic meanings, which reflect users' search intentions. Recent work proposed matching images in a semantic space that uses attributes or reference classes closely related to the semantic meanings of images as its basis. However, learning a universal visual semantic space to characterize highly diverse images from the web is difficult and inefficient. In this paper, we propose a novel image re-ranking framework that automatically learns different semantic spaces for different query keywords offline. The visual features of images are projected into their related semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space specified by the query keyword. The proposed query-specific semantic signatures significantly improve both the accuracy and efficiency of image re-ranking. The original visual features of thousands of dimensions can be projected to semantic signatures as short as 25 dimensions. Experimental results show a 25-40 percent relative improvement in re-ranking precision compared with state-of-the-art methods.
EXISTING SYSTEM:
Web-scale image search engines mostly use keywords as queries and rely on surrounding text to search images. They suffer from the ambiguity of query keywords, because it is hard for users to accurately describe the visual content of target images using keywords alone. For example, using "apple" as a query keyword, the retrieved images belong to different categories (also called concepts in this paper), such as "red apple," "apple logo," and "apple laptop."
This is the most common form of text search on the web. Most search engines perform text query and retrieval using keywords. Such keyword-based searches often return results from blogs or other discussion boards, and users are rarely satisfied with these results because of a lack of trust in such sources, low precision, and a high recall rate. Early search engines offered little disambiguation of search terms. User intention identification therefore plays an important role in an intelligent semantic search engine.
DISADVANTAGES OF EXISTING SYSTEM:
* Some popular visual features are high-dimensional, and efficiency is not satisfactory if they are matched directly.
* Another major challenge is that, without online training, the similarities of low-level visual features may not correlate well with images' high-level semantic meanings, which reflect users' search intentions.
PROPOSED SYSTEM:
In this paper, a novel framework is proposed for web image re-ranking. Instead of manually defining a universal concept dictionary, it learns different semantic spaces for different query keywords individually and automatically. The semantic space related to the images to be re-ranked can be significantly narrowed down by the query keyword provided by the user. For example, if the query keyword is "apple," the concepts of "mountain" and "Paris" are irrelevant and should be excluded. Instead, the concepts of "computer" and "fruit" will be used as dimensions to learn the semantic space related to "apple." The query-specific semantic spaces can model the images to be re-ranked more accurately, since they exclude a potentially unlimited number of irrelevant concepts, which serve only as noise and degrade re-ranking performance in both accuracy and computational cost. The visual and textual features of images are then projected into their related semantic spaces to obtain semantic signatures. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the semantic space of the query keyword. The semantic correlation between concepts is explored and incorporated when computing the similarity of semantic signatures.
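A minimal sketch of the projection and online re-ranking steps, assuming the offline stage has already produced one classifier per reference class. The random prototype vectors below stand in for trained classifiers, and the class names, feature size, and L1 similarity are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Offline (per query keyword, e.g. "apple") ---------------------------
# Reference classes discovered by keyword expansion; each prototype is a
# stand-in for a trained one-vs-rest classifier over visual features.
reference_classes = ["apple fruit", "apple logo", "apple laptop"]
feat_dim = 512                                 # assumed visual feature size
prototypes = rng.random((len(reference_classes), feat_dim))

def semantic_signature(visual_feat):
    """Project a visual feature into the query-specific semantic space:
    one score per reference class, normalized to sum to 1."""
    scores = prototypes @ visual_feat
    scores = np.exp(scores - scores.max())     # softmax, numerically stable
    return scores / scores.sum()

# --- Online ---------------------------------------------------------------
pool = rng.random((50, feat_dim))              # images retrieved by keyword
query_image = pool[0]                          # the user-selected query image

q_sig = semantic_signature(query_image)
# Similarity of two images = negated L1 distance of their short signatures.
sims = [-np.abs(semantic_signature(f) - q_sig).sum() for f in pool]
ranking = np.argsort(sims)[::-1]               # most similar first
print(int(ranking[0]))  # 0: the query image is most similar to itself
```

Only the short signature comparison happens online; all classifier training is done offline per keyword, which is what makes the online stage fast.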
We also propose a semantic-web-based search engine, called an Intelligent Semantic Web Search Engine. We use the power of XML meta-tags deployed on a web page to search the queried information. The XML page consists of built-in and user-defined tags, and the metadata of the pages is extracted from this XML into RDF. Our practical results show that the proposed approach takes very little time to answer queries while providing more accurate information.
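A toy sketch of the meta-tag extraction step. The page format, tag names, and URL below are invented for illustration; a real system would emit proper RDF rather than plain tuples:

```python
import xml.etree.ElementTree as ET

# A hypothetical XML page description with user-defined meta-tags.
page_xml = """
<page url="http://example.com/apple">
  <meta name="title">Apple fruit nutrition</meta>
  <meta name="category">fruit</meta>
  <meta name="keywords">apple, nutrition, diet</meta>
</page>
"""

def extract_triples(xml_text):
    """Turn each meta-tag into an RDF-style (subject, predicate, object) triple."""
    root = ET.fromstring(xml_text)
    subject = root.get("url")
    return [(subject, m.get("name"), m.text.strip()) for m in root.findall("meta")]

triples = extract_triples(page_xml)
print(triples[1])  # ('http://example.com/apple', 'category', 'fruit')
```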
ADVANTAGES OF PROPOSED SYSTEM:
* The visual features of images are projected into their related semantic spaces, which are automatically learned through keyword expansion offline.
* Our experiments show that the semantic space of a query keyword can be described by just 20-30 concepts (also referred to as "reference classes"). Therefore the semantic signatures are very short, and online image re-ranking becomes extremely efficient. Because of the large number of keywords and the dynamic variations of the web, the semantic spaces of query keywords are learned automatically through keyword expansion.
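A toy illustration of keyword expansion: the words that most often co-occur with the query keyword in the surrounding text of retrieved images become candidate reference classes for its semantic space. The document texts below are invented for illustration:

```python
from collections import Counter
import re

# Hypothetical surrounding text of images retrieved for the query "apple".
docs = [
    "red apple fruit fresh orchard",
    "apple logo company brand",
    "apple laptop computer macbook",
    "fresh apple fruit juice",
    "apple computer laptop review",
]

def expand_keyword(query, documents, k=3):
    """Pick the k words that most often co-occur with the query keyword;
    each expansion defines one reference class (one dimension) of the
    query-specific semantic space."""
    counts = Counter()
    for doc in documents:
        words = re.findall(r"[a-z]+", doc.lower())
        if query in words:
            counts.update(w for w in words if w != query)
    return [w for w, _ in counts.most_common(k)]

print(expand_keyword("apple", docs))
```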
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
* System : Pentium IV 2.4 GHz
* Hard Disk : 40 GB
* Floppy Drive : 1.44 MB
* Monitor : 15" VGA Colour
* Mouse : Logitech
* RAM : 512 MB
SOFTWARE REQUIREMENTS:
* Operating System : Windows XP/7
* Coding Language : ASP.NET, C#.NET
* Tool : Visual Studio 2010
* Database : SQL Server 2008
REFERENCE:
Xiaogang Wang, Shi Qiu, Ke Liu, and Xiaoou Tang, "Web Image Re-Ranking Using Query-Specific Semantic Signatures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 4, April 2014.