
A metasearch engine (or search aggregator) is an online information retrieval tool that uses the data of a web search engine to produce its own results.
Metasearch engines take input from a user and immediately query search engines for results. Sufficient data is gathered, ranked, and presented to the users.
Problems such as spamming reduce the accuracy and precision of results. The process of fusion aims to improve the engineering of a metasearch engine.
Examples of metasearch engines include Skyscanner and Kayak.com, which aggregate search results of online travel agencies and provider websites.
SearXNG is a generic free and open-source search software which aggregates results from internet search engines and other sources like Wikipedia, and is offered for free by more than 70 SearXNG providers.
History
The first person to incorporate the idea of meta searching was University of Washington student Eric Selberg, who published a paper about his MetaCrawler experiment in 1995. The search engine is still usable as of 2024.
HotBot, then owned by Wired, launched on May 20, 1996, drawing its search results from the Inktomi and Direct Hit databases. It was known for its fast results and for its ability to search within search results. After being bought by Lycos in 1998, development of the search engine stagnated and its market share fell drastically. After going through a few alterations, HotBot was redesigned into a simplified search interface, with its features being incorporated into Lycos' website redesign.
In 1997, Daniel Dreilinger published a paper on his experimental metasearch engine, SavvySearch, which was able to automatically select the correct search engine to prioritize based on prior experience.
A metasearch engine called Anvish was developed by Bo Shu and Subhash Kak in 1999; the search results were sorted using instantaneously trained neural networks. This was later incorporated into another metasearch engine called Solosearch.
In August 2000, India got its first metasearch engine when HumHaiIndia.com was launched. It was developed by the then 16-year-old Sumeet Lamba. The website was later rebranded as Tazaa.com.
Ixquick is a search engine known for its privacy policy statement. Developed and launched in 1998 by David Bodnick, it is owned by Surfboard Holding BV. In June 2006, Ixquick began to delete the private details of its users, following the same process as Scroogle. Ixquick's privacy policy includes no recording of users' IP addresses, no identifying cookies, no collection of personal data, and no sharing of personal data with third parties. It also uses a unique ranking system in which results are ranked by stars: the more stars a result has, the more search engines agreed on it.
In April 2005, Dogpile, then owned and operated by InfoSpace, Inc., collaborated with researchers from the University of Pittsburgh and Pennsylvania State University to measure the overlap and ranking differences of leading Web search engines in order to gauge the benefits of using a metasearch engine to search the web. Results found that from 10,316 random user-defined queries from Google, Yahoo!, and Ask Jeeves, only 3.2% of first-page search results were the same across those search engines for a given query. Another study later that year, using 12,570 random user-defined queries from Google, Yahoo!, MSN Search, and Ask Jeeves, found that only 1.1% of first-page search results were the same across those search engines for a given query.
Advantages
By sending multiple queries to several other search engines, a metasearch engine extends the coverage of the topic and allows more information to be found. Metasearch engines use the indexes built by other search engines, aggregating and often post-processing results in unique ways. A metasearch engine has an advantage over a single search engine because more results can be retrieved with the same amount of effort.
It also spares users the work of typing searches into different engines individually to look for resources.
Metasearching is also a useful approach if the purpose of the user's search is to get an overview of the topic or to get quick answers. Instead of having to go through multiple search engines like Yahoo! or Google and comparing results, metasearch engines are able to quickly compile and combine results. They can do this either by listing results from each engine queried with no additional post-processing (Dogpile) or by analyzing the results and ranking them by their own rules (Ixquick, MetaCrawler, and Vivisimo).
A metasearch engine can also hide the searcher's IP address from the search engines queried, thus providing privacy for the search.
Disadvantages
Metasearch engines are not capable of parsing query forms or able to fully translate query syntax. The number of hyperlinks generated by metasearch engines is limited, and therefore they do not provide the user with the complete results of a query.
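The query-translation limitation can be made concrete with a short sketch. The two engine syntaxes below are hypothetical stand-ins: a phrase search maps cleanly to both, but a proximity operator supported by one engine has no equivalent in the other, so the metasearch engine must degrade the query and lose precision.

```python
# Hypothetical sketch of the query-translation problem. Neither engine
# syntax here is real; they stand in for incompatible search dialects.

def to_engine_a(terms, phrase=None, near=None):
    """Engine A supports a NEAR/n proximity operator."""
    q = " ".join(terms)
    if phrase:
        q += f' "{phrase}"'
    if near:
        word1, word2, dist = near
        q += f" NEAR/{dist}({word1},{word2})"
    return q.strip()

def to_engine_b(terms, phrase=None, near=None):
    """Engine B has no proximity operator, so the metasearch engine
    falls back to a plain conjunction of the two words, losing the
    precision the user asked for."""
    q = " ".join(terms)
    if phrase:
        q += f' "{phrase}"'
    if near:
        word1, word2, _ = near
        q += f" {word1} {word2}"
    return q.strip()

query = dict(terms=["ranking"], phrase="metasearch engine",
             near=("fusion", "score", 5))
print(to_engine_a(**query))  # ranking "metasearch engine" NEAR/5(fusion,score)
print(to_engine_b(**query))  # ranking "metasearch engine" fusion score
```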
The majority of metasearch engines do not provide more than ten linked files from a single search engine, and generally do not interact with larger search engines for results. Pay-per-click links are prioritised and are normally displayed first.
Metasearching also gives the illusion that there is more coverage of the topic queried, particularly if the user is searching for popular or commonplace information, since it is common to end up with multiple identical results from the queried engines. It is also harder for users to send advanced search syntax with the query, so results may not be as precise as when a user uses the advanced search interface of a specific engine. As a result, many metasearch engines use only simple searching.
Operation
A metasearch engine accepts a single search request from the user. This search request is then passed on to another search engine's database. A metasearch engine does not create a database of web pages but generates a federated database system of data integration from multiple sources.
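The basic flow can be sketched in a few lines of Python. This is a minimal illustration rather than any real engine's implementation; the two engine clients are hypothetical stand-ins for calls to real search APIs.

```python
# Minimal metasearch fan-out: send one query to several engines in
# parallel, then merge the result lists, dropping duplicate URLs.
from concurrent.futures import ThreadPoolExecutor

def search_engine_a(query):
    # Stand-in for a real engine API call returning (url, title) pairs.
    return [("https://example.com/a", "Result A"),
            ("https://example.com/shared", "Shared result")]

def search_engine_b(query):
    return [("https://example.com/shared", "Shared result"),
            ("https://example.com/b", "Result B")]

def metasearch(query):
    engines = [search_engine_a, search_engine_b]
    # Query every engine concurrently with the same request.
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda engine: engine(query), engines))
    # Merge, keeping the first occurrence of each URL.
    seen, merged = set(), []
    for results in result_lists:
        for url, title in results:
            if url not in seen:
                seen.add(url)
                merged.append((url, title))
    return merged

print(metasearch("metasearch engine"))
```

Querying the engines concurrently keeps total latency close to that of the slowest single engine rather than the sum of all of them.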
Since every search engine is unique and has different algorithms for generating ranked data, duplicates will also be generated. To remove duplicates, a metasearch engine processes this data and applies its own algorithm, as the sketch above illustrates with its merge-and-deduplicate step. A revised list is produced as an output for the user. When a metasearch engine contacts other search engines, these search engines will respond in one of three ways:
* They will cooperate and provide complete access to the interface for the metasearch engine, including private access to the index database, and will inform the metasearch engine of any changes made to the index database;
* They can behave in a non-cooperative manner, whereby they neither deny nor provide access to interfaces;
* They can be completely hostile and refuse the metasearch engine any access to their database, in serious circumstances pursuing legal methods.
Architecture of ranking
Web pages that are highly ranked on many search engines are likely to be more relevant in providing useful information. However, all search engines have different ranking scores for each website, and most of the time these scores are not the same. This is because search engines prioritise different criteria and methods for scoring, so a website might appear highly ranked on one search engine and lowly ranked on another. This is a problem because metasearch engines rely heavily on the consistency of this data to generate reliable rankings.
Fusion
A metasearch engine uses the process of fusion to filter data for more efficient results. The two main fusion methods used are Collection Fusion and Data Fusion.
* Collection Fusion: also known as distributed retrieval, this deals specifically with search engines that index unrelated data. To determine how valuable these sources are, Collection Fusion looks at the content and then ranks the data on how likely it is to provide relevant information in relation to the query. From what is generated, Collection Fusion is able to pick out the best resources from the rank. These chosen resources are then merged into a list.
* Data Fusion: deals with information retrieved from search engines that index common data sets. The process is very similar: the initial rank scores of data are merged into a single list, after which the original ranks of each of these documents are analysed. Data with high scores indicate a high level of relevancy to a particular query and are therefore selected. To produce a list, the scores must be normalized using algorithms such as CombSum, because search engines adopt different ranking algorithms and the scores they produce are therefore not directly comparable.
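A minimal sketch of CombSum-style data fusion follows, assuming each engine returns a mapping from document to an engine-specific relevance score. The min-max normalization step is what makes otherwise incomparable scores summable; real systems differ in how they normalize and weight.

```python
# CombSum-style data fusion: normalize each engine's scores into a
# common range, then sum them per document.

def min_max_normalize(scores):
    """Rescale one engine's scores into [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # guard against identical scores
    return {doc: (s - lo) / span for doc, s in scores.items()}

def comb_sum(result_sets):
    """Fuse several engines' results by summing normalized scores."""
    fused = {}
    for scores in result_sets:
        for doc, s in min_max_normalize(scores).items():
            fused[doc] = fused.get(doc, 0.0) + s
    # Documents scored highly by several engines rise to the top.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

engine_1 = {"doc_a": 0.9, "doc_b": 0.4, "doc_c": 0.1}      # scores in [0, 1]
engine_2 = {"doc_b": 310.0, "doc_a": 250.0, "doc_d": 40.0} # arbitrary scale
print(comb_sum([engine_1, engine_2]))
```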
Spamdexing
Spamdexing is the deliberate manipulation of search engine indexes. It uses a number of methods to manipulate the relevance or prominence of resources indexed in a manner unaligned with the intention of the indexing system. Spamdexing can be very distressing for users and problematic for search engines because the contents returned by searches have poor precision, which eventually makes the search engine unreliable for the user. To tackle spamdexing, search robot algorithms are made more complex and are changed almost every day to eliminate the problem.
Spamdexing is a major problem for metasearch engines because it tampers with the web crawler's indexing criteria, which are heavily relied upon to format ranking lists. Spamdexing manipulates the natural ranking system of a search engine and places websites higher on the ranking list than they would naturally be placed. There are three primary methods used to achieve this:
Content spam
Content spam comprises techniques that alter the logical view that a search engine has of the page's contents. Techniques include:
* Keyword Stuffing – Calculated placements of keywords within a page to raise the keyword count, variety, and density of the page
* Hidden/Invisible Text – Unrelated text disguised by making it the same color as the background, using a tiny font size, or hiding it within the HTML code
* Meta-tag Stuffing – Repeating keywords in meta tags and/or using keywords unrelated to the site's content
* Doorway Pages – Low-quality webpages with little content but related keywords or phrases
* Scraper Sites – Sites built with programs that copy content from other websites to create content of their own
* Article Spinning – Rewriting existing articles as opposed to copying content from other sites
* Machine Translation – Uses machine translation to rewrite content in several different languages, resulting in illegible text
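To illustrate the signal that keyword stuffing tries to inflate, the sketch below computes a simple keyword-density measure. The 10% threshold is invented for the example; real engines use far more sophisticated, unpublished heuristics.

```python
# Naive keyword-density check, the kind of signal keyword stuffing
# tries to game. The threshold is illustrative only.
import re

def keyword_density(text, keyword):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w == keyword.lower())
    return hits / len(words)

page = "cheap flights cheap flights book cheap flights today cheap flights"
density = keyword_density(page, "cheap")
# Natural prose rarely exceeds a few percent for any single word.
print(f"density = {density:.0%}, looks stuffed = {density > 0.10}")
```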
Link spam
Link spam consists of links between pages that are present for reasons other than merit. Techniques include:
* Link-building Software – Automating the search engine optimization (SEO) process
* Link Farms – Pages that reference each other (also known as mutual admiration societies)
* Hidden Links – Placing hyperlinks where visitors won't or can't see them
* Sybil Attack – Forging of multiple identities for malicious intent
* Spam Blogs – Blogs created solely for commercial promotion and the passage of link authority to target sites
* Page Hijacking – Creating a copy of a popular website with similar content that redirects web surfers to unrelated or even malicious websites
* Buying Expired Domains – Buying expiring domains and replacing pages with links to unrelated websites
* Cookie Stuffing – Placing an affiliate tracking cookie on a website visitor's computer without their knowledge
* Forum Spam – Websites that can be edited by users to insert links to spam sites
Cloaking
Cloaking is an SEO technique in which different materials and information are sent to the web crawler and to the web browser. It is commonly used as a spamdexing technique because it can trick search engines into either visiting a site that is substantially different from the search engine's description of it or giving a certain site a higher ranking.
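The mechanism can be sketched with the Python standard library; the crawler User-Agent substrings are illustrative. The server inspects the User-Agent header and serves keyword-rich content to crawlers while showing human visitors something entirely different.

```python
# Minimal cloaking server: content depends on the User-Agent header.
from http.server import BaseHTTPRequestHandler, HTTPServer

CRAWLER_AGENTS = ("Googlebot", "bingbot")  # illustrative substrings

class CloakingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        if any(bot in agent for bot in CRAWLER_AGENTS):
            # Crawlers are fed the page the spammer wants indexed...
            body = b"<html>In-depth article about metasearch engines</html>"
        else:
            # ...while human visitors see an unrelated page.
            body = b"<html>Unrelated advertising page</html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CloakingHandler).serve_forever()
```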
See also
* Federated search
* List of metasearch engines
* Metabrowsing
* Multisearch
* Search aggregator
* Search engine optimization