In its widest sense, Internet research comprises any kind of research done on the Internet or the World Wide Web. Unlike simple fact-checking or web scraping, it often involves synthesizing from diverse sources and verifying the credibility of each. In a stricter sense, "Internet research" refers to conducting scientific research using online tools and techniques; the discipline that studies Internet research thus understood is known as online research methods or Internet-mediated research. As with other kinds of scientific research, it involves an ethical dimension. Internet research can also be interpreted as the part of Internet studies that investigates the social, ethical, economic, managerial and political implications of the Internet.
Characterization
Internet research has had a profound impact on the way ideas are formed and knowledge is created. Through web search, pages with some relation to a given search entry can be visited, analyzed, and compiled. In addition, the Web can be used to connect with relevant sources of primary data (e.g., experts) and conduct online interviews. Communication tools used for this purpose on the Web include email (including mailing lists), online discussion groups (including message boards and BBSes), and other personal communication facilities (instant messaging, IRC, newsgroups, etc.).
Issues
Internet research can provide quick, immediate, and worldwide
access to information, although results may be affected by unrecognized bias, difficulties in verifying a writer's
credentials (and therefore the accuracy or pertinence of the information obtained), and whether the researcher has sufficient skill to draw meaningful results from the abundance of material that is typically available. The first resources that are retrieved may not be the most suitable resources to answer a particular question. Popularity is often a factor used in structuring Internet search results, but popular information is not always the most correct or representative of the breadth of knowledge and opinion on a given topic.
Related activities
Internet research is distinct from library research, in that libraries provide access to institutional publications, which are ideally more reliable; as the review and selection process is transferred online, the line between the two is blurring. The expression "Internet research" resembles "scientific research" because of the word "research", but the two denote different types of activity: the first refers to online information gathering, whereas the second usually implies empirical experiments. It is also distinct from Internet search, which includes looking up specific facts online driven by needs other than research. Internet research is likewise distinct from market research, which is done mainly for profitability.
Related fields
One distinction could be made between Internet studies and Internet research, in that the former is the study of the distinctive sorts of human interaction done on the Internet, whereas Internet research can study aspects other than behavior: technology, outcomes, etc.
Library and information science studies information, especially how it is managed and deployed. Its object is related, but not identical, to that of Internet research, whose object of study is an activity.
Human–computer interaction (HCI) is the study of the design and use of computer technology, with a focus on the interfaces between people (users) and computers.
Search tools
Search tools for finding information on the Internet include web search engines, the search engines on individual websites, browsers' hotkey-activated feature for searching within the current page, meta search engines, web directories, and specialty search services.
Web search
A Web search allows a user to enter a search query, in the form of keywords or a phrase, into a search box or search form; the engine then finds matching results and displays them on the screen. The results are retrieved from a database, using search algorithms that select web pages based on the location and frequency of keywords on them, along with the quality and number of external hyperlinks pointing at them. The database is supplied with data from a web crawler that follows the hyperlinks connecting web pages, copying their content and recording their URLs and other data about each page along the way. The content is then indexed to aid retrieval.
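The crawl-then-index pipeline described above can be sketched as follows. The three-page "web" and its URLs are invented for illustration; a real crawler fetches pages over HTTP, parses their hyperlinks, and respects robots.txt and rate limits.

```python
from collections import defaultdict

# Hypothetical pages: URL -> (text content, outgoing hyperlinks).
PAGES = {
    "https://example.org/a": ("internet research methods", ["https://example.org/b"]),
    "https://example.org/b": ("online research tools", ["https://example.org/c"]),
    "https://example.org/c": ("search engine indexing", []),
}

def crawl(seed):
    """Follow hyperlinks from the seed, copying each page's content."""
    seen, frontier, store = set(), [seed], {}
    while frontier:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        text, links = PAGES[url]   # real code would fetch and parse the page
        store[url] = text          # record the URL and copy the content
        frontier.extend(links)     # follow the outgoing hyperlinks
    return store

def build_index(store):
    """Inverted index: each keyword maps to the set of URLs containing it."""
    index = defaultdict(set)
    for url, text in store.items():
        for word in text.split():
            index[word].add(url)
    return index

index = build_index(crawl("https://example.org/a"))
print(sorted(index["research"]))  # pages a and b both contain "research"
```

A real engine would additionally rank the matching URLs, e.g. by keyword frequency and inbound-link counts, before displaying them.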
Websites' search feature
Websites often have a search engine of their own, for searching just the site's content, often displayed at the top of every page. For example, Wikipedia provides a search engine for exploring its content. A search engine within a website allows a user to focus on its content and find desired information with more precision than with a web search engine. It may also provide access to information on the website that a web search engine does not index.
Specialty search tools
Specialty search tools enable users to find information that conventional search engines and meta search engines cannot access because the content is stored in databases. In fact, the vast majority of information on the Web is stored in databases that require users to go to a specific site and access it through a search form; often, the content is generated dynamically. As a consequence, Web crawlers are unable to index this information. In a sense, this content is "hidden" from search engines, leading to the term invisible or deep Web. Specialty search tools have evolved to provide users with the means to quickly and easily find deep Web content. These specialty tools rely on advanced bot and intelligent agent technologies to search the deep Web and automatically generate specialty Web directories, such as the Virtual Private Library.
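Because deep-Web content sits behind search forms rather than hyperlinks, a specialty tool must submit the form's fields instead of merely following links. A minimal sketch of building such a form submission is below; the endpoint URL and field names (`q`, `page`) are hypothetical, since each site's form defines its own.

```python
from urllib.parse import urlencode
from urllib.request import Request

def form_query(endpoint, query, page=1):
    """Build the POST request a site's search form would send."""
    # Encode the form fields the way a browser would for this hypothetical form.
    body = urlencode({"q": query, "page": page}).encode("ascii")
    return Request(endpoint, data=body, method="POST")

req = form_query("https://archive.example.org/search", "internet research")
print(req.data)  # the encoded form body that would be submitted
```

A real specialty tool would send this request, parse the dynamically generated results page, and repeat for each page of results.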
Internet research software
Internet research software captures information while the user performs Internet research. This information can then be organized in various ways, including tagging and hierarchical trees. The goal is to collect information relevant to a specific research project in one place, so that it can be found and accessed again quickly.
These tools also allow captured content to be edited and annotated, and some can export it to other formats. Other features common to outliners include full-text search, which aids in quickly locating information, and filters, which let the user drill down to see only information relevant to a specific query. Captured and kept information also provides an additional backup, in case web pages and sites disappear or become inaccessible later.
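The capture-tag-retrieve workflow described above can be sketched as a small in-memory store; the data model and example notes are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    """One captured piece of content: its source URL, text, and tags."""
    url: str
    text: str
    tags: set = field(default_factory=set)

class ResearchStore:
    def __init__(self):
        self.notes = []

    def capture(self, url, text, tags=()):
        """Record content at capture time, tagged for later retrieval."""
        self.notes.append(Note(url, text, set(tags)))

    def by_tag(self, tag):
        """Filter: only notes carrying the given tag."""
        return [n.url for n in self.notes if tag in n.tags]

    def search(self, keyword):
        """Full-text search across all captured notes."""
        return [n.url for n in self.notes if keyword.lower() in n.text.lower()]

store = ResearchStore()
store.capture("https://example.org/1", "Deep Web search tools", tags={"search"})
store.capture("https://example.org/2", "Ethics of online interviews", tags={"ethics"})
print(store.search("deep"))  # ['https://example.org/1']
```

Because the captured text is stored locally, it remains searchable even if the original pages later disappear.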
See also
* How to use Wikipedia for research
* Data care
* Open access citation advantage
* Inquiry-based learning
* Internet Research (journal)
* CRAAP test
* Web literacy
* Association of Internet Researchers
* Internet research ethics