robots.txt
"\n\n\n\n\n\n\nrobots.txt is the
filename A filename or file name is a name used to uniquely identify a computer file in a file system. Different file systems impose different restrictions on filename lengths. A filename may (depending on the file system) include: * name – base ...
used for implementing the Robots Exclusion Protocol, a standard used by
website A website (also written as a web site) is any web page whose content is identified by a common domain name and is published on at least one web server. Websites are typically dedicated to a particular topic or purpose, such as news, educatio ...
s to indicate to visiting
web crawler Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing (''web spider ...
s and other web robots which portions of the website they are allowed to visit.\n\nThe standard, developed in 1994, relies on
voluntary compliance Voluntary compliance is conforming (" complying") to a rule, without facing negative consequences if not complying. In corporations Voluntary compliance is one of possible ways of practicing corporate social responsibility. It is seen as an alte ...
. Malicious bots can use the file as a directory of which pages to visit, though standards bodies discourage countering this with
security through obscurity In security engineering, security through obscurity is the practice of concealing the details or mechanisms of a system to enhance its security. This approach relies on the principle of hiding something in plain sight, akin to a magician's slei ...
. Some archival sites ignore robots.txt. The standard was used in the 1990s to mitigate
server Server may refer to: Computing *Server (computing), a computer program or a device that provides requested information for other programs or devices, called clients. Role * Waiting staff, those who work at a restaurant or a bar attending custome ...
overload. In the 2020s, websites began denying bots that collect information for
generative artificial intelligence Generative artificial intelligence (Generative AI, GenAI, or GAI) is a subfield of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models Machine learning, learn the underlyin ...
.\n\nThe \"robots.txt\" file can be used in conjunction with
sitemaps Sitemaps is a protocol in XML format meant for a webmaster to inform search engines about URLs on a website that are available for web crawling. It allows webmasters to include additional information about each URL: when it was last updated, h ...
, another robot inclusion standard for websites.\n


History

The standard was proposed by Martijn Koster, when working for Nexor, in February 1994 on the ''www-talk'' mailing list, the main communication channel for WWW-related activities at the time. Charles Stross claims to have provoked Koster to suggest robots.txt, after he wrote a badly behaved web crawler that inadvertently caused a denial-of-service attack on Koster's server.

The standard, initially RobotsNotWanted.txt, allowed web developers to specify which bots should not access their website or which pages bots should not access. The internet was small enough in 1994 to maintain a complete list of all bots; server overload was a primary concern. By June 1994 it had become a ''de facto'' standard; most complied, including those operated by search engines such as WebCrawler, Lycos, and AltaVista.

On July 1, 2019, Google announced the proposal of the Robots Exclusion Protocol as an official standard under the Internet Engineering Task Force. A proposed standard was published in September 2022 as RFC 9309.


Standard

When a site owner wishes to give instructions to web robots, they place a text file called robots.txt in the root of the web site hierarchy (e.g. https://www.example.com/robots.txt). This text file contains the instructions in a specific format (see examples below). Robots that choose to follow the instructions try to fetch this file and read the instructions before fetching any other file from the website. If this file does not exist, web robots assume that the website owner does not wish to place any limitations on crawling the entire site.

A robots.txt file contains instructions for bots indicating which web pages they can and cannot access. Robots.txt files are particularly important for web crawlers from search engines such as Google.

A robots.txt file on a website will function as a request that specified robots ignore specified files or directories when crawling a site. This might be, for example, out of a preference for privacy from search engine results, or the belief that the content of the selected directories might be misleading or irrelevant to the categorization of the site as a whole, or out of a desire that an application only operates on certain data. Links to pages listed in robots.txt can still appear in search results if they are linked to from a page that is crawled.

A robots.txt file covers one origin. For websites with multiple subdomains, each subdomain must have its own robots.txt file. If example.com had a robots.txt file but a.example.com did not, the rules that would apply for example.com would not apply to a.example.com. In addition, each URI scheme and port needs its own robots.txt file; http://example.com/robots.txt does not apply to pages under http://example.com:8080/ or https://example.com/.
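Robots that honor the standard implement this fetch-and-check sequence themselves, but the behaviour can be illustrated with a minimal sketch using Python's standard-library urllib.robotparser module; example.com and the "MyCrawler" user-agent string are placeholders:

import time
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt before crawling anything else.
rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder domain
rp.read()

# Ask whether this user-agent may fetch a given URL.
allowed = rp.can_fetch("MyCrawler", "https://example.com/private/page.html")
print(allowed)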


Compliance

The robots.txt protocol is widely complied with by bot operators.


Search engines

Some major search engines following this standard include Ask, AOL, Baidu, Bing, DuckDuckGo, Kagi, Google, Yahoo!, and Yandex.


Archival sites

Some web archiving projects ignore robots.txt. Archive Team uses the file to discover more links, such as sitemaps. Co-founder Jason Scott said that "unchecked, and left alone, the robots.txt file ensures no mirroring or reference for items that may have general use and meaning beyond the website's context." In 2017, the Internet Archive announced that it would stop complying with robots.txt directives. According to ''Digital Trends'', this followed widespread use of robots.txt to remove historical sites from search engine results, and contrasted with the nonprofit's aim to archive "snapshots" of the internet as it previously existed.


Artificial intelligence

Starting in the 2020s, web operators began using robots.txt to deny access to bots collecting training data for generative AI. In 2023, Originality.AI found that 306 of the thousand most-visited websites blocked OpenAI's GPTBot in their robots.txt file and 85 blocked Google's Google-Extended. Many robots.txt files named GPTBot as the only bot explicitly disallowed on all pages. Denying access to GPTBot was common among news websites such as the BBC and ''The New York Times''. In 2023, blog host Medium announced it would deny access to all artificial intelligence web crawlers as "AI companies have leached value from writers in order to spam Internet readers".

GPTBot complies with the robots.txt standard and gives advice to web operators about how to disallow it, but ''The Verge'''s David Pierce said this only began after "training the underlying models that made it so powerful". Also, some bots are used both for search engines and artificial intelligence, and it may be impossible to block only one of these options. ''404 Media'' reported that companies like Anthropic and Perplexity.ai circumvented robots.txt by renaming or spinning up new scrapers to replace the ones that appeared on popular blocklists.


Security

Despite the use of the terms ''allow'' and ''disallow'', the protocol is purely advisory and relies on the compliance of the web robot; it cannot enforce any of what is stated in the file. Malicious web robots are unlikely to honor robots.txt; some may even use robots.txt as a guide to find disallowed links and go straight to them. While this is sometimes claimed to be a security risk, this sort of ''security through obscurity'' is discouraged by standards bodies. The National Institute of Standards and Technology (NIST) in the United States specifically recommends against this practice: "System security should not depend on the secrecy of the implementation or its components." In the context of robots.txt files, security through obscurity is not recommended as a security technique.


Alternatives

Many robots also pass a special user-agent to the web server when fetching content. A web administrator could also configure the server to automatically return failure (or pass alternative content) when it detects a connection using one of the robots.

Some sites, such as Google, host a humans.txt file that displays information meant for humans to read. Some sites such as GitHub redirect humans.txt to an ''About'' page.

Previously, Google had a joke file hosted at /killer-robots.txt instructing the Terminator not to kill the company founders Larry Page and Sergey Brin.


Examples

This example tells all robots that they can visit all files because the wildcard * stands for all robots and the Disallow directive has no value, meaning no pages are disallowed. Search engine giant Google open-sourced their robots.txt parser, and recommends testing and validating rules on the robots.txt file using community-built testers such as Tame the Bots and Real Robots Txt.

User-agent: *
Disallow:

This example has the same effect, allowing all files rather than prohibiting none.

User-agent: *
Allow: /

The same result can be accomplished with an empty or missing robots.txt file.

This example tells all robots to stay out of a website:

User-agent: *
Disallow: /

This example tells all robots not to enter three directories:

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /junk/

This example tells all robots to stay away from one specific file:

User-agent: *
Disallow: /directory/file.html

All other files in the specified directory will be processed.

This example tells one specific robot to stay out of a website:

User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
Disallow: /

This example tells two specific robots not to enter one specific directory:

User-agent: BadBot # replace 'BadBot' with the actual user-agent of the bot
User-agent: Googlebot
Disallow: /private/

Example demonstrating how comments can be used:

# Comments appear after the "#" symbol at the start of a line, or after a directive
User-agent: * # match all bots
Disallow: / # keep them out

It is also possible to list multiple robots with their own rules. The actual robot string is defined by the crawler. A few robot operators, such as Google, support several user-agent strings that allow the operator to deny access to a subset of their services by using specific user-agent strings.

Example demonstrating multiple user-agents:

User-agent: googlebot        # all Google services
Disallow: /private/          # disallow this directory

User-agent: googlebot-news   # only the news service
Disallow: /                  # disallow everything

User-agent: *                # any robot
Disallow: /something/        # disallow this directory


The use of the wildcard * in rules

The directive Disallow: /something/ blocks all files and subdirectories starting with /something/.

In contrast, using a wildcard (if supported by the crawler) allows for more complex patterns in specifying paths and files to allow or disallow from crawling; for example, Disallow: /something/*/other blocks URLs such as:

/something/foo/other
/something/bar/other

It would not prevent the crawling of /something/foo/else, as that would not match the pattern.

The wildcard * allows greater flexibility but may not be recognized by all crawlers, although it is part of the Robots Exclusion Protocol RFC.

A wildcard at the end of a rule in effect does nothing, as that is the standard behaviour.
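As an illustrative sketch only (no crawler is required to implement it this way), such a wildcard rule can be translated into a regular expression in which * matches any sequence of characters:

import re

def rule_to_regex(rule: str) -> "re.Pattern[str]":
    # Escape regex metacharacters, then turn the robots.txt wildcard
    # '*' into '.*' so it matches any run of characters.
    pattern = re.escape(rule).replace(r"\*", ".*")
    # A rule matches from the start of the URL path.
    return re.compile("^" + pattern)

matcher = rule_to_regex("/something/*/other")
print(bool(matcher.match("/something/foo/other")))  # True: blocked
print(bool(matcher.match("/something/foo/else")))   # False: not blocked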


Nonstandard extensions



Crawl-delay directive

The crawl-delay value is supported by some crawlers to throttle their visits to the host. Since this value is not part of the standard, its interpretation is dependent on the crawler reading it. It is used when multiple bursts of visits from bots are slowing down the host. Yandex interprets the value as the number of seconds to wait between subsequent visits. Bing defines crawl-delay as the size of a time window (from 1 to 30 seconds) during which BingBot will access a web site only once. Google ignores this directive, but provides an interface in its search console for webmasters to control the Googlebot's subsequent visits.
User-agent: bingbot
Allow: /
Crawl-delay: 10
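A crawler that chooses to honor the directive can read it programmatically. As a sketch, Python's urllib.robotparser exposes the value through its crawl_delay() method (example.com is a placeholder):

import time
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder domain
rp.read()

# crawl_delay() returns the Crawl-delay value for the given user-agent,
# or None if the directive is absent.
delay = rp.crawl_delay("bingbot")
if delay is not None:
    time.sleep(delay)  # wait before issuing the next request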


Sitemap

Some crawlers support a Sitemap directive, allowing multiple Sitemaps in the same robots.txt in the form Sitemap: ''full-url'':

Sitemap: http://www.example.com/sitemap.xml
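As a sketch, sitemap URLs declared this way can be read with the site_maps() method of Python's urllib.robotparser, available since Python 3.8 (example.com is a placeholder):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder domain
rp.read()

# Returns the list of Sitemap URLs found in robots.txt, or None if none exist.
print(rp.site_maps())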


Universal \"*\" match

The ''Robot Exclusion Standard'' does not mention the "*" character in the Disallow: statement.


Meta tags and headers

In addition to root-level robots.txt files, robots exclusion directives can be applied at a more granular level through the use of robots meta tags and X-Robots-Tag HTTP headers. The robots meta tag cannot be used for non-HTML files such as images, text files, or PDF documents. On the other hand, the X-Robots-Tag can be added to non-HTML files by using .htaccess and httpd.conf files.


A \"noindex\" meta tag

<meta name="robots" content="noindex" />


A \"noindex\" HTTP response header

X-Robots-Tag: noindex

The X-Robots-Tag is only effective after the page has been requested and the server responds, and the robots meta tag is only effective after the page has loaded, whereas robots.txt is effective before the page is requested. Thus if a page is excluded by a robots.txt file, any robots meta tags or X-Robots-Tag headers are effectively ignored because the robot will not see them in the first place.
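Because the header only arrives with the response, a bot must actually request the resource to see it. As a sketch using Python's standard library (the URL is a placeholder):

from urllib.request import urlopen

# Request the resource and inspect the X-Robots-Tag response header.
with urlopen("https://example.com/document.pdf") as response:  # placeholder URL
    tag = response.headers.get("X-Robots-Tag")

if tag is not None and "noindex" in tag:
    print("The server asks that this resource not be indexed.")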


Maximum size of a robots.txt file

The Robots Exclusion Protocol requires crawlers to parse at least 500 kibibytes (512,000 bytes) of robots.txt files, which Google maintains as a 500 kibibyte file size restriction for robots.txt files.
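As a sketch of how a crawler might apply that limit, reading at most the first 500 KiB means any rules beyond the cutoff are never parsed (the URL is a placeholder):

from urllib.request import urlopen

MAX_ROBOTS_BYTES = 500 * 1024  # 512,000 bytes, per RFC 9309

with urlopen("https://example.com/robots.txt") as response:  # placeholder URL
    body = response.read(MAX_ROBOTS_BYTES)  # rules past this point are ignored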


See also

* ads.txt, a standard for listing authorized ad sellers
* security.txt, a file to describe the process for security researchers to follow in order to report security vulnerabilities
* eBay v. Bidder's Edge
* Automated Content Access Protocol – a failed proposal to extend robots.txt
* BotSeer – now-inactive search engine for robots.txt files
* Distributed web crawling
* Focused crawler
* Internet Archive
* Meta elements for search engines
* National Digital Library Program (NDLP)
* National Digital Information Infrastructure and Preservation Program (NDIIPP)
* nofollow
* noindex
* Perma.cc
* Sitemaps
* Spider trap
* Web archiving
* Web crawler

