Google Ngram



The Google Ngram Viewer or Google Books Ngram Viewer is an online search engine that charts the frequencies of any set of search strings using a yearly count of n-grams found in sources printed between 1500 and 2019 in Google's text corpora in English, Chinese (simplified), French, German, Hebrew, Italian, Russian, or Spanish. There are also some specialized English corpora, such as American English, British English, and English Fiction (Google Books Ngram Viewer info page: https://books.google.com/ngrams/info). The program can search for a word or a phrase, including misspellings or gibberish. The n-grams are matched against the text of the selected corpus, optionally using case-sensitive spelling (which compares the exact use of uppercase letters), and, if found in 40 or more books, are then displayed as a graph. The Google Ngram Viewer supports searches for parts of speech and wildcards. It is routinely used in research.
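An "n-gram" here is simply a contiguous run of n tokens. A minimal sketch (the function name and tokenization are illustrative, not Google's code):

```python
# Minimal sketch of what an "n-gram" is: a contiguous run of n tokens.
# Tokenization by whitespace is a simplification for illustration.

def ngrams(tokens, n):
    """Return all n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the quick brown fox".split()
print(ngrams(tokens, 2))
# [('the', 'quick'), ('quick', 'brown'), ('brown', 'fox')]
```

A 1-gram is thus a single word, and the viewer's 5-gram limit corresponds to phrases of at most five words.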


History

The program was developed by Jon Orwant and Will Brockman and released in mid-December 2010. It was inspired by a prototype called "Bookworm" created by Jean-Baptiste Michel and Erez Aiden from Harvard's Cultural Observatory and Yuan Shen from MIT and Steven Pinker. The Ngram Viewer was initially based on the 2009 edition of the Google Books Ngram Corpus; the program now supports the 2009, 2012, and 2019 corpora.


Operation and restrictions

Commas delimit user-entered search terms, indicating each separate word or phrase to find. The Ngram Viewer returns a plotted line graph within seconds of the user pressing the Enter key or the "Search" button on the screen. To adjust for the fact that more books have been published in some years than in others, the data are normalized, as a relative level, by the number of books published in each year. Because of limitations on the size of the Ngram database, only matches found in at least 40 books are indexed; otherwise, the database could not have stored all possible combinations. Typically, search terms cannot end with punctuation, although a separate full stop (a period) can be searched. An ending question mark (as in "Why?") causes a second search for the question mark separately. Omitting the periods in abbreviations allows a form of matching, such as using "R M S" to search for "R.M.S." versus "RMS".
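The per-year normalization described above amounts to dividing each year's raw match count by that year's total n-gram count. A sketch with made-up counts:

```python
# Sketch of the per-year normalization: raw match counts are divided by
# the total number of n-grams in the corpus for that year, so years in
# which more books were published do not dominate the plot.
# All counts below are hypothetical.

match_count = {1900: 120, 1950: 480, 2000: 2400}                     # occurrences of a phrase
total_count = {1900: 1_000_000, 1950: 4_000_000, 2000: 24_000_000}   # all n-grams that year

relative = {year: match_count[year] / total_count[year] for year in match_count}
print(relative)
# {1900: 0.00012, 1950: 0.00012, 2000: 0.0001}
```

Note that although the raw count grows twenty-fold from 1900 to 2000 in this example, the normalized frequency actually falls, which is the effect the normalization is designed to expose.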


Corpora

The corpora used for the search are composed of total_counts, 1-gram, 2-gram, 3-gram, 4-gram, and 5-gram files for each language. The file format of each of the files is tab-separated values. Each line has the following format:

* total_counts file: year TAB match_count TAB page_count TAB volume_count NEWLINE
* Version 1 ngram file (generated in July 2009): ngram TAB year TAB match_count TAB page_count TAB volume_count NEWLINE
* Version 2 ngram file (generated in July 2012): ngram TAB year TAB match_count TAB volume_count NEWLINE

The Google Ngram Viewer uses match_count to plot the graph. As an example, entries for the word "Wikipedia" in the Version 2 file of the English 1-grams follow this format.
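The record layouts above can be parsed directly as tab-separated fields. A sketch under the Version 2 layout, with made-up count values:

```python
# Sketch: parse a Version 2 ngram line (tab-separated) and compute the
# relative frequency the viewer would plot. The data values are made up.

def parse_ngram_v2(line):
    """Version 2 layout: ngram TAB year TAB match_count TAB volume_count."""
    ngram, year, match_count, volume_count = line.rstrip("\n").split("\t")
    return ngram, int(year), int(match_count), int(volume_count)

ngram, year, matches, volumes = parse_ngram_v2("Wikipedia\t2005\t12345\t678\n")
total_for_year = 10_000_000  # hypothetical total 1-gram count for 2005
print(ngram, year, matches / total_for_year)
```

In the real pipeline, total_for_year would come from the corresponding year's line in the total_counts file.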


Criticism

The data set has been criticized for its reliance upon inaccurate optical character recognition (OCR), an overabundance of scientific literature, and for including large numbers of incorrectly dated and categorized texts. Because of these errors, and because the corpus is uncontrolled for bias (such as the increasing amount of scientific literature, which causes other terms to appear to decline in popularity), it is risky to use it to study language or test theories. Since the data set does not include metadata, it may not reflect general linguistic or cultural change and can only hint at such an effect. Guidelines for doing research with data from Google Ngram have been proposed that address many of these issues.


OCR issues

Optical character recognition, or OCR, is not always reliable, and some characters may not be scanned correctly. In particular, systemic errors such as the confusion of "s" and "f" in pre-19th-century texts (due to the use of the long s, which was similar in appearance to "f") can cause systemic bias. Although the Google Ngram Viewer claims that the results are reliable from 1800 onwards, poor OCR and insufficient data mean that frequencies given for languages such as Chinese may only be accurate from 1970 onward, with earlier parts of the corpus showing no results at all for common terms, and data for some years containing more than 50% noise ("When n-grams go bad", digitalsinology.org).
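The long-s confusion can be illustrated with a toy correction pass: when a word containing "f" is unknown but its "f"-to-"s" variant is a known word, prefer the variant. The wordlist and function name here are illustrative stand-ins, not part of any real OCR pipeline:

```python
# Toy illustration of the long-s OCR error: pre-1800 type renders medial
# "s" with a glyph that OCR often reads as "f", so "best" becomes "beft".
# A crude fix: try swapping one "f" for "s" when the word is unknown.
# KNOWN is a stand-in for a real dictionary.

KNOWN = {"best", "case", "reason", "fast", "of"}

def fix_long_s(word):
    if word in KNOWN:
        return word
    for i, ch in enumerate(word):
        if ch == "f":
            candidate = word[:i] + "s" + word[i + 1:]
            if candidate in KNOWN:
                return candidate
    return word

print([fix_long_s(w) for w in ["beft", "reafon", "fast"]])
# ['best', 'reason', 'fast']
```

Real corpora need far more careful handling (genuine "f" words, multiple substitutions, context), which is why such errors persist in the published n-gram counts.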

