Minimal Recursion Semantics
Minimal recursion semantics (MRS) is a framework for computational semantics. It can be implemented in typed feature structure formalisms such as head-driven phrase structure grammar and lexical functional grammar, and it is suitable for computational language parsing and natural language generation (Copestake, A., Flickinger, D. P., Sag, I. A., & Pollard, C. (2005). "Minimal Recursion Semantics: An Introduction". Research on Language and Computation 3:281–332). MRS enables a simple formulation of the grammatical constraints on lexical and phrasal semantics, including the principles of semantic composition. This technique is used in machine translation. Early pioneers of MRS include Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan Sag.

See also
* DELPH-IN
* Discourse representation theory
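The flat, handle-based representation that gives MRS its name can be illustrated with a small sketch. This is an illustrative data structure for the sentence "Every dog sleeps" in the style of Copestake et al. (2005); the class and field names are this sketch's own, not the API of any particular MRS library.

```python
# Sketch of an MRS for "Every dog sleeps". An MRS is a flat bag of
# elementary predications, each labelled by a handle, plus "qeq" handle
# constraints that only partially pin down quantifier scope.
from dataclasses import dataclass, field


@dataclass
class EP:
    """Elementary predication, e.g. h4: _dog_n(x)."""
    label: str   # handle naming this predication
    pred: str    # predicate symbol
    args: dict   # argument roles -> variables or handles


@dataclass
class MRS:
    top: str                 # global top handle
    rels: list               # flat bag of elementary predications
    hcons: list = field(default_factory=list)  # qeq handle constraints


every_dog_sleeps = MRS(
    top="h0",
    rels=[
        EP("h1", "_every_q", {"ARG0": "x", "RSTR": "h2", "BODY": "h3"}),
        EP("h4", "_dog_n",   {"ARG0": "x"}),
        EP("h5", "_sleep_v", {"ARG1": "x"}),
    ],
    # h2 =q h4: the quantifier's restriction is (possibly indirectly)
    # filled by the handle labelling dog(x); the body handle h3 is left
    # unconstrained, so scope remains underspecified.
    hcons=[("h2", "qeq", "h4")],
)
```

Because the predications form a flat bag rather than a nested tree, and scope is only constrained through `hcons`, a single such structure covers every admissible scope reading without recursive embedding; this is what makes MRS convenient for parsing, generation, and transfer-based machine translation.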
Computational Semantics
Computational semantics is the study of how to automate the process of constructing and reasoning with meaning representations of natural language expressions. It consequently plays an important role in natural language processing and computational linguistics. Some traditional topics of interest are: semantic analysis, construction of meaning representations, semantic underspecification, anaphora resolution (Basile, Valerio, et al. (2012). "Developing a large semantically annotated corpus". LREC 2012, Eighth International Conference on Language Resources and Evaluation), presupposition projection, and quantifier scope resolution. Methods employed usually draw from formal semantics or statistical semantics. Computational semantics has points of contact with the areas of lexical semantics (word-sense disambiguation and semantic role labeling), discourse semantics ...
Carl Pollard
Carl Jesse Pollard (born June 28, 1947) is a Professor of Linguistics at the Ohio State University. He is the inventor of head grammar and higher-order grammar, as well as co-inventor of head-driven phrase structure grammar (HPSG). He is currently also working on convergent grammar (CVG). He has written numerous books and articles on formal syntax and semantics. He received his Ph.D. from Stanford.

External links
* Carl Pollard's website
Generative Linguistics
Generative grammar is a research tradition in linguistics that aims to explain the cognitive basis of language by formulating and testing explicit models of humans' subconscious grammatical knowledge. Generative linguists, or generativists, tend to share certain working assumptions such as the competence–performance distinction and the notion that some domain-specific aspects of grammar are partly innate in humans. These assumptions are rejected in non-generative approaches such as usage-based models of language. Generative linguistics includes work in core areas such as syntax, semantics, phonology, psycholinguistics, and language acquisition, with additional extensions to topics including biolinguistics and music cognition. Generative grammar began in the late 1950s with the work of Noam Chomsky, having roots in earlier approaches such as structural linguistics. The earliest version of Chomsky's model was called ...
Natural Language Processing
Natural language processing (NLP) is a subfield of computer science and especially artificial intelligence. It is primarily concerned with providing computers with the ability to process data encoded in natural language and is thus closely related to information retrieval, knowledge representation and computational linguistics, a subfield of linguistics. Major tasks in natural language processing are speech recognition, text classification, natural language understanding, and natural language generation.

History
Natural language processing has its roots in the 1950s. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence, though at the time that was not articulated as a problem separate from artificial intelligence. The proposed test includes a task that involves the automated interpretation and generation of natural language ...
Computational Linguistics
Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others. Computational linguistics is closely related to mathematical linguistics.

Origins
The field has overlapped with artificial intelligence since efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English. Since rule-based approaches were able to perform arithmetic (systematic) calculations much faster and more accurately than humans, it was expected that lexicon, morphology, syntax and semantics could be learned using explicit rules, a ...
Discourse Representation Theory
In formal linguistics, discourse representation theory (DRT) is a framework for exploring meaning under a formal semantics approach. One of the main differences between DRT-style approaches and traditional Montagovian approaches is that DRT includes a level of abstract mental representations (discourse representation structures, DRS) within its formalism, which gives it an intrinsic ability to handle meaning across sentence boundaries. DRT was created by Hans Kamp in 1981. A very similar theory was developed independently by Irene Heim in 1982, under the name of ''File Change Semantics'' (FCS). Discourse representation theories have been used to implement semantic parsers and natural language understanding systems (Rapaport, William J. (1994). "Syntactic semantics: Foundations of computational natural-language understanding". Thinking Computers and Virtual Persons, pp. 225–273).

Discourse representation structures
DRT uses ''discourse representation structures'' (DRSs) to represent a hearer's ...
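The cross-sentential behaviour described above can be made concrete with a small sketch of a DRS built up incrementally for the two-sentence discourse "A man walks. He smiles." The box notation (a universe of discourse referents plus a set of conditions) is standard DRT; the Python names here are illustrative, not the API of any DRT library.

```python
# Sketch of a discourse representation structure (DRS): a universe of
# discourse referents plus a list of conditions, updated sentence by
# sentence as the discourse unfolds.
from dataclasses import dataclass, field


@dataclass
class DRS:
    universe: set = field(default_factory=set)     # discourse referents
    conditions: list = field(default_factory=list)  # predications over them


drs = DRS()

# "A man walks." -- the indefinite introduces a fresh referent x.
drs.universe.add("x")
drs.conditions += ["man(x)", "walk(x)"]

# "He smiles." -- the pronoun is resolved to an accessible referent
# already in the universe; no new referent is introduced. Resolving
# anaphora against this persistent structure is what lets DRT handle
# meaning across sentence boundaries.
drs.conditions.append("smile(x)")
```

In a Montagovian setting each sentence would be translated into a closed formula, leaving the pronoun in the second sentence nothing to bind to; in DRT the referent `x` stays available in the growing representation.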
DELPH-IN
Deep Linguistic Processing with HPSG - INitiative (DELPH-IN) is a collaboration in which computational linguists worldwide develop natural language processing tools for deep linguistic processing of human language. The goal of DELPH-IN is to combine linguistic and statistical processing methods in order to computationally understand the meaning of texts and utterances. The tools developed by DELPH-IN adopt two linguistic formalisms for deep linguistic analysis, viz. head-driven phrase structure grammar (HPSG) and minimal recursion semantics (MRS). All tools under the DELPH-IN collaboration are developed for general use under open-source licensing. Since 2005, DELPH-IN has held an annual summit. This is a loosely structured unconference where people update each other about the work they are doing, seek feedback on current work, and occasionally hammer out agreement on standards and best practice.

DELPH-IN technologies and resources
The DELPH-IN collaboration has ...
Ivan Sag
Ivan Andrew Sag (November 9, 1949 – September 10, 2013) was an American linguist and cognitive scientist. He did research in areas of syntax and semantics as well as work in computational linguistics.

Personal life
Born in Alliance, Ohio on November 9, 1949, Sag attended the Mercersburg Academy but was expelled shortly before graduation. He received a BA from the University of Rochester, an MA from the University of Pennsylvania (where he studied comparative Indo-European languages, Sanskrit, and sociolinguistics), and a PhD from MIT in 1976, writing his dissertation (advised by Noam Chomsky) on ellipsis. Sag received a Mellon Fellowship at Stanford University in 1978–79, and remained in California from that point on. He was appointed to a position in Linguistics at Stanford, and earned tenure there. He died of cancer in 2013. He was married to sociolinguist Penelope Eckert.

Academic work
Sag made notable contributions to the fields of syntax, semantics, pragmatics, and lan ...
Dan Flickinger
Dan Flickinger is an American computational linguist at the Center for the Study of Language and Information (CSLI) at Stanford University. He is the principal developer of the English Resource Grammar, a broad-coverage HPSG grammar of English developed within the DELPH-IN collaboration, and a co-author of the 2005 introduction to minimal recursion semantics.
Feature Structure
In phrase structure grammars, such as generalised phrase structure grammar, head-driven phrase structure grammar and lexical functional grammar, a feature structure is essentially a set of attribute–value pairs. For example, the attribute named ''number'' might have the value ''singular''. The value of an attribute may be either atomic, e.g. the symbol ''singular'', or complex (most commonly a feature structure, but also a list or a set). A feature structure can be represented as a directed acyclic graph (DAG), with the nodes corresponding to the variable values and the paths to the variable names. Operations defined on feature structures, e.g. unification, are used extensively in phrase structure grammars. In most theories (e.g. HPSG), operations are strictly speaking defined over equations describing feature structures and not over feature structures themselves, though feature structures are usually used in informal exposition. Often, feature structures are written like this: ...
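The unification operation mentioned above can be sketched in a few lines. This is a minimal illustration over nested dictionaries; real HPSG and LFG implementations unify DAGs with structure sharing and typed values, but plain dictionaries are enough to show the attribute–value idea.

```python
# Minimal sketch of feature-structure unification: two structures unify
# into the structure carrying all information from both, and fail (None)
# when they assign incompatible atomic values to the same attribute path.
def unify(a, b):
    """Return the unification of two feature structures, or None on clash."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for attr, val in b.items():
            if attr in out:
                merged = unify(out[attr], val)
                if merged is None:        # incompatible sub-structures
                    return None
                out[attr] = merged
            else:                         # attribute only in b: just add it
                out[attr] = val
        return out
    return a if a == b else None          # atomic values must match exactly


noun = {"cat": "N", "agr": {"number": "singular"}}
verb_constraint = {"agr": {"number": "singular", "person": "3"}}

unified = unify(noun, verb_constraint)
# Merges to {"cat": "N", "agr": {"number": "singular", "person": "3"}};
# a clash such as number=singular vs. number=plural would yield None.
```

Agreement phenomena fall out directly: a plural verb constraint `{"agr": {"number": "plural"}}` fails to unify with `noun`, which is how a grammar rules out "*the dog bark" without any rule mentioning that combination explicitly.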
Ann Copestake
Ann Alicia Copestake is professor of computational linguistics and head of the Department of Computer Science and Technology at the University of Cambridge, and a fellow of Wolfson College, Cambridge.

Education
Copestake was educated at the University of Cambridge, where she was awarded a Bachelor of Arts degree in Natural Sciences. After two years working for Unilever Research she completed the Cambridge Diploma in Computer Science. She went on to study at the University of Sussex, where she was awarded a PhD in 1992 for research on lexical semantics supervised by Gerald Gazdar.

Career and research
Copestake started doing research in natural language processing and computational linguistics at the University of Cambridge in 1985. Since then she has been a visiting researcher at Xerox PARC (1993/4) and the University of Stuttgart (1994/5). From July 1994 to October 2000 she worked at the Center for the Study of Language and Information (CSLI) at Stanford University, as a Senior Researcher ...
Machine Translation
Machine translation is the use of computational techniques to translate text or speech from one language to another, including the contextual, idiomatic and pragmatic nuances of both languages. Early approaches were mostly rule-based or statistical. These methods have since been superseded by neural machine translation and large language models.

History

Origins
The origins of machine translation can be traced back to the work of Al-Kindi, a ninth-century Arabic cryptographer who developed techniques for systemic language translation, including cryptanalysis, frequency analysis, and probability and statistics, which are used in modern machine translation. The idea of machine translation later appeared in the 17th century. In 1629, René Descartes proposed a universal language, with equivalent ideas in different tongues sharing one symbol. The idea of using digital computers for translation of natural languages was proposed as early as 1947 by England's A. D. Booth and Warren Weaver ...