History
Early evaluation of algorithms for word sense disambiguation
From the earliest days, assessing the quality of word sense disambiguation algorithms had been primarily a matter of intrinsic evaluation, and “almost no attempts had been made to evaluate embedded WSD components”. Only very recently (2006) had extrinsic evaluations begun to provide some evidence for the value of WSD in end-user applications. Until 1990 or so, discussions of the sense disambiguation task focused mainly on illustrative examples rather than comprehensive evaluation. The early 1990s saw the beginnings of more systematic and rigorous intrinsic evaluations, including more formal experimentation on small sets of ambiguous words.

Senseval to SemEval
In April 1997, Martha Palmer and Marc Light organized a workshop entitled ''Tagging with Lexical Semantics: Why, What, and How?'' in conjunction with the Conference on Applied Natural Language Processing. At the time, there was a clear recognition that manually annotated corpora had transformed other areas of natural language processing, such as part-of-speech tagging and parsing, and that corpus-driven approaches had the potential to do the same for automatic semantic analysis; the discussion at that workshop led to the creation of the Senseval evaluation exercises.

SemEval's 3, 2 or 1 year(s) cycle
After SemEval-2010, many participants felt that the 3-year cycle was a long wait; other shared tasks, such as the Conference on Natural Language Learning (CoNLL) shared task and Recognizing Textual Entailment (RTE), run annually. For this reason, the SemEval coordinators gave task organizers the opportunity to choose between a 2-year and a 3-year cycle. The SemEval community favored the 3-year cycle.

List of Senseval and SemEval Workshops
* ''Senseval-1'' took place in the summer of 1998 for English, French, and Italian, culminating in a workshop held at Herstmonceux Castle, Sussex, England on September 2–4.
* ''Senseval-2'' took place in the summer of 2001, and was followed by a workshop held in July 2001 in Toulouse, in conjunction with ACL 2001. Senseval-2 included tasks for Basque, Chinese, Czech, Danish, Dutch, English, Estonian, Italian, Japanese, Korean, Spanish and Swedish.
* ''Senseval-3'' took place in March–April 2004, followed by a workshop held in July 2004 in Barcelona, in conjunction with ACL 2004. Senseval-3 included 14 different tasks covering core word sense disambiguation as well as identification of semantic roles, multilingual annotations, logic forms, and subcategorization acquisition.
* ''SemEval-2007 (Senseval-4)'' took place in 2007, followed by a workshop held in conjunction with ACL in Prague. SemEval-2007 included 18 different tasks targeting the evaluation of systems for the semantic analysis of text. A special issue of ''Language Resources and Evaluation'' is devoted to the results.
* ''SemEval-2010'' took place in 2010, followed by a workshop held in conjunction with ACL in Uppsala. SemEval-2010 included 18 different tasks targeting the evaluation of semantic analysis systems.
* ''SemEval-2012'' took place in 2012; it was associated with the new *SEM, the First Joint Conference on Lexical and Computational Semantics, and co-located with NAACL in Montreal, Canada. SemEval-2012 included 8 different tasks evaluating computational semantic systems. However, there was no WSD task in SemEval-2012; the WSD-related tasks were scheduled for the upcoming SemEval-2013.
* ''SemEval-2013'' took place in 2013 and was associated with NAACL 2013, the 2013 Conference of the North American Chapter of the Association for Computational Linguistics, in Georgia, USA. It included 13 different tasks evaluating computational semantic systems.
* ''SemEval-2014'' took place in 2014. It was co-located with COLING 2014, the 25th International Conference on Computational Linguistics, and *SEM 2014, the Second Joint Conference on Lexical and Computational Semantics, in Dublin, Ireland. There were 10 different tasks in SemEval-2014 evaluating various computational semantic systems.
* ''SemEval-2015'' took place in 2015. It was co-located with NAACL-HLT 2015, the 2015 Conference of the North American Chapter of the Association for Computational Linguistics – Human Language Technologies, and *SEM 2015, the Third Joint Conference on Lexical and Computational Semantics, in Denver, USA. There were 17 different tasks in SemEval-2015 evaluating various computational semantic systems.

SemEval Workshop framework
The framework of the SemEval/Senseval evaluation workshops emulates the Message Understanding Conferences (MUCs) and other evaluation workshops run by ARPA (the Advanced Research Projects Agency, later renamed the Defense Advanced Research Projects Agency, DARPA).

The SemEval/Senseval evaluation workshops proceed in the following stages:
# First, all likely participants were invited to express their interest and to take part in the exercise design.
# A timetable towards a final workshop was worked out.
# A plan for selecting evaluation materials was agreed.
# 'Gold standards' for the individual tasks were acquired, with human annotators often treated as the gold standard against which the precision and recall of computer systems are measured. These gold standards are what the computational systems strive towards. In the WSD tasks, human annotators were given the job of producing a set of correct WSD answers (i.e. the correct sense for a given word in a given context).
# The gold standard materials, without answers, were released to participants, who then had a short time to run their programs over them and return their sets of answers to the organizers.
# The organizers then scored the answers, and the scores were announced and discussed at a workshop (a minimal scoring sketch follows this list).
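To make the scoring step concrete, the following is a minimal sketch, not the official Senseval/SemEval scorer. It assumes that system and gold answers are simple mappings from instance identifiers to sense labels; the instance IDs and sense keys in the example are purely illustrative.

<syntaxhighlight lang="python">
# Minimal sketch of gold-standard scoring for a WSD task (illustrative only,
# not the official Senseval/SemEval scorer). Each answer set maps an instance
# ID to the sense label chosen for that word occurrence; a system may leave
# instances unanswered, which lowers recall but not precision.

def score_wsd(system, gold):
    """Return (precision, recall) of system answers against the gold standard."""
    attempted = [iid for iid in system if iid in gold]
    correct = sum(1 for iid in attempted if system[iid] == gold[iid])
    precision = correct / len(attempted) if attempted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical instance IDs and sense labels, for illustration only.
gold = {"bank.n.1": "bank%1:14:00::", "bank.n.2": "bank%1:17:01::"}
system = {"bank.n.1": "bank%1:14:00::"}  # answers only the first instance
print(score_wsd(system, gold))  # (1.0, 0.5): every attempt correct, half of gold covered
</syntaxhighlight>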
Semantic evaluation tasks
Senseval-1 and Senseval-2 focused on evaluating WSD systems for major languages for which corpora and computerized dictionaries were available. Senseval-3 looked beyond core word sense disambiguation and began to evaluate wider aspects of semantic analysis, such as semantic roles, logic forms, multilingual annotations and subcategorization acquisition.

Overview of Issues in Semantic Analysis
The SemEval exercises provide a mechanism for examining issues in the semantic analysis of texts. The topics of interest fall short of the logical rigor found in formal computational semantics; instead, the aim is to identify and characterize the kinds of issues relevant to human understanding of language. The primary goal is to replicate human processing by means of computer systems. The tasks (shown below) are developed by individuals and groups to deal with identifiable issues, as they take on some concrete form.

The first major area in semantic analysis is the identification of the intended meaning at the word level (taken to include idiomatic expressions). This is word-sense disambiguation (a concept that is evolving away from the notion that words have discrete senses and towards characterizing words by the ways in which they are used, i.e., their contexts). The tasks in this area include lexical sample and all-word disambiguation, multi- and cross-lingual disambiguation, and lexical substitution. Given the difficulties of identifying word senses, other tasks relevant to this topic include word-sense induction, subcategorization acquisition, and evaluation of lexical resources.

The second major area in semantic analysis is the understanding of how different sentence and textual elements fit together. Tasks in this area include semantic role labeling, semantic relation analysis, and coreference resolution. Other tasks in this area look at more specialized issues of semantic analysis, such as temporal information processing, metonymy resolution, and sentiment analysis. The tasks in this area have many potential applications, such as information extraction, question answering, document summarization, machine translation, construction of thesauri and semantic networks, language modeling, paraphrasing, and recognizing textual entailment. In each of these potential applications, the contribution of the types of semantic analysis constitutes the most outstanding research issue.

For example, the word sense induction and disambiguation task has three separate phases:
# In the training phase, task participants were asked to use a training dataset to induce sense inventories for a set of polysemous words. The training dataset consisted of a set of polysemous nouns/verbs and the sentence instances in which they occurred. No resources were allowed other than morphological and syntactic natural language processing components, such as morphological analyzers, part-of-speech taggers and syntactic parsers.
# In the testing phase, participants were provided with a test set for the disambiguation subtask, using the sense inventory induced in the training phase.
# In the evaluation phase, the answers from the testing phase were evaluated in both a supervised and an unsupervised framework.
The unsupervised evaluation for WSI considered two types of evaluation: the V-Measure (Rosenberg and Hirschberg, 2007) and the paired F-Score (Artiles et al., 2009). The supervised evaluation followed that of the SemEval-2007 WSI task (Agirre and Soroa, 2007).
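The two unsupervised metrics can be illustrated with a short sketch. The fragment below assumes scikit-learn is available and uses toy gold/induced labels for a single target word; it computes the V-Measure with scikit-learn and implements a paired F-Score directly, as an illustration of the metrics rather than the official task scorer.

<syntaxhighlight lang="python">
# Illustrative computation of the two unsupervised WSI metrics for one target
# word: gold sense labels vs. induced cluster labels over the same instances.
# Not the official SemEval scorer; toy labels only.
from itertools import combinations
from sklearn.metrics import v_measure_score


def paired_f_score(gold, induced):
    """F-Score over pairs of instances grouped together by gold vs. induced labels."""
    idx = range(len(gold))
    gold_pairs = {p for p in combinations(idx, 2) if gold[p[0]] == gold[p[1]]}
    sys_pairs = {p for p in combinations(idx, 2) if induced[p[0]] == induced[p[1]]}
    if not gold_pairs or not sys_pairs:
        return 0.0
    common = len(gold_pairs & sys_pairs)
    precision = common / len(sys_pairs)
    recall = common / len(gold_pairs)
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0


# Six instances of a polysemous word: two gold senses; the induced clustering
# splits the first sense into two clusters.
gold_senses = [0, 0, 0, 1, 1, 1]
induced_clusters = [0, 0, 2, 1, 1, 1]
print(v_measure_score(gold_senses, induced_clusters))  # harmonic mean of homogeneity and completeness
print(paired_f_score(gold_senses, induced_clusters))
</syntaxhighlight>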
Senseval and SemEval tasks overview
The tables below reflect the workshops' growth from Senseval to SemEval and give an overview of which areas of computational semantics were evaluated throughout the Senseval/SemEval workshops.

The Multilingual WSD task was introduced for the SemEval-2013 workshop. The task is aimed at evaluating word sense disambiguation systems in a multilingual scenario. Unlike similar tasks, such as cross-lingual WSD or the multilingual lexical substitution task, in which no fixed sense inventory is specified, Multilingual WSD uses BabelNet as its sense inventory. Prior to the development of BabelNet, a bilingual lexical sample WSD evaluation task was carried out in SemEval-2007 on Chinese–English bitexts.

The Cross-lingual WSD task was introduced in the SemEval-2007 evaluation workshop and re-proposed in the SemEval-2013 workshop (Lefever, E., & Hoste, V. (2013, June). SemEval-2013 task 10: Cross-lingual word sense disambiguation. In Second Joint Conference on Lexical and Computational Semantics (Vol. 2, pp. 158–166)). To facilitate the integration of WSD systems into other natural language processing applications, the Cross-lingual WSD task uses translations, rather than labels from a fixed sense inventory, as its sense annotations.

Areas of evaluation
The major tasks in semantic evaluation include the following areas of natural language processing: word sense disambiguation and induction, lexical substitution, semantic role labeling, semantic relation analysis, coreference resolution, temporal information processing, metonymy resolution, and sentiment analysis.

Type of Semantic Annotations
SemEval tasks have created many types of semantic annotation, each with various schemata. In SemEval-2015 the organizers decided to group tasks into several tracks, defined by the type of semantic annotation that the tasks aim to produce. The types of semantic annotation involved in the SemEval workshops are:
# Learning Semantic Relations
# Question and Answering
# Semantic Parsing
# Semantic Taxonomy
# Sentiment Analysis
# Text Similarity
# Time and Space
# Word Sense Disambiguation and Induction
A task's track allocation is flexible, and a task might develop into its own track; for example, the taxonomy evaluation task in SemEval-2015 was under the ''Learning Semantic Relations'' track, while SemEval-2016 has a dedicated ''Semantic Taxonomy'' track with a new ''Semantic Taxonomy Enrichment'' task (SemEval-2016 website, retrieved 4 June 2015, http://alt.qcri.org/semeval2016/).

See also
* List of computer science awards
* Computational semantics

References
{{Reflist|2}}

External links