
Robust Semantic Role Labeling
Author: Yi Szu-Ting
Publisher: LAP Lambert Academic Publishing
Total Pages: 172
Release: 2015-05-25
Genre:
ISBN: 9783659691966

Correctly identifying semantic entities and disambiguating the relations between them and their predicates is an important and necessary step for successful natural language processing applications such as text summarization, question answering, and machine translation. Researchers have studied this problem, semantic role labeling (SRL), as a machine learning problem since 2000. However, even after combining several SRL systems with an optimal global inference algorithm, SRL performance seems to have reached a plateau: syntactic parsing is the bottleneck of the task, and robustness is the ultimate goal. In this book, we investigate ways to train a better syntactic parser and to increase SRL system robustness. We demonstrate that parse trees augmented with semantic role markups can serve as suitable training data for the parser used by an SRL system. For robustness, we propose learning a new set of semantic roles that are less verb-dependent than the original PropBank roles; an SRL system trained on the new roles achieves significantly better robustness.
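
The role-generalization idea in this abstract lends itself to a small illustration. The sketch below is not from the book (its actual role inventory is not given in this blurb); it uses a hypothetical mapping table purely to show what replacing verb-dependent PropBank labels with coarser, verb-independent roles looks like.

```python
# Illustrative only: a hypothetical coarse role inventory. Arg2-Arg5 mean
# different things for different verbs, which is the verb-dependence a
# coarser role set is meant to reduce.
COARSE_ROLE = {
    "ARG0": "AGENT",        # Arg0/Arg1 are relatively stable across verbs
    "ARG1": "PATIENT",
    "ARG2": "SECONDARY",    # hypothetical coarse bucket for the
    "ARG3": "SECONDARY",    # highly verb-dependent numbered arguments
    "ARG4": "SECONDARY",
    "ARG5": "SECONDARY",
}

def remap_roles(labeled_args):
    """Map (span, PropBank label) pairs onto the coarser role inventory."""
    return [(span, COARSE_ROLE.get(label, label)) for span, label in labeled_args]

# Example: arguments of "hit" in "A car hit Bob yesterday"
print(remap_roles([("A car", "ARG0"), ("Bob", "ARG1"), ("yesterday", "ARGM-TMP")]))
```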


Semantic Role Labeling
Author: Martha Palmer
Publisher: Morgan & Claypool Publishers
Total Pages: 103
Release: 2011-02-02
Genre: Computers
ISBN: 1598298321

This book is aimed at providing an overview of several aspects of semantic role labeling. Chapter 1 begins with linguistic background on the definition of semantic roles and the controversies surrounding them. Chapter 2 describes how the theories have led to structured lexicons such as FrameNet, VerbNet and the PropBank Frame Files that in turn provide the basis for large scale semantic annotation of corpora. This data has facilitated the development of automatic semantic role labeling systems based on supervised machine learning techniques. Chapter 3 presents the general principles of applying both supervised and unsupervised machine learning to this task, with a description of the standard stages and feature choices, as well as giving details of several specific systems. Recent advances include the use of joint inference to take advantage of context sensitivities, and attempts to improve performance by closer integration of the syntactic parsing task with semantic role labeling. Chapter 3 also discusses the impact the granularity of the semantic roles has on system performance. Having outlined the basic approach with respect to English, Chapter 4 goes on to discuss applying the same techniques to other languages, using Chinese as the primary example. Although substantial training data is available for Chinese, this is not the case for many other languages, and techniques for projecting English role labels onto parallel corpora are also presented. Table of Contents: Preface / Semantic Roles / Available Lexical Resources / Machine Learning for Semantic Role Labeling / A Cross-Lingual Perspective / Summary
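
As a rough companion to Chapter 3's "standard stages and feature choices", the sketch below illustrates the conventional two-stage pipeline over parse constituents: argument identification followed by argument classification, using the classic feature set (path, head word, phrase type, position, voice). The predicate, parse, and classifier objects are hypothetical stand-ins, not an API described in the book.

```python
# Sketch of the standard two-stage SRL pipeline: decide which parse
# constituents are arguments of the predicate, then classify their roles.
# The predicate/parse/classifier interfaces below are hypothetical stubs.

def extract_features(predicate, constituent, parse):
    """Classic per-constituent features used by supervised SRL systems."""
    return {
        "predicate": predicate.lemma,                # e.g. "eat"
        "phrase_type": constituent.label,            # e.g. "NP"
        "head_word": constituent.head,               # e.g. "apple"
        "path": parse.path(constituent, predicate),  # e.g. "NP^S^VP^VBZ"
        "position": "before" if constituent.start < predicate.index else "after",
        "voice": predicate.voice,                    # "active" or "passive"
    }

def label_arguments(predicate, parse, identifier, classifier):
    """Stage 1: filter candidate constituents; stage 2: assign role labels."""
    labeled = []
    for constituent in parse.constituents():
        feats = extract_features(predicate, constituent, parse)
        if identifier.predict(feats):                # is this an argument at all?
            labeled.append((constituent, classifier.predict(feats)))
    return labeled
```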


Semantic Role Labeling
Author: Martha Palmer
Publisher: Springer Nature
Total Pages: 95
Release: 2022-05-31
Genre: Computers
ISBN: 3031021355

This book is aimed at providing an overview of several aspects of semantic role labeling. Chapter 1 begins with linguistic background on the definition of semantic roles and the controversies surrounding them. Chapter 2 describes how the theories have led to structured lexicons such as FrameNet, VerbNet and the PropBank Frame Files that in turn provide the basis for large scale semantic annotation of corpora. This data has facilitated the development of automatic semantic role labeling systems based on supervised machine learning techniques. Chapter 3 presents the general principles of applying both supervised and unsupervised machine learning to this task, with a description of the standard stages and feature choices, as well as giving details of several specific systems. Recent advances include the use of joint inference to take advantage of context sensitivities, and attempts to improve performance by closer integration of the syntactic parsing task with semantic role labeling. Chapter 3 also discusses the impact the granularity of the semantic roles has on system performance. Having outlined the basic approach with respect to English, Chapter 4 goes on to discuss applying the same techniques to other languages, using Chinese as the primary example. Although substantial training data is available for Chinese, this is not the case for many other languages, and techniques for projecting English role labels onto parallel corpora are also presented. Table of Contents: Preface / Semantic Roles / Available Lexical Resources / Machine Learning for Semantic Role Labeling / A Cross-Lingual Perspective / Summary


Learning Structured Probabilistic Models for Semantic Role Labeling
Author: David Terrell Vickrey
Publisher:
Total Pages:
Release: 2010
Genre:
ISBN:

Teaching a computer to read is one of the most interesting and important artificial intelligence tasks. In this thesis, we focus on semantic role labeling (SRL), one important processing step on the road from raw text to a full semantic representation. Given an input sentence and a target verb in that sentence, the SRL task is to label the semantic arguments, or roles, of that verb. For example, in the sentence "Tom eats an apple," the verb "eat" has two roles, Eater = "Tom" and Thing Eaten = "apple". Most SRL systems, including the ones presented in this thesis, take as input a syntactic analysis built by an automatic syntactic parser. SRL systems rely heavily on path features constructed from the syntactic parse, which capture the syntactic relationship between the target verb and the phrase being classified. However, there are several issues with these path features. First, the path feature does not always contain all relevant information for the SRL task. Second, the space of possible path features is very large, resulting in very sparse features that are hard to learn. In this thesis, we consider two ways of addressing these issues.

First, we experiment with a number of variants of the standard syntactic features for SRL. We include a large number of syntactic features suggested by previous work, many of which are designed to reduce the sparsity of the path feature. We also suggest several new features, most of which are designed to capture additional information about the sentence not included in the standard path feature. We build an SRL model using the best of these new and old features, and show that this model achieves performance competitive with the current state of the art.

The second method we consider is a new methodology for SRL based on labeling canonical forms. A canonical form is a representation of a verb and its arguments that is abstracted away from the syntax of the input sentence. For example, "A car hit Bob" and "Bob was hit by a car" have the same canonical form, {Verb = "hit", Deep Subject = "a car", Deep Object = "Bob"}. Labeling canonical forms makes it much easier to generalize between sentences with different syntax. To label canonical forms, we first need to extract them automatically from an input parse. We develop a system based on a combination of hand-coded rules and machine learning, which allows us to incorporate a large amount of linguistic knowledge while retaining the robustness of a machine learning system. Our system improves significantly over a strong baseline, demonstrating the viability of this new approach to SRL.

This latter method involves learning a large, complex probabilistic model. In the model we present, exact learning is tractable, but there are several natural extensions of the model for which exact learning is not possible. This is quite a general issue: in many application domains, we would like to use probabilistic models that cannot be learned exactly. We propose a new method for learning these kinds of models based on contrastive objectives. The main idea is to learn by comparing only a few possible values of the model, instead of all possible values. This method generalizes a standard learning method, pseudo-likelihood, and is closely related to another, contrastive divergence. Previous work has mostly focused on comparing nearby sets of values; we focus on non-local contrastive objectives, which compare arbitrary sets of values.
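
To make the comparison idea concrete, here is a minimal, hypothetical sketch of a contrastive log-likelihood for a log-linear model: the observed value is normalized only against a small contrast set rather than against every possible assignment. It illustrates the general idea stated above, not the specific objectives or feature functions used in the thesis.

```python
import math

def contrastive_log_likelihood(weights, feature_fn, observed, contrast_set):
    """log p(observed | contrast_set) under a log-linear model.

    contrast_set is a small collection of candidate values containing
    `observed`; normalizing over it instead of over all possible values
    is what makes the objective contrastive.
    """
    def score(y):
        return sum(weights.get(f, 0.0) * v for f, v in feature_fn(y).items())
    log_z = math.log(sum(math.exp(score(y)) for y in contrast_set))
    return score(observed) - log_z

# Toy demo with indicator features over three candidate values.
weights = {"is_a": 1.5, "is_b": 0.5}
feature_fn = lambda y: {"is_" + y: 1.0}
print(contrastive_log_likelihood(weights, feature_fn, "a", ["a", "b", "c"]))
```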
We prove several theoretical results about our model, showing that contrastive objectives attempt to enforce probability-ratio constraints between the compared values. Based on this insight, we suggest several methods for constructing contrastive objectives, including contrastive constraint generation (CCG), a cutting-plane-style algorithm that iteratively builds a good contrastive objective by finding high-scoring values. We evaluate CCG on a machine vision task, showing that it significantly outperforms pseudo-likelihood and contrastive divergence, as well as a state-of-the-art max-margin cutting-plane algorithm.
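
Circling back to the canonical-form representation used in the SRL part of this thesis, the toy sketch below shows how an active clause and its passive counterpart collapse to the same {Verb, Deep Subject, Deep Object} triple. The real extractor combines hand-coded rules over parse trees with machine learning; the pre-analyzed clause passed in here is a hypothetical stand-in.

```python
def canonical_form(verb, surface_subject, other_np, voice):
    """other_np is the direct object (active) or the by-phrase agent (passive)."""
    if voice == "passive":
        # In a passive clause the surface subject is the deep object,
        # and the by-phrase (if present) supplies the deep subject.
        return {"Verb": verb, "Deep Subject": other_np, "Deep Object": surface_subject}
    return {"Verb": verb, "Deep Subject": surface_subject, "Deep Object": other_np}

# Both surface forms yield {'Verb': 'hit', 'Deep Subject': 'a car', 'Deep Object': 'Bob'}
print(canonical_form("hit", "a car", "Bob", voice="active"))    # "A car hit Bob"
print(canonical_form("hit", "Bob", "a car", voice="passive"))   # "Bob was hit by a car"
```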


The Oxford Handbook of Computational Linguistics
Author: Ruslan Mitkov
Publisher: Oxford University Press
Total Pages: 808
Release: 2004
Genre: Computers
ISBN: 019927634X

This handbook of computational linguistics, written for academics, graduate students and researchers, provides a state-of-the-art reference to one of the most active and productive fields in linguistics.


The Oxford Handbook of Computational Linguistics
Author: Ruslan Mitkov
Publisher: Oxford University Press
Total Pages: 1377
Release: 2022-03-09
Genre:
ISBN: 0199573697

Ruslan Mitkov's highly successful Oxford Handbook of Computational Linguistics has been substantially revised and expanded in this second edition. Alongside updated accounts of the topics covered in the first edition, it includes 17 new chapters on subjects such as semantic role-labelling, text-to-speech synthesis, translation technology, opinion mining and sentiment analysis, and the application of Natural Language Processing in educational and biomedical contexts, among many others. The volume is divided into four parts that examine, respectively: the linguistic fundamentals of computational linguistics; the methods and resources used, such as statistical modelling, machine learning, and corpus annotation; key language processing tasks including text segmentation, anaphora resolution, and speech recognition; and the major applications of Natural Language Processing, from machine translation to author profiling. The book will be an essential reference for researchers and students in computational linguistics and Natural Language Processing, as well as those working in related industries.


Advanced Intelligent Computing Theories and Applications
Author: De-Shuang Huang
Publisher: Springer
Total Pages: 802
Release: 2015-08-12
Genre: Computers
ISBN: 3319220535

This book - in conjunction with the double volume LNCS 9225-9226 - constitutes the refereed proceedings of the 11th International Conference on Intelligent Computing, ICIC 2015, held in Fuzhou, China, in August 2015. The total of 191 full and 42 short papers presented in the three ICIC 2015 volumes was carefully reviewed and selected from 671 submissions. Original contributions related to this theme were especially solicited, including theories, methodologies, and applications in science and technology. This year, the conference concentrated mainly on machine learning theory and methods, soft computing, image processing and computer vision, knowledge discovery and data mining, natural language processing and computational linguistics, intelligent control and automation, intelligent communication networks and web applications, bioinformatics theory and methods, healthcare and medical methods, and information security.


Computational Linguistics and Intelligent Text Processing
Author: Alexander Gelbukh
Publisher: Springer
Total Pages: 554
Release: 2014-04-18
Genre: Computers
ISBN: 3642549063

This two-volume set, consisting of LNCS 8403 and LNCS 8404, constitutes the thoroughly refereed proceedings of the 14th International Conference on Intelligent Text Processing and Computational Linguistics, CICLing 2014, held in Kathmandu, Nepal, in April 2014. The 85 revised papers presented together with 4 invited papers were carefully reviewed and selected from 300 submissions. The papers are organized in the following topical sections: lexical resources; document representation; morphology, POS-tagging, and named entity recognition; syntax and parsing; anaphora resolution; recognizing textual entailment; semantics and discourse; natural language generation; sentiment analysis and emotion recognition; opinion mining and social networks; machine translation and multilingualism; information retrieval; text classification and clustering; text summarization; plagiarism detection; style and spelling checking; speech processing; and applications.


Semantic Role Labeling Using Lexicalized Tree Adjoining Grammars
Author: Yudong Liu
Publisher:
Total Pages: 0
Release: 2009
Genre: Computational linguistics
ISBN:

The predicate-argument structure (PAS) of a natural language sentence is a useful representation that can support a deeper analysis of the underlying meaning of the sentence or be used directly in various natural language processing (NLP) applications. The task of semantic role labeling (SRL) is to identify the predicate-argument structures and label the relations between the predicate and each of its arguments. Researchers have been studying SRL as a machine learning problem for the past six years, since large-scale semantically annotated corpora such as FrameNet and PropBank were released to the research community. Lexicalized Tree Adjoining Grammars (LTAGs), a tree-rewriting formalism, are often a convenient representation for capturing the locality of predicate-argument relations. Our work in this thesis focuses on the development and learning of state-of-the-art discriminative SRL systems with LTAGs.

Our contributions are as follows. We apply a variant of the LTAG formalism called LTAG-spinal, together with the associated LTAG-spinal Treebank (both created by Libin Shen), to the SRL task. Predicate-argument relations that are either implicit in or absent from the original Penn Treebank are made explicit and accessible in the LTAG-spinal Treebank, which we show to be a useful resource for SRL. We propose the use of LTAGs as an important additional source of features for the SRL task; our experiments show that, compared with the best-known feature set used in state-of-the-art SRL systems, LTAG-based features can improve SRL performance significantly. We also treat multiple LTAG derivation trees as latent features for SRL and introduce a novel learning framework, Latent Support Vector Machines (LSVMs), to the SRL task using these latent features. This method significantly outperforms state-of-the-art SRL systems. In addition, we adapt an SRL framework to a real-world ternary relation extraction task in the biomedical domain, where SRL-related features significantly improve performance over a system using only shallow word-based features.
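
A small, hypothetical sketch of the latent-variable prediction rule behind the LSVM idea mentioned above: with several candidate LTAG derivation trees treated as latent structures, each role label is scored by the best derivation supporting it. The feature function, label set, and derivation candidates are illustrative stand-ins, not the thesis's implementation.

```python
def lsvm_predict(weights, derivations, labels, feature_fn):
    """Pick the role label whose best-scoring latent LTAG derivation is highest."""
    def score(label, derivation):
        return sum(weights.get(f, 0.0) * v
                   for f, v in feature_fn(label, derivation).items())
    best_label, best_score = None, float("-inf")
    for label in labels:
        s = max(score(label, d) for d in derivations)  # max over latent derivations
        if s > best_score:
            best_label, best_score = label, s
    return best_label
```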