Predicting Information Retrieval Performance

Author: Robert M. Losee
Publisher: Springer Nature
Total Pages: 59
Release: 2022-05-31
Genre: Computers
ISBN: 303102317X

Information Retrieval performance measures are usually retrospective in nature, representing the effectiveness of an experimental process. In the sciences, however, phenomena may be predicted, given the parameter values of the system. After developing a measure that can be applied retrospectively or predicted, the book shows how the performance of a system using a single query term can be predicted under several different types of probabilistic distributions. Information Retrieval performance can also be predicted with multiple terms, where statistical dependence between terms exists and is understood. These predictive models may be applied to realistic problems, and the results then used to validate the accuracy of the methods. Metadata and index labels can be examined to determine whether such features should be used in particular cases, and linguistic information, such as part-of-speech tags, can increase the discrimination value of existing terminology and can likewise be studied predictively. This work provides performance measures that may be used predictively, gives means of predicting them both for the simple case of a single query term and for multiple terms, and suggests methods for applying these formulae.


Predicting Information Retrieval Performance

Author: Robert M. Losee
Publisher: Springer
Total Pages: 59
Release: 2018-12-19
Genre: Computers
ISBN: 9783031011894


Retrieval Performance Prediction and Document Quality

Author:
Publisher:
Total Pages: 150
Release: 2007
Genre:
ISBN:

The ability to predict retrieval performance has potential applications in many important IR (Information Retrieval) areas. In this thesis, we study the problem of predicting retrieval quality at the granularity of both the retrieved document set as a whole and individual retrieved documents. At the level of ranked lists of documents, we propose several novel prediction models that capture different aspects of the retrieval process that have a major impact on retrieval effectiveness. These techniques make performance prediction both effective and efficient in various retrieval settings including a Web search environment. As an application, we also provide a framework to address the problem of query expansion prediction. At the level of documents, we predict the quality of documents in the context of Web ad-hoc retrieval. We explore document features that are predictive of quality. Furthermore, we propose a document quality language model to improve retrieval effectiveness by incorporating quality information.


Estimating the Query Difficulty for Information Retrieval

Author: David Carmel
Publisher: Springer Nature
Total Pages: 77
Release: 2022-05-31
Genre: Computers
ISBN: 3031022726

Many information retrieval (IR) systems suffer from a radical variance in performance when responding to users' queries. Even for systems that succeed very well on average, the quality of the results returned for some queries is poor. It is therefore desirable that IR systems be able to identify "difficult" queries so that they can be handled appropriately. Understanding why some queries are inherently more difficult than others is essential for IR, and a good answer to this question will help search engines reduce the variance in their performance and better serve their customers' needs. Estimating query difficulty is an attempt to quantify the quality of the search results retrieved for a query from a given collection of documents. This book discusses the reasons search engines fail on some queries, reviews recent approaches for estimating query difficulty in the IR field, describes a common methodology for evaluating the prediction quality of those estimators, and reports experiments with several of the predictors applied by various IR methods over several TREC benchmarks. Finally, it discusses potential applications that can exploit query difficulty estimators by handling each query individually and selectively, based upon its estimated difficulty. Table of Contents: Introduction - The Robustness Problem of Information Retrieval / Basic Concepts / Query Performance Prediction Methods / Pre-Retrieval Prediction Methods / Post-Retrieval Prediction Methods / Combining Predictors / A General Model for Query Difficulty / Applications of Query Difficulty Estimation / Summary and Conclusions
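
The pre-retrieval predictors surveyed in the book estimate query difficulty from collection statistics alone, before any documents are retrieved. As an illustration only, and not code from the book, a minimal sketch of one common predictor of this kind, the average inverse document frequency of the query terms, might look as follows; the corpus, document frequencies, and queries here are hypothetical.

    import math
    from collections import Counter

    def avg_idf(query_terms, doc_freq, num_docs):
        """Pre-retrieval difficulty score: mean IDF of the query terms.

        High average IDF means the query terms are rare and discriminative,
        which usually signals an easier query; low values hint at a harder one.
        Unseen terms get a small smoothed document frequency so the log stays finite.
        """
        idfs = [math.log(num_docs / (doc_freq.get(t, 0) + 0.5)) for t in query_terms]
        return sum(idfs) / len(idfs) if idfs else 0.0

    # Toy collection statistics (hypothetical documents, for illustration only).
    corpus = [
        "query difficulty estimation for information retrieval",
        "retrieval performance prediction with language models",
        "web search engines and ranked retrieval",
    ]
    doc_freq = Counter(term for doc in corpus for term in set(doc.split()))

    print(avg_idf(["query", "difficulty"], doc_freq, len(corpus)))  # rarer terms, higher score
    print(avg_idf(["retrieval"], doc_freq, len(corpus)))            # common term, lower score

Post-retrieval predictors, by contrast, inspect the retrieved result list itself, and are treated in the corresponding chapter listed in the table of contents above.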


Predicting Information Retrieval Performance

Author: Robert M. Losee
Publisher: Morgan & Claypool
Total Pages: 79
Release: 2018-12-19
Genre: Computers
ISBN: 9781681734743


String Processing and Information Retrieval

Author: Alberto Apostolico
Publisher: Springer Science & Business Media
Total Pages: 345
Release: 2004-09-23
Genre: Computers
ISBN: 3540232109

This book constitutes the refereed proceedings of the 11th International Conference on String Processing and Information Retrieval, SPIRE 2004, held in Padova, Italy, in October 2004. The 28 revised full papers and 16 revised short papers presented were carefully reviewed and selected from 123 submissions. The papers address current issues in string pattern searching and matching, string discovery, data compression, data mining, text mining, machine learning, information retrieval, digital libraries, and applications in various fields, such as bioinformatics, speech and natural language processing, Web links and communities, and multilingual data.


Simulating Information Retrieval Test Collections

Author: David Hawking
Publisher: Morgan & Claypool Publishers
Total Pages: 186
Release: 2020-09-04
Genre: Science
ISBN: 1681739585

Simulated test collections may find application in situations where real datasets cannot easily be accessed due to confidentiality concerns or practical inconvenience. They can potentially support Information Retrieval (IR) experimentation, tuning, validation, performance prediction, and hardware sizing. Naturally, the accuracy and usefulness of results obtained from a simulation depend upon the fidelity and generality of the models which underpin it. The fidelity of emulation of a real corpus is likely to be limited by the requirement that confidential information in the real corpus should not be able to be extracted from the emulated version. We present a range of methods exploring trade-offs between emulation fidelity and degree of preservation of privacy. We present three different simple types of text generator which work at a micro level: Markov models, neural net models, and substitution ciphers. We also describe macro level methods where we can engineer macro properties of a corpus, giving a range of models for each of the salient properties: document length distribution, word frequency distribution (for independent and non-independent cases), word length and textual representation, and corpus growth. We present results of emulating existing corpora and for scaling up corpora by two orders of magnitude. We show that simulated collections generated with relatively simple methods are suitable for some purposes and can be generated very quickly. Indeed it may sometimes be feasible to embed a simple lightweight corpus generator into an indexer for the purpose of efficiency studies. Naturally, a corpus of artificial text cannot support IR experimentation in the absence of a set of compatible queries. We discuss and experiment with published methods for query generation and query log emulation. We present a proof-of-the-pudding study in which we observe the predictive accuracy of efficiency and effectiveness results obtained on emulated versions of TREC corpora. The study includes three open-source retrieval systems and several TREC datasets. There is a trade-off between confidentiality and prediction accuracy and there are interesting interactions between retrieval systems and datasets. Our tentative conclusion is that there are emulation methods which achieve useful prediction accuracy while providing a level of confidentiality adequate for many applications.
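
Of the micro-level generators mentioned above, the Markov model is the simplest to picture. The following is only a rough sketch of the general idea, not the authors' implementation: a first-order, word-level Markov chain trained on a hypothetical seed text and then sampled to emit synthetic text.

    import random
    from collections import defaultdict

    def train_bigram_model(text):
        """Map each word to the list of words observed to follow it in the seed text."""
        words = text.split()
        model = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
        return model

    def generate(model, start, length=12, seed=0):
        """Emit a synthetic word sequence by repeatedly sampling a successor."""
        rng = random.Random(seed)
        word, output = start, [start]
        for _ in range(length - 1):
            successors = model.get(word)
            word = rng.choice(successors) if successors else rng.choice(list(model))
            output.append(word)
        return " ".join(output)

    # Hypothetical seed text standing in for a real (possibly confidential) corpus.
    seed_text = (
        "information retrieval systems index documents and answer queries "
        "retrieval systems rank documents by estimated relevance to queries"
    )
    print(generate(train_bigram_model(seed_text), "retrieval"))

Emulating the macro-level properties described above (document length and word frequency distributions, corpus growth) would instead require fitting those distributions to the source corpus and sampling from the fitted models rather than from raw word transitions.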


Estimating the Query Difficulty for Information Retrieval

Author: David Carmel
Publisher: Morgan & Claypool Publishers
Total Pages: 77
Release: 2010
Genre: Computers
ISBN: 160845357X


Quality Issues in the Management of Web Information

Author: Gabriella Pasi
Publisher: Springer Science & Business Media
Total Pages: 210
Release: 2013-04-17
Genre: Technology & Engineering
ISBN: 3642376886

This research volume presents a sample of recent contributions on the issue of quality assessment for Web-based information in the context of information access, retrieval, and filtering systems. The advent of the Web and the uncontrolled generation of documents have raised the problem of assessing the quality of information on the Web, taking into account the nature of documents (texts, images, video, sounds, and so on), the genre of documents (news, geographic information, ontologies, medical records, product records, and so on), the reputation of information sources and sites, and, last but not least, the actions performed on documents (content indexing, retrieval and ranking, collaborative filtering, and so on). The volume constitutes a compendium of heterogeneous approaches and sample applications focusing on specific aspects of quality assessment for Web-based information, intended for researchers, PhD students, and practitioners working in Web information retrieval and filtering, Web information mining, and information quality representation and management.


Information Retrieval Technology

Author: Guido Zuccon
Publisher: Springer
Total Pages: 458
Release: 2016-01-21
Genre: Computers
ISBN: 3319289403

This book constitutes the refereed proceedings of the 11th Asia Information Retrieval Societies Conference, AIRS 2015, held in Brisbane, QLD, Australia, in December 2015. The 29 full papers, presented together with 11 short and demonstration papers and the abstracts of 2 keynote lectures, were carefully reviewed and selected from 92 submissions. The final programme of AIRS 2015 was organized into the following tracks: Efficiency, Graphs, Knowledge Bases and Taxonomies, Recommendation, Twitter and Social Media, Web Search, Text Processing, Understanding and Categorization, Topics and Models, Clustering, Evaluation, and Social Media and Recommendation.