Information Access Evaluation. Multilinguality, Multimodality, and Visual Analytics: Third International Conference of the CLEF Initiative, CLEF 2012, Rome, Italy, September 17-20, 2012, Proceedings
This book constitutes the proceedings of the Third International Conference of the CLEF Initiative, CLEF 2012, held in Rome, Italy, in September 2012. The 14 papers and 3 poster abstracts presented were carefully reviewed and selected for inclusion in this volume. Furthermore, the book contains 2 k...
| Classification: | Electronic Book |
|---|---|
| Format: | Electronic eBook |
| Language: | English |
| Published: | Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 2012 |
| Edition: | 1st ed. 2012 |
| Series: | Information Systems and Applications, incl. Internet/Web, and HCI ; 7488 |
| Online Access: | Full Text |
Table of Contents:
- Analysis and Refinement of Cross-Lingual Entity Linking
- Seven Years of INEX Interactive Retrieval Experiments - Lessons and Challenges
- Bringing the Algorithms to the Data: Cloud-Based Benchmarking for Medical Image Analysis
- Going beyond CLEF-IP: The 'Reality' for Patent Searchers?
- MusiClef: Multimodal Music Tagging Task
- Information Access:
- Generating Pseudo Test Collections for Learning to Rank Scientific Articles
- Effects of Language and Topic Size in Patent IR: An Empirical Study
- Cross-Language High Similarity Search Using a Conceptual Thesaurus
- The Appearance of the Giant Component in Descriptor Graphs and Its Application for Descriptor Selection
- Hidden Markov Model for Term Weighting in Verbose Queries
- Evaluation Methodologies and Infrastructure:
- DIRECTions: Design and Specification of an IR Evaluation Infrastructure
- Penalty Functions for Evaluation Measures of Unsegmented Speech Retrieval
- Cumulated Relative Position: A Metric for Ranking Evaluation
- Better than Their Reputation? On the Reliability of Relevance Assessments with Students
- Comparing IR System Components Using Beanplots
- Language Independent Query Focused Snippet Generation
- A Test Collection to Evaluate Plagiarism by Missing or Incorrect References.