Sold by Sack Fachmedien

He / Tang / Ren

Information Retrieval

30th China Conference, CCIR 2024, Wuhan, China, October 18–20, 2024, Revised Selected Papers

Medium: Book
ISBN: 978-981-96-1709-8
Publisher: Springer Nature Singapore
Publication date: February 1, 2025
Delivery time: up to 10 days

This book constitutes the refereed proceedings of the 30th China Conference on Information Retrieval, CCIR 2024, held in Wuhan, China, during October 18–20, 2024.

The 11 full papers presented in this volume were carefully reviewed and selected from 26 submissions. As the flagship conference of the Chinese Information Processing Society of China (CIPS), CCIR focuses on the development of China's internet industry and provides a broad platform for the exchange of the latest academic and technological achievements in the field of information retrieval.


Product details


  • Item number: 9789819617098
  • Medium: Book
  • ISBN: 978-981-96-1709-8
  • Publisher: Springer Nature Singapore
  • Publication date: February 1, 2025
  • Language(s): English
  • Edition: 2025
  • Series: Lecture Notes in Computer Science
  • Binding: Paperback
  • Weight: 254 g
  • Pages: 149
  • Dimensions (W x H x D): 155 x 235 x 9 mm

Authors / Editors

Editors: He / Tang / Ren

Table of Contents

  • Play to Your Strengths: Collaborative Intelligence of Conventional Recommender Models and Large Language Models
  • A Dual-Aligned Model for Multimodal Recommendation
  • CASINet: A Context-Aware Social Interaction Rumor Detection Network
  • A Claim Decomposition Benchmark for Long-form Answer Verification
  • Dual-granularity Hierarchical Fusion Network for Multimodal Humor Recognition on Memes
  • Exploring the Potential of Dimension Reduction in Building Efficient Dense Retrieval Systems
  • Relation Extraction Model Based on Overlap Rules and Abductive Learning
  • Multi-task Instruction Tuning for Temporal Question Answering over Knowledge Graphs
  • On the Capacity of Citation Generation by Large Language Models
  • Are Large Language Models More Honest in Their Probabilistic or Verbalized Confidence?
  • QUITO: Accelerating Long-Context Reasoning through Query-Guided Context Compression