The PDF of the full #IRRJ issue, Volume 1, Number 2 (December 2025), is now available online at: https://irrj.org/issue/view/vol1no2 #InformationRetrieval
In the second issue of #IRRJ, Paul Kantor writes the editorial, arguing for a more critical adoption of generative AI in information retrieval. He puts his concerns under three distinct headings: consistency, confidence, and completeness. #GenAI, #GenerativeAI, #InformationRetrieval
Published at #IRRJ: "Exploring Embedding Interpretability by Correspondences Between Topic Models and Text Embeddings" by Meng Yuan, Lida Rashidi, and Justin Zobel. #InformationRetrieval, #EmbeddingInterpretability, #Explanability, #TopicModelling
Published at #IRRJ: "Emancipatory Information Retrieval" by Bhaskar Mitra. #InformationRetrieval, #Society, #EmancipatoryPraxis, #TechnologyAndPower
Published at #IRRJ: "Emancipatory Information Retrieval" by Bhaskar Mitra. #InformationRetrieval, #Society, #EmancipatoryPraxis, #TechnologyAndPower
Emancipatory Information Retrieval
Published at #IRRJ: "Exploring Embedding Interpretability by Correspondences Between Topic Models and Text Embeddings" by Meng Yuan, Lida Rashidi, and Justin Zobel. #InformationRetrieval, #EmbeddingInterpretability, #Explanability, #TopicModelling
Exploring Embedding Interpretability by Correspondences Between Topic Models and Text Embeddings
"On the Theoretical Limitations of Embedding-Based Retrieval"
This work shows the limits of vector embedding models under the existing single-vector paradigm. If a retrieval task's query–document relevance pattern has high sign rank, then no low-dimensional embedding (too few dimensions to separate the pattern) can reproduce all of those relevance judgments correctly; a toy sketch follows below.
https://arxiv.org/abs/2508.21038
Sign rank explainer:
https://www.wikigen.ai/wiki/What%20is%20the%20sign%20rank%20of%20a%20matrix%20and%20how%20does%20that%20relate%20to%20information%20retrieval%20tasks?hash=228197cf&style=technical
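To make the sign-rank intuition concrete, here is a minimal sketch of my own (not the paper's construction, and the names realizable_in_1d, easy, and hard are hypothetical): rows are queries, columns are documents, +1 means relevant, -1 means not relevant, and a one-dimensional embedding scores each pair as q_i * d_j. Since only the signs of q_i and d_j matter in one dimension, an exhaustive check over sign assignments decides whether a pattern is realizable.

# Toy illustration: a +/-1 relevance matrix can be realized by 1-d embeddings
# exactly when some sign assignment to queries and documents reproduces every
# entry as a product of signs. Enumerating all assignments settles the question.
from itertools import product

def realizable_in_1d(relevance):
    """True iff sign(q_i * d_j) == relevance[i][j] for some 1-d embeddings."""
    n_queries, n_docs = len(relevance), len(relevance[0])
    for q_signs in product((+1, -1), repeat=n_queries):
        for d_signs in product((+1, -1), repeat=n_docs):
            if all(q_signs[i] * d_signs[j] == relevance[i][j]
                   for i in range(n_queries) for j in range(n_docs)):
                return True
    return False

# Rows = queries, columns = documents, +1 = relevant, -1 = not relevant.
easy = [[+1, -1],
        [+1, -1]]   # both queries prefer document 1 -> sign rank 1
hard = [[+1, -1],
        [-1, -1]]   # rows are not +/- copies of each other -> sign rank 2

print(realizable_in_1d(easy))  # True: e.g. q = (1, 1), d = (1, -1)
print(realizable_in_1d(hard))  # False: no 1-d embedding matches every sign

The hard pattern has sign rank 2, so no one-dimensional embedding gives every query–document pair a correctly signed score; the paper's argument generalizes this idea, showing that for any fixed embedding dimension there are relevance patterns whose sign rank is too high for a single-vector model to realize.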
Published at #IRRJ: "Effectiveness of In-Context Learning for Due Diligence: A Reproducibility Study of Identifying Passages for Due Diligence by Madhukar Dwivedi and Jaap Kamps. #InformationRetrieval, #LegalSearch, #DueDiligence, #PassageRetrieval
https://doi.org/10.54195/irrj.22626
Published at #IRRJ: "Effectiveness of In-Context Learning for Due Diligence: A Reproducibility Study of Identifying Passages for Due Diligence by Madhukar Dwivedi and Jaap Kamps. #InformationRetrieval, #LegalSearch, #DueDiligence, #PassageRetrieval
https://doi.org/10.54195/irrj.22626
Effectiveness of In-Context Learning for Due Diligence
"On the Theoretical Limitations of Embedding-Based Retrieval"
This work shows the limits of vector embedding models under the existing single vector paradigm. If a retrieval task (query鈥揹ocument pattern) has high sign rank, then no low-dimensional embedding (few latent pathways) can reproduce all those relevance patterns correctly.
https://arxiv.org/abs/2508.21038
Sign rank explainer:
https://www.wikigen.ai/wiki/What%20is%20the%20sign%20rank%20of%20a%20matrix%20and%20how%20does%20that%20relate%20to%20information%20retrieval%20tasks?hash=228197cf&style=technical