Evaluating information retrieval


Contents

Note

  • end-to-end model evaluation is challenging, unless we expect a single short answer. Evaluate separately:
    • information extraction (did the system find the correct information?)
    • reasoning (Given correct information, did the system make the right conclusions?)
    • output generation (Was the final response clear and actionable?)
    • some domains are easier than others
      • coding: does the code pass tests?
    • user feedback or the way users interact with the results can be the ultimate metric
  • Implement evaluation before increasing system complexity; otherwise you can’t monitor performance progression.
  • Separate retrieval evals from generation evals and focus on the retrieval part first
    • retrieval evals are cheap, generation evals are expensive
    • generation comes later in the pipeline and assumes the retrieval is correct
  • when evaluating the performance of the system you have, don’t forget to record what is missing
    • Inventory issues - lack of data to fulfill certain user requests. A better algorithm can’t help with that ⇒ design a fallback scenario or a baseline algorithm
    • Capability issues - functionality gaps where the system can’t handle certain types of queries or filters.
    • Examine the false negatives of your retrieval: what should have been retrieved but wasn’t.
  • repeat your test questions multiple times and calculate answer consistency and the probability of being accurate (see the consistency sketch after this list)
  • Retrieval sufficiency - evaluate whether the retrieved docs provide enough information to completely answer the query, not just whether each of them is relevant.
  • RAG impact is dependent on the quality of retrieved documents, which in turn is evaluated by:
    • relevance: how good the system is at ranking relevant documents higher and irrelevant documents lower
    • information density: if two documents are equally relevant, we should prefer one that’s more concise and has fewer extraneous details
    • level of detail
  • Group your evaluation set queries by difficulty into N groups (e.g. 5 groups of 20 queries) and only start evaluating the next group once you reach the desired accuracy or recall on the simpler questions.
  • Know what you improve ^86c765
    • Prioritize high-volume queries with low result satisfaction
    • Collect user feedback in real time with the simplest methods available to identify emerging issues and query trends.
    • query classification and clustering
      • Use few-shot classifiers to segment queries into categories or apply labels such as time-sensitive, financial, multi-step (see the classification sketch after this list).
  • Build your own relevance dataset
    • Better data is better than better models
    • public benchmarks like MTEB are rarely as relevant as an application-specific dataset
      • too generic, even when specialized in a topic
      • too clean and correct
      • Data may have been seen by the embedding model during pre-training
  • statistically validate potential improvements to quantify confidence in performance differences and avoid investing in unreliable ones (see the statistical sketch after this list)
    • create a @dataclass ExperimentConfig plus functions to sample from the available data and calculate metrics
    • bootstrap N samples for each RAG configuration and calculate confidence intervals
      • plot Recall@k for different k for pairs of ExperimentConfigs
        • if the confidence intervals are too large - increase N
        • if the confidence intervals of two configurations overlap - the difference in performance may be due to chance
    • a t-test is another way to tell whether the difference in the means of the two configurations is due to chance
      • use the distribution of means from bootstrapping, not the means themselves
      • a high p-value and low t-statistic point to NO statistical significance
  • if you have a number of tools for different search use cases (somewhat similar to intent recognition), evaluate them independently
    • ask the model to make a plan for which tools to use and track plan acceptance rates by users
  • LLM-as-a-judge
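
A minimal sketch of the answer-consistency check from the list above: re-run each test question several times and measure how often the answers agree. The `ask` callable and the exact-match normalization are assumptions; plug in your own pipeline call and comparison.

```python
from collections import Counter
from typing import Callable

def consistency(ask: Callable[[str], str], question: str, n_runs: int = 5) -> float:
    """Share of runs that agree with the most common answer (exact match after normalization)."""
    answers = [ask(question).strip().lower() for _ in range(n_runs)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / n_runs

def accuracy_probability(ask: Callable[[str], str], question: str,
                         expected: str, n_runs: int = 5) -> float:
    """Empirical probability that a single run returns the expected answer."""
    hits = sum(ask(question).strip().lower() == expected.strip().lower()
               for _ in range(n_runs))
    return hits / n_runs

# usage: consistency(my_rag.answer, "What is the refund window?", n_runs=10)
```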
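
A sketch of the few-shot query-classification idea; `complete` is a stand-in for whichever LLM client you use, and the labels in the prompt are examples, not a fixed taxonomy.

```python
from typing import Callable

FEW_SHOT_PROMPT = """Label the query with one or more of:
time-sensitive, financial, multi-step, other.

Query: "What was our Q3 revenue vs Q2?"   Labels: financial, multi-step
Query: "Is the store open right now?"     Labels: time-sensitive
Query: "Who wrote the onboarding guide?"  Labels: other

Query: "{query}"  Labels:"""

def classify_query(complete: Callable[[str], str], query: str) -> list[str]:
    """Few-shot label assignment; `complete` sends the prompt to an LLM and returns its text."""
    raw = complete(FEW_SHOT_PROMPT.format(query=query))
    return [label.strip() for label in raw.split(",") if label.strip()]
```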

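A sketch of the statistical-validation recipe above, assuming you supply a judged dataset (query plus relevant doc ids) and a `retrieve(config, query, k)` function; the ExperimentConfig fields and helper names are illustrative.

```python
import random
from dataclasses import dataclass
from statistics import mean
from typing import Callable

from scipy import stats  # used for the t-test at the bottom

@dataclass(frozen=True)
class ExperimentConfig:
    # illustrative fields; use whatever parameters you actually vary
    embedding_model: str
    chunk_size: int
    top_k: int

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    return len(set(retrieved[:k]) & relevant) / max(len(relevant), 1)

def bootstrap_recall(config: ExperimentConfig,
                     dataset: list[dict],  # items: {"query": str, "relevant_ids": list[str]}
                     retrieve: Callable[[ExperimentConfig, str, int], list[str]],
                     k: int, n_boot: int = 1000) -> list[float]:
    """Resample queries with replacement and return the distribution of mean Recall@k."""
    per_query = [recall_at_k(retrieve(config, ex["query"], k), set(ex["relevant_ids"]), k)
                 for ex in dataset]
    return [mean(random.choices(per_query, k=len(per_query))) for _ in range(n_boot)]

def confidence_interval(samples: list[float], alpha: float = 0.05) -> tuple[float, float]:
    """Percentile bootstrap confidence interval."""
    ordered = sorted(samples)
    return (ordered[int(alpha / 2 * len(ordered))],
            ordered[int((1 - alpha / 2) * len(ordered)) - 1])

# Usage (config_a, config_b, dataset and retrieve are yours):
# boot_a = bootstrap_recall(config_a, dataset, retrieve, k=10)
# boot_b = bootstrap_recall(config_b, dataset, retrieve, k=10)
# print(confidence_interval(boot_a), confidence_interval(boot_b))  # overlap -> maybe chance
# t_stat, p_value = stats.ttest_ind(boot_a, boot_b)                # high p -> not significant
```
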
Build your own relevance dataset

Real data

  • If available, sample user queries and their outputs from a production RAG and invest time in ranking the results yourself or with the help of an LLM
    • fancy way - use tracing tools like LangSmith or Logfire
    • or simply save user\session info, query-answer pairs, and retrieved chunks to Postgres (a minimal schema sketch follows this list)
  • collect unstructured feedback (comments, issue reports)
    • hierarchical clustering to identify patterns and create a taxonomy of categories (see the clustering sketch below)
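
A minimal sketch of the “just save it to Postgres” option, using psycopg2; the table layout is an assumption, adjust it to whatever metadata you track.

```python
import psycopg2                       # assumed driver; any Postgres client works
from psycopg2.extras import Json

DDL = """
CREATE TABLE IF NOT EXISTS rag_logs (
    id          BIGSERIAL PRIMARY KEY,
    session_id  TEXT,
    user_id     TEXT,
    query       TEXT NOT NULL,
    answer      TEXT,
    chunks      JSONB,                -- retrieved chunk ids, text and scores
    created_at  TIMESTAMPTZ DEFAULT now()
);
"""

def log_interaction(conn, session_id: str, user_id: str,
                    query: str, answer: str, chunks: list[dict]) -> None:
    """Append one query-answer pair plus its retrieved chunks."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO rag_logs (session_id, user_id, query, answer, chunks) "
            "VALUES (%s, %s, %s, %s, %s)",
            (session_id, user_id, query, answer, Json(chunks)),
        )
    conn.commit()

# conn = psycopg2.connect("dbname=rag")   # run DDL once, then call log_interaction per request
```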

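For the hierarchical-clustering idea, a sketch using sentence-transformers embeddings and scipy’s agglomerative clustering; the model name and distance threshold are assumptions.

```python
from scipy.cluster.hierarchy import fcluster, linkage
from sentence_transformers import SentenceTransformer  # assumed embedding library

def cluster_feedback(comments: list[str], distance_threshold: float = 1.0) -> dict[int, list[str]]:
    """Group free-text feedback into rough clusters to seed a taxonomy."""
    model = SentenceTransformer("all-MiniLM-L6-v2")           # model choice is an assumption
    embeddings = model.encode(comments, normalize_embeddings=True)
    tree = linkage(embeddings, method="ward")                 # agglomerative clustering
    labels = fcluster(tree, t=distance_threshold, criterion="distance")
    clusters: dict[int, list[str]] = {}
    for label, comment in zip(labels, comments):
        clusters.setdefault(int(label), []).append(comment)
    return clusters  # read a few comments per cluster and name the categories by hand
```
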
Synthetic data

Evaluation datasets \ benchmarks

Types of experiments

  • Prioritize experiments based on potential impact and resources; log everything and present it in a tidy format

System architecture decisions

Other

  • invariance testing (see the sketch after this list)
    • the system’s output shouldn’t change due to
      • rephrasing
      • shorter\longer queries, abbreviations, punctuation, grammar mistakes
      • change of irrelevant details (names, genders, etc.)
  • Experiment with top-K retrieval by cosine similarity vs keeping the top N after a reranker, to see whether you get better recall with N << K
  • compare latency trade-offs
  • Head (frequent) queries vs tail queries
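
A sketch of an invariance check for retrieval: lightly perturbed queries should return roughly the same documents. The hand-rolled perturbations, the `retrieve` callable, and the 0.8 threshold are assumptions; LLM-generated paraphrases would cover rephrasing better.

```python
from typing import Callable

def variants(query: str) -> list[str]:
    """Cheap, hand-rolled perturbations; replace with LLM paraphrases if available."""
    return [
        query,
        query.lower(),
        query.rstrip("?").strip(),           # punctuation change
        f"please tell me: {query}",          # longer phrasing
    ]

def invariance_score(retrieve: Callable[[str], list[str]], query: str, k: int = 10) -> float:
    """Mean Jaccard overlap of top-k doc ids between the original query and each variant."""
    base = set(retrieve(query)[:k])
    overlaps = []
    for v in variants(query)[1:]:
        got = set(retrieve(v)[:k])
        union = base | got
        overlaps.append(len(base & got) / len(union) if union else 1.0)
    return sum(overlaps) / len(overlaps)

# flag queries with invariance_score < 0.8 for inspection (threshold is an assumption)
```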

Metrics

ML metric

Technical

Industrial

  • end-user engagement: clicks, adds, dwell time
  • human-involved evaluation, but not end-users:
    • for instance, an AI system generates emails to prospective buyers with several price options, conditional discounts, and other upselling tricks. Before sending, these emails are reviewed by salespeople. If they make corrections\edits to the email, we consider that something went wrong in the model’s reasoning and analyze the pitfall.
  • satisfaction feedback
  • conversion, retention, engagement
  • ratio of FAQ requests forwarded to a live agent
  • Revenue
  • Multi-objective ranking, not just optimizing relevance.

Resources


table file.inlinks, file.outlinks from [[]] and !outgoing([[]])  AND -"Changelog"