LLM

scroll ↓ to Resources

Contents

Note

LLM training stages

Libraries for finetuning

How to evaluate LLMs?

Model evaluation is usually done with multiple metrics and approaches, depending on the task, the available labels, and resources.
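One common task-level metric is exact-match accuracy: the fraction of model answers that match the reference after light normalization. A minimal sketch (the function name and normalization are illustrative, not a specific library's API):

```python
def exact_match(preds, refs):
    """Fraction of predictions matching the reference exactly,
    after stripping whitespace and lowercasing."""
    norm = lambda s: s.strip().lower()
    hits = sum(norm(p) == norm(r) for p, r in zip(preds, refs))
    return hits / len(refs)

score = exact_match(["Paris ", "london"], ["paris", "Berlin"])  # 0.5
```

In practice this is paired with softer metrics (F1, BLEU/ROUGE, or LLM-as-judge) when answers can be phrased in multiple ways.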

How to improve LLMs without fine-tuning?

Prompting

  • Prompting is the most popular, cheapest, and fastest method of improving LLM output quality.
  • Reasoning techniques often fall under prompting, but they are so effective that they have developed into a multitude of methods and combinations.
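A basic prompting pattern is few-shot prompting: prepend a handful of input/output examples before the new query so the model imitates the format. A minimal sketch (the helper name and prompt layout are illustrative):

```python
def few_shot_prompt(examples, query):
    """Build a few-shot prompt: each example pairs an input with its
    desired output; the prompt ends with the new query left open."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

prompt = few_shot_prompt([("2 + 2", "4"), ("5 + 1", "6")], "3 + 3")
```

Chain-of-thought prompting follows the same idea, with each example's output showing intermediate reasoning steps before the final answer.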

Choose the best sampling strategy

  • Under the hood, an LLM outputs a probability distribution over the whole token vocabulary; the sampling strategy defines how to pick one token from all the probable ones.
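Two common knobs are temperature (rescales the logits before softmax: below 1 sharpens the distribution, above 1 flattens it) and top-k (keeps only the k most probable tokens before sampling). A minimal sketch over raw logits:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None):
    """Pick a token id from raw logits using temperature and top-k."""
    # sort token ids by logit, descending, and optionally truncate to top_k
    indexed = sorted(enumerate(logits), key=lambda p: p[1], reverse=True)
    if top_k is not None:
        indexed = indexed[:top_k]
    ids, vals = zip(*indexed)
    # softmax with temperature; subtract the max for numerical stability
    m = max(vals)
    weights = [math.exp((v - m) / temperature) for v in vals]
    return random.choices(ids, weights=weights, k=1)[0]

logits = [2.0, 0.5, -1.0, 1.5]
token = sample_token(logits, temperature=0.7, top_k=3)
greedy = sample_token(logits, top_k=1)  # top_k=1 is greedy decoding: always the argmax
```

Setting `top_k=1` reduces sampling to greedy decoding, while higher temperatures trade coherence for diversity; nucleus (top-p) sampling works the same way but truncates by cumulative probability instead of a fixed count.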

Answer validation

Resources


table file.tags from [[]] and !outgoing([[]])  AND -"Changelog"