Retrieval Augmented Generation (RAG) is the technique of giving [[Large Language Models|LLMs]] a corpus of content to draw on when crafting their responses. This lets a model pull relevant facts from your particular context and synthesize answers from them, for example:
- Analyzing your [[Obsidian]] vault - which [[Local LLMs#Msty|Msty]] supports natively
- Analyzing a body of requirements documents from work
- Pulling out information from a particular book or series of books
The model itself isn't *changed* by the materials you provide; instead, it's as if the model has been handed a reference book it can consult at runtime to source information.
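The retrieve-then-prompt loop can be sketched in a few lines. This is a minimal illustration, not how Msty or any particular tool implements it: a toy bag-of-words similarity stands in for a real embedding model, and the document texts are made-up examples.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy "embedding": word counts. A real RAG system would call an
    # embedding model here instead.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical corpus -- e.g. notes pulled from a vault.
documents = [
    "Obsidian stores notes as plain Markdown files in a vault.",
    "RAG retrieves relevant passages and adds them to the model's prompt.",
    "Fine-tuning changes a model's weights; RAG does not.",
]

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # Stuff the retrieved passages into the prompt -- the model's weights
    # never change; it just reads the supplied context.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use the context below to answer.\nContext:\n{context}\nQuestion: {query}"

print(build_prompt("Does RAG change the model?", documents))
```

The key point the sketch shows: "indexing" your materials and injecting matches into the prompt is all that happens at answer time.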
****
# More
## Source
- Various conversations
- https://docs.msty.app/features/knowledge-stack/rag-explained
## Related
-