Turns out it's both easy and fast to run [[Large Language Models]] **locally** on your own hardware.
## Ollama
The tool [ollama](https://ollama.com/) makes it fast and easy to download open models, run them locally, and interact with them from the command line.
![[Pasted image 20250214121053.png]]
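Once a model is pulled (e.g. `ollama pull llama3.2`), ollama also serves a local REST API on port 11434, so you can script against it. A minimal Python sketch, assuming the `llama3.2` model has already been pulled:

```python
import requests

# Ask the locally running ollama server for a completion.
# Assumes `ollama pull llama3.2` has been run and the server
# is listening on its default port, 11434.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",   # any model you've pulled works here
        "prompt": "Why is the sky blue?",
        "stream": False,       # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```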
## Msty
For a more consumer-friendly option, I'm trying out [Msty](https://msty.app/), which gives you a UI for downloading and interacting with LLMs. It's actually **easier** than the already very easy ollama. It also supports [[Retrieval Augmented Generation]], which I'm testing against these very notes you're reading right now.
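I don't know how Msty wires up its RAG internally, but the basic idea is simple: embed your note chunks, retrieve the ones most similar to the question, and stuff them into the prompt. A toy sketch of that idea against ollama's embeddings endpoint (the `nomic-embed-text` model and the stand-in "notes" are placeholders, not Msty's actual pipeline):

```python
import requests

OLLAMA = "http://localhost:11434"

def embed(text: str) -> list[float]:
    # ollama's embeddings endpoint; assumes `ollama pull nomic-embed-text`.
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# Stand-ins for chunks of your notes.
notes = [
    "Ollama makes it easy to run LLMs locally.",
    "Msty adds a consumer-friendly UI on top of local models.",
]

question = "What tool gives a UI for local LLMs?"
q_vec = embed(question)

# Retrieve: pick the note chunk most similar to the question.
best = max(notes, key=lambda n: cosine(embed(n), q_vec))

# Augment + generate: prepend the retrieved context to the prompt.
prompt = f"Context:\n{best}\n\nQuestion: {question}\nAnswer:"
r = requests.post(f"{OLLAMA}/api/generate",
                  json={"model": "llama3.2", "prompt": prompt, "stream": False})
r.raise_for_status()
print(r.json()["response"])
```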
# More
## Source
- 20 minutes of experimentation
- https://ollama.com/
## Related
- [[Large Language Models]]