Name: ollama
Repository: ollama/ollama
Registry: https://hub.docker.com/r/ollama/ollama/
Network: bridge
Shell: bash
Privileged: false
Support: https://hub.docker.com/r/ollama/ollama/
Overview: The easiest way to get up and running with large language models locally.
Category: AI: Other:
WebUI: http://[IP]:[PORT:11434]/
TemplateURL: https://raw.githubusercontent.com/Joly0/docker-templates/main/templates/ollama.xml
Icon: https://ollama.ai/public/ollama.png
ExtraParams: --gpus=all
Requires: **Nvidia Driver plugin** (Nvidia support)
AppData (host path): /mnt/user/appdata/ollama
Port: 11434
Config value: * (unlabeled; likely an environment variable such as OLLAMA_ORIGINS — see the template XML at the TemplateURL above for the authoritative field)