gpt4all
ghcr.io/huggingface/text-generation-inference:latest
https://github.com/huggingface/text-generation-inference/pkgs/container/text-generation-inference
bridge
true
https://github.com/nomic-ai/gpt4all/issues
https://www.nomic.ai/gpt4all
An all-in-one LLM server and chat UI
False
AI: Productivity: Tools: Other: Status:Stable
https://raw.githubusercontent.com/nwithan8/unraid_templates/master/images/gpt4all-icon.png
https://raw.githubusercontent.com/nwithan8/unraid_templates/main/templates/gpt4all.xml
https://github.com/nwithan8
Requires an NVIDIA GPU.
[br]
In **Post Arguments**, replace `$MODEL` with the model you want to host and `$NUM_SHARD` with the number of shards you want to use (recommended: 1).
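[br]
For example, a minimal **Post Arguments** value for a single-shard deployment (the model ID below is only an illustration; substitute any model supported by text-generation-inference):

```
--model-id HuggingFaceH4/zephyr-7b-beta --num-shard 1
```

Once the container is running, the server answers on the mapped port (8080 in this template) over text-generation-inference's HTTP API.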
### 2024-07-10
Initial release
--gpus all --shm-size 1g
--model-id $MODEL --num-shard $NUM_SHARD
8080
/mnt/user/appdata/gpt4all/data