Intel-IPEX-LLM-Ollama
ghcr.io/justjoseorg/ollama-intel-gpu:main
bridge
bash
false
https://github.com/justjoseorg/ollama-intel-gpu/issues/new
IPEX-LLM is an LLM acceleration library for Intel GPUs (e.g., a local PC with an iGPU, or a discrete GPU such as Arc, Flex, or Max), NPUs, and CPUs.
This image is meant to be a one-stop shop for AI enthusiasts with an Intel iGPU or dGPU to run models using Ollama.
Make sure to have Resizable BAR (ReBAR) enabled in your BIOS.
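For use outside Unraid, the template values above can be approximated with a plain `docker run` invocation. This is a minimal sketch, assuming the container exposes Ollama on port 11434, reads models from `/root/.ollama`, and honors the `OLLAMA_HOST` and `ONEAPI_DEVICE_SELECTOR` environment variables; the host model path and device selector value are taken from the template fields, while the in-container mount point is an assumption.

```shell
# Sketch: run the Intel-GPU Ollama image with the values from this template.
# Assumptions: /root/.ollama is the model dir inside the container, and
# ONEAPI_DEVICE_SELECTOR is how the level_zero:0 value is passed through.
docker run -d \
  --name ollama-intel-gpu \
  --device /dev/dri \
  -v /mnt/user/Models/ollama:/root/.ollama \
  -e OLLAMA_HOST=0.0.0.0 \
  -e ONEAPI_DEVICE_SELECTOR=level_zero:0 \
  -p 11434:11434 \
  ghcr.io/justjoseorg/ollama-intel-gpu:main
```

Passing `/dev/dri` gives the container access to the Intel GPU via the host's render nodes; without it, inference falls back to CPU or fails to start.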
AI: Status:Beta
https://raw.githubusercontent.com/justjoseorg/ollama-intel-gpu/main/doc/LlamaIntel.png
1744477742
**Unraid 7+**
/mnt/user/Models/ollama
/dev/dri
0.0.0.0
level_zero:0
16384
11434
1
999
1
1