Kokoro FastAPI - CPU
ghcr.io/remsky/kokoro-fastapi-cpu:latest
ghcr.io/remsky/kokoro-fastapi-cpu
latest
Latest stable release
bridge
http://[IP]:[PORT:8880]/
false
https://github.com/remsky/Kokoro-FastAPI/issues
https://github.com/remsky/Kokoro-FastAPI
Dockerized FastAPI wrapper for the Kokoro-82M text-to-speech model, with CPU ONNX and NVIDIA GPU PyTorch support, audio handling, and auto-stitching.
[br]
This version is meant to run on CPUs. Running it on a CPU is not recommended unless yours is very powerful, as inference will be slow. If you have a GPU, please use the GPU version of this container instead.
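Once the container is running, it can be queried like any OpenAI-compatible speech API. A minimal sketch, assuming the container is reachable at localhost on the default port 8880 and that the voice name `af_bella` is available (both are assumptions; adjust to your host, mapped port, and installed voices):

```shell
# Request synthesized speech from the container's OpenAI-compatible endpoint
# and save the result to a local audio file.
curl -X POST http://localhost:8880/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{"model": "kokoro", "input": "Hello from Kokoro!", "voice": "af_bella"}' \
  --output speech.mp3
```

Because this build uses the CPU ONNX backend, expect noticeably longer generation times for the same request than on the GPU image.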
false
AI: Productivity: Tools: Other: Status:Stable
ai docker fastapi kokoro text-to-speech tts speech synthesis voice generation onnx cpu gpu nvidia
https://raw.githubusercontent.com/nwithan8/unraid_templates/master/images/kokoro-fastapi-icon.png
https://raw.githubusercontent.com/remsky/Kokoro-FastAPI/master/githubbanner.png
https://raw.githubusercontent.com/nwithan8/unraid_templates/main/templates/kokoro_fastapi_cpu.xml
https://raw.githubusercontent.com/remsky/Kokoro-FastAPI/master/assets/docs-screenshot.png
https://raw.githubusercontent.com/remsky/Kokoro-FastAPI/master/assets/webui-screenshot.png
https://github.com/nwithan8
### 2025-11-07
Update environment variables
### 2025-04-25
Initial release
8880
/mnt/user/appdata/kokoro-fastapi/data
/app:/app/api
8
4
parallel
all
true
kNextPowerOfTwo
DEBUG