# Community Projects

The following projects are built and maintained by the community. We appreciate all contributions! Note that these projects are not officially supported by the OmniVoice team. If you have a project you'd like to add, please open a PR.

---

- **[ComfyUI-OmniVoice-TTS](https://github.com/Saganaki22/ComfyUI-OmniVoice-TTS)** — ComfyUI custom node for OmniVoice text-to-speech generation.
- **[vLLM-Omni](https://github.com/vllm-project/vllm-omni)** — A framework for efficient inference with omni-modality models. Supports OmniVoice serving.
- **[pyVideoTrans](https://github.com/jianchang512/pyvideotrans)** — Video translation tool with dubbing & subtitles. Supports OmniVoice as a TTS engine.
- **[MLX-Audio](https://github.com/Blaizzy/mlx-audio)** — TTS, STT, and STS library built on Apple's MLX framework. Supports OmniVoice among other models for efficient speech processing on Apple Silicon.
- **[RealtimeTTS](https://github.com/KoljaB/RealtimeTTS)** — Converts text to speech in real time. Supports OmniVoice as a TTS engine.
- **[TTS-WebUI](https://github.com/rsxdalv/TTS-WebUI)** — Gradio web UI for multiple TTS models. Supports OmniVoice as one of its backends.
- **[OmniVoice-Studio](https://github.com/debpalash/OmniVoice-Studio)** — Desktop application for OmniVoice voice generation.
- **[omnivoice-server](https://github.com/maemreyo/omnivoice-server)** — OpenAI-compatible HTTP server for serving OmniVoice via `/v1/audio/speech`. Supports voice profiles for persistent cloning, sentence-level streaming, and optional Bearer auth.
- **[omnivoice-rs](https://github.com/FerrisMind/omnivoice-rs)** — GPU-first Rust workspace for OmniVoice inference, parity validation, CLI execution, and an OpenAI-compatible HTTP server built with Candle.
- **[omnivoice-trtllm](https://github.com/tlitech/omnivoice-trtllm)** — Deploy the OmniVoice TTS model using TensorRT-LLM and Triton Inference Server on Modal, with faster inference than PyTorch.
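Since several of the servers above (omnivoice-server, omnivoice-rs) expose an OpenAI-compatible `/v1/audio/speech` route, they can typically be called with a plain HTTP POST. A minimal sketch of building such a request, assuming the server listens on `localhost:8000` and accepts the standard OpenAI speech fields (`model`, `input`, `voice`) — the port, model name, and field set are assumptions, so check each project's README:

```python
import json
import urllib.request
from typing import Optional

def build_speech_request(text: str, voice: str = "default",
                         base_url: str = "http://localhost:8000",
                         api_key: Optional[str] = None) -> urllib.request.Request:
    """Build an OpenAI-style /v1/audio/speech request (constructed, not sent)."""
    payload = {
        "model": "omnivoice",  # model identifier is an assumption
        "input": text,
        "voice": voice,
    }
    headers = {"Content-Type": "application/json"}
    if api_key:  # some servers optionally check a Bearer token
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{base_url}/v1/audio/speech",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )

req = build_speech_request("Hello from OmniVoice!", api_key="secret")
# Sending is left out so the sketch runs without a live server:
# with urllib.request.urlopen(req) as resp:
#     audio = resp.read()  # raw audio bytes
```

Because the request is only constructed here, the snippet runs without a live server; uncomment the `urlopen` lines once a backend is up.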