# =============================================================================
# OpenGame — example environment configuration
# =============================================================================
#
# Copy this file to `.env` (or export the variables in your shell / CI / Docker)
# and fill in your own keys. ALL keys below belong to YOU — OpenGame does not
# ship with any default credentials and will not call any third-party API
# unless you provide a key.
#
# OpenGame talks to four kinds of generative API, each configured INDEPENDENTLY
# so you can mix providers (e.g. DashScope for images, Doubao for video,
# OpenAI for reasoning).
#
# Resolution order, highest priority first:
#   1. The OPENGAME_* env vars below.
#   2. ~/.qwen/settings.json -> openGame.providers.<role>.{provider,apiKey,baseUrl,model}
#   3. Legacy upstream env vars (DASHSCOPE_API_KEY, IMAGE_MODEL_API_KEY, ...)
#      — kept for backward compatibility but undocumented.
#
# See docs/users/configuration/api-keys.md for the full reference.
# =============================================================================

# -----------------------------------------------------------------------------
# 1. Main agent LLM (qwen-code's existing OpenAI-compatible flow)
# -----------------------------------------------------------------------------
# This is the model that drives the agent loop itself. It already exists in
# upstream qwen-code and is not part of the OpenGame provider system; you
# can also configure it interactively with the `/auth` command.

# OPENAI_API_KEY=sk-...
# OPENAI_BASE_URL=https://api.openai.com/v1
# OPENAI_MODEL=gpt-4o

# -----------------------------------------------------------------------------
# 2.
#    Reasoning model (used by `classify-game-type` and `generate-gdd`,
#    and by the audio tool when synthesizing ABC notation)
# -----------------------------------------------------------------------------
# Provider: one of `tongyi` | `doubao` | `openai-compat`

# OPENGAME_REASONING_PROVIDER=openai-compat
# OPENGAME_REASONING_API_KEY=sk-...
# OPENGAME_REASONING_BASE_URL=https://api.openai.com/v1
# OPENGAME_REASONING_MODEL=gpt-4o-mini

# -----------------------------------------------------------------------------
# 3. Image generation (sprites, backgrounds, tilesets, animation base frames)
# -----------------------------------------------------------------------------
# Provider: one of `tongyi` | `doubao` | `openai-compat`
#   - `tongyi`        -> Aliyun DashScope (Wan family). baseUrl/model optional.
#   - `doubao`        -> Volcengine ARK (Seedream family). baseUrl/model optional.
#   - `openai-compat` -> any endpoint that implements POST /images/generations
#                        in the OpenAI shape (OpenAI, fal.ai shim, OpenRouter,
#                        Together image routes, Stability proxies, ...).
#                        baseUrl AND model are REQUIRED.

# OPENGAME_IMAGE_PROVIDER=tongyi
# OPENGAME_IMAGE_API_KEY=sk-...
# OPENGAME_IMAGE_BASE_URL=https://dashscope.aliyuncs.com
# OPENGAME_IMAGE_MODEL=wan2.5-t2i-preview

# -----------------------------------------------------------------------------
# 4. Video generation (image-to-video for animation frames; text-to-video as
#    a source for audio extraction)
# -----------------------------------------------------------------------------
# Provider: one of `tongyi` | `doubao`
# `openai-compat` is NOT yet supported for video — Sora / Veo are not part of
# the public OpenAI shape.
# Leave UNSET to disable video generation (animations will fall back to I2I,
# audio to ABC + symusic / procedural).

# OPENGAME_VIDEO_PROVIDER=doubao
# OPENGAME_VIDEO_API_KEY=...
# OPENGAME_VIDEO_BASE_URL=https://ark.cn-beijing.volces.com/api/v3
# OPENGAME_VIDEO_MODEL=doubao-seedance-1-0-pro-250528

# -----------------------------------------------------------------------------
# 5. Audio generation (LLM that writes ABC music notation; the WAV itself is
#    rendered locally via Python `symusic` or a procedural fallback)
# -----------------------------------------------------------------------------
# Provider: one of `tongyi` | `doubao` | `openai-compat`
# Often you can reuse your reasoning provider here.
# Leave UNSET to disable LLM-driven audio (the procedural fallback still works).

# OPENGAME_AUDIO_PROVIDER=tongyi
# OPENGAME_AUDIO_API_KEY=sk-...
# OPENGAME_AUDIO_BASE_URL=https://dashscope.aliyuncs.com
# OPENGAME_AUDIO_MODEL=qwen-plus

# -----------------------------------------------------------------------------
# 6. Local helpers
# -----------------------------------------------------------------------------
# Optional: path to a Python interpreter that has `symusic` installed
# (used by the audio tool to render ABC notation -> WAV).

# PYTHON_PATH=/usr/local/bin/python3

# Optional: background removal backend for sprite generation.
# `imgly` (default, pure JS) | `rembg` (Python)

# BACKGROUND_REMOVAL_BACKEND=imgly
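A `.env` file in this shape is plain shell assignments, so one way to load it into your current shell is `set -a` plus the `.` (dot) utility. The sketch below demonstrates the mechanism on a throwaway file with dummy values; it is not part of OpenGame itself, and the variable names simply mirror the ones documented above.

```shell
# Minimal sketch: source a dotenv-style file so every assignment is exported.
tmpenv=$(mktemp)
cat > "$tmpenv" <<'EOF'
OPENGAME_IMAGE_PROVIDER=tongyi
OPENGAME_IMAGE_API_KEY=sk-dummy
EOF

set -a            # auto-export every variable assigned while sourcing
. "$tmpenv"       # POSIX dot utility; `source` also works in bash/zsh
set +a
rm -f "$tmpenv"

echo "image provider: $OPENGAME_IMAGE_PROVIDER"
```

Running `set -a; . ./.env; set +a` against your real file works the same way, and is handy before launching OpenGame from a shell that does not load `.env` automatically.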
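Since the audio tool only needs `symusic` when rendering ABC notation to WAV, a quick local check of section 6's `PYTHON_PATH` can save a surprise at generation time. This is a hedged sketch: it assumes a `python3` on PATH as the fallback interpreter and only reports which audio path you will get.

```shell
# Check whether the configured Python can import `symusic`; if not, the audio
# tool falls back to the procedural renderer (as documented above).
py="${PYTHON_PATH:-python3}"
if "$py" -c 'import symusic' >/dev/null 2>&1; then
  symusic_status="available"
else
  symusic_status="missing"
fi
echo "symusic: $symusic_status"
```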