---
name: cpln
description: Use when deploying and managing containerized workloads across multiple clouds (AWS, GCP, Azure, private), configuring multi-cloud infrastructure, managing secrets and access control, setting up identities for credential-free cloud access, automating deployments with GitOps, or connecting AI tools to Control Plane via the MCP Server.
license: Apache-2.0
compatibility: Requires cpln CLI (npm @controlplane/cli, Homebrew, or binary) or MCP Server (https://mcp.cpln.io/mcp). Works with any CI/CD platform, Terraform, Pulumi, and Kubernetes.
allowed-tools: cpln
metadata:
  mintlify-proj: controlplanecorporation
author: controlplane
version: '3.0'
---

# Control Plane Skill

## What is Control Plane

Control Plane is a hybrid platform for deploying and managing containerized workloads across AWS, GCP, Azure, and private clouds from a unified interface. It abstracts cloud provider differences behind a consistent API, CLI (`cpln`), Console UI, Terraform provider, Pulumi provider, Kubernetes Operator, and MCP Server. Certified PCI DSS Level 1, SOC 2 Type II, and HIPAA-eligible.
**Key entry points:**

- CLI: `cpln` — install via npm `@controlplane/cli`, Homebrew (`brew tap controlplane-com/cpln && brew install cpln`), or binary
- Console: https://console.cpln.io
- API: https://api.cpln.io
- MCP Server: `https://mcp.cpln.io/mcp` (80+ tools for AI agents)
- Docs: https://docs.controlplane.com (page index for AI agents: https://docs.controlplane.com/llms.txt)
- Full CLI conventions and hallucination traps: https://docs.controlplane.com/cli-conventions.md

## When to use this skill

Deploying workloads across multi-cloud GVCs, configuring infrastructure (locations, networking, firewall, domains), managing secrets + identity + policy access chains, automating with `cpln apply` / CI/CD, debugging via `cpln logs` / `exec` / `connect` / `port-forward`, building and pushing images, migrating from Kubernetes / Docker Compose / Helm, working with mk8s / BYOK / the Kubernetes Operator, or connecting AI tools via the MCP Server.

## Resource model

```
Org (Organization) — top-level isolation boundary, globally unique name
├── Principals: Users, Groups, Service Accounts          (org-scoped)
├── Governance: Policies, Quotas, Audit Contexts         (org-scoped)
├── Infrastructure: Cloud Accounts, Agents, Locations,   (org-scoped)
│                   IP Sets, mk8s clusters
├── Assets: Secrets (12 types), Images, Domains          (org-scoped)
└── GVC (Global Virtual Cloud) — deployment environment
    ├── Workloads (1+ containers, four types)            (GVC-scoped)
    ├── Identities (cloud access, secrets, networks)     (GVC-scoped)
    └── Volume Sets (persistent storage)                 (GVC-scoped)
```

**Scoping rules:**

- **Org-scoped**: Secrets, Domains, Cloud Accounts, Agents, Policies, Images, Groups, Service Accounts, IP Sets, mk8s
- **GVC-scoped**: Workloads, Identities, Volume Sets
- A workload can reference secrets from its parent org but only volume sets and identities from its own GVC
- Domains are org-scoped but associated with exactly one GVC at a time
- Pull secrets are configured at the **GVC level**, not per workload — only Docker, ECR, and GCP secret types work as pull secrets
- One identity per workload, but an identity can be shared across multiple workloads within the same GVC. Identities cannot be shared across GVCs — recreate the identity with the same spec in each GVC that needs it.

## Platform capabilities

| Capability | What it is | When to use |
| --- | --- | --- |
| **Workloads** | Deploy containers as serverless, standard, cron, or stateful | Primary deployment unit — most users start here |
| **Template Catalog** | 30+ production-ready templates (Postgres, Redis, Kafka, MongoDB, etc.) | Need a database, queue, or common service — install instead of building from scratch |
| **Managed Kubernetes (mk8s)** | Provision Kubernetes clusters across AWS, GCP, Azure, Hetzner, and more | Need a full Kubernetes cluster (teams deploy INTO mk8s clusters) |
| **CPLN Platform (BYOK)** | Register existing Kubernetes clusters as Control Plane locations | Already have Kubernetes — want Control Plane workload management on top |
| **Kubernetes Operator** | Manage Control Plane resources as Kubernetes CRDs (ArgoCD/GitOps) | Want Kubernetes-native GitOps for Control Plane infrastructure |
| **Agents** | Secure tunnels to private networks (VPCs, on-prem, data centers) | Workloads need to reach private TCP endpoints behind firewalls |
| **External Logging** | Ship logs to S3, CloudWatch, Coralogix, Datadog, Logz.io, Stackdriver | Compliance, long-term retention, or external log analysis |
| **Domains** | Custom domain routing with auto-TLS, geo-routing, path-based routing | Expose workloads on your own domain with CNAME or NS delegation |
| **MCP Server** | 80+ tools for AI agents to manage infrastructure programmatically | AI-assisted infrastructure management |

## Guardrails — read these first

Eight rules that prevent the production failures real users have hit. Skipping them costs data loss, cross-tenant changes, silent runtime failures, or burned token budgets.

### 1. Org / profile / GVC confirmation before mutating

Before any state-mutating `cpln` command (`create`, `delete`, `update`, `apply`, `patch`, `edit`, `add-binding`, `remove-binding`, `add-key`, `force-redeployment`, `clone`, `image build --push`, secret `create-*` variants, `add-location`, `remove-location`), the target **org**, **profile**, and (where applicable) **GVC** must be unambiguously established. If any is missing, **stop and ask. Never silently fall back to the active CLI profile.**

Context is established only when: the user named it in the current request, named it earlier this conversation, called MCP `set_context` this session, or gave an explicit "use my default profile" instruction. Otherwise ask:

> Before I run this, I want to confirm the target. Your active profile appears to be `<profile>` (org: `<org>`, GVC: `<gvc>`). Should I use that, or a different org / profile / GVC?

For **read-only** commands (`get`, `query`, `audit`, `logs`, `permissions`, `access-report`, `eventlog`), defaulting is acceptable — but **announce the target first**: *"Using profile `<profile>` → org `<org>`, GVC `<gvc>`…"*

### 2. Secret access — 3 mandatory steps

A workload CANNOT access a secret without ALL three:

1. **Identity** created and assigned: `cpln workload update WL --gvc GVC --set spec.identityLink=//identity/ID`
2. **Policy** granting the identity `reveal`: `cpln policy create --name P --target-kind secret --resource SECRET` then `cpln policy add-binding P --permission reveal --identity //gvc/GVC/identity/ID`
3. **Reference** in env vars or volumes: `cpln://secret/NAME.payload` (opaque), `cpln://secret/NAME.KEY` (dictionary), `.username`/`.password` (userpass), `.cert`/`.key` (tls), etc.

Missing any one = silent runtime failure. The #1 support issue.

### 3. Image references

- **NEVER** prefix external images with `docker.io/`. Use `nginx:latest`, not `docker.io/library/nginx:latest`.
- **Own org's registry** in workload specs: `//image/NAME:TAG`. The hostname `ORG.registry.cpln.io` is only for `docker login`/`push`, never in workload specs.
- **Cross-org pull**: `ORG.registry.cpln.io/NAME:TAG`.
- Images must be `linux/amd64`. `cpln image build --push` defaults to this. Wrong platform = `exec format error` at runtime.
- The workload spec `port` must match the port the container actually listens on, or health checks fail.

### 4. Firewall defaults — everything is denied

- **Internal** (workload-to-workload): `inboundAllowType: none` — all blocked. Set to `same-gvc`, `same-org`, or `workload-list`.
- **External inbound**: disabled. Add CIDRs (`0.0.0.0/0` for all) or use `--public` on `cpln workload create`.
- **External outbound**: disabled. Add CIDRs or hostnames. Hostname egress is restricted to ports 80, 443, and 445; CIDR rules take precedence over hostname rules; blocked rules take precedence over allowed.

A workload without firewall config cannot reach its database, be reached by users, or talk to peers. Always configure explicitly.

### 5. Workload type constraints (immutable after creation)

| Feature | serverless | standard | stateful | cron |
| --- | --- | --- | --- | --- |
| Scale to zero | `rps` or `concurrency` | KEDA only | KEDA only | No |
| Ports | Exactly 1 container × 1 port (required) | 0 or more | 0 or more | Must NOT expose any |
| Capacity AI | Yes (default) | Yes (default) | **Always disabled** | N/A |
| Persistent volumes | No | No | Yes (volume sets) | No |
| `replicaDirect` LB | No | No | **Only this type** | No |
| `spec.job` | Forbidden | Forbidden | Forbidden | Required |
| Multi-metric autoscaling | No | Yes (cpu/memory/rps) | Yes (cpu/memory/rps) | N/A |
| `maxConcurrency` | Used | Ignored | Ignored | N/A |
| `timeoutSeconds` max | 600 | 3600 | 3600 | N/A |
| Max containers per workload | 8 | 8 | 8 | 8 |

- **Workload type is immutable.** Changing type requires delete + recreate. Capture state first: `cpln workload get NAME --gvc GVC -o yaml-slim > NAME.bak.yaml`.
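Guardrails 2–4 compose into a single multi-document manifest for `cpln apply`. The sketch below is illustrative only — the names (`api`, `db-creds`, `api-identity`, `my-gvc`) are hypothetical, and field names should be checked against the workload reference before use:

```yaml
# Hypothetical sketch, not a verified spec — confirm field names against
# /reference/workload/general before applying.
kind: workload
name: api
spec:
  type: serverless                        # immutable after creation (guardrail 5)
  identityLink: //identity/api-identity   # step 1: identity created and assigned
  containers:
    - name: main
      image: //image/api:1.0.0            # own-org registry form — no hostname (guardrail 3)
      port: 8080                          # serverless: exactly one port, matching the listener
      env:
        - name: DB_PASSWORD
          value: cpln://secret/db-creds.password   # step 3: userpass secret reference
  firewallConfig:                         # guardrail 4: defaults deny everything
    internal:
      inboundAllowType: same-gvc
    external:
      inboundAllowCIDR:
        - 0.0.0.0/0
---
kind: policy
name: api-reads-db-creds                  # step 2: grant the identity `reveal`
targetKind: secret
targetLinks:
  - //secret/db-creds
bindings:
  - permissions:
      - reveal
    principalLinks:
      - //gvc/my-gvc/identity/api-identity
```

Applying both documents together keeps the identity → policy → reference chain in one reviewable file, so a missing step fails at review time instead of as a silent runtime failure.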
- **Capacity AI** is incompatible with: stateful workloads, CPU autoscaling, multi-metric autoscaling, GPUs.
- **Cron** deploys to ALL GVC locations with no overrides. `spec.job` with `schedule` is required; probes, autoscaling, `timeoutSeconds`, `capacityAI`, and `debug` are ignored.
- **Workload name** max 49 chars; cannot end with `-headless`.
- For container name reservations, probe XOR rules, GPU constraints, and full validation, fetch [/reference/workload/general](https://docs.controlplane.com/reference/workload/general) when authoring manifests.

### 6. Destructive operations — confirm with blast radius

Before any destructive operation, present a structured summary AND wait for explicit confirmation — **even when permissions auto-approve.** Permission mode is tool-prompt UX; this is conversation-level safety, independent of it.

- **Always destructive**: any `cpln delete`, `gvc delete-all-workloads`, `volumeset shrink`, `volumeset snapshot delete`, `volumeset volume delete`.
- **Service-disrupting**: `policy remove-binding` (breaks runtime access), `serviceaccount remove-key` (breaks CI/CD), `group remove-member` (locks users out), `gvc remove-location` (forces redeployment).
- **Implicit destructive (immutability traps)**: org delete is impossible; workload type/name are immutable (rename via `cpln workload clone OLD --name NEW --gvc GVC`); volume set `fileSystemType` and `performanceClass` are immutable; `cpln apply` of a renamed resource creates a new one (the old must be deleted with `cpln delete --file ...`).

When delete + recreate is the only path: (1) capture state with `-o yaml-slim`, (2) reuse the same name to preserve the URL, internal DNS, domain routes, policy targetLinks, and identity bindings, (3) confirm — the user authorized the *goal*, not the *technique*.
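The delete + recreate path above hinges on reusing the name. A hedged sketch of the re-applied manifest (names hypothetical, fields trimmed the way a `-o yaml-slim` capture would be):

```yaml
# Hypothetical yaml-slim capture, re-applied after `cpln delete` — same name on purpose.
kind: workload
name: api                # unchanged: preserves URL, internal DNS, domain routes,
                         # policy targetLinks, and identity bindings
spec:
  type: standard         # the one immutable field that forced delete + recreate
  containers:
    - name: main
      image: //image/api:1.0.0
      port: 8080
```

Because `name` is unchanged, every resource that references the workload by link keeps resolving after the recreate; only the live replicas are disrupted.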
**Required confirmation shape:**

> I need to run a destructive operation:
>
> - **Action**: `<exact command>`
> - **Affected**: `<resource>` in `<org>` / `<gvc>`
> - **Blast radius**: `<what else breaks>`
> - **Reversibility**: `<recoverable or not>`
> - **Mitigation**: `<state captured to NAME.bak.yaml; will reuse same name; etc.>`
>
> Confirm to proceed.

Anything but an unambiguous yes means stop. Bundle multi-destructive tasks into one upfront ask; don't bundle destructive with non-destructive to slip it through.

### 7. Constraint conflicts — surface, don't silently default

When a compatibility constraint blocks the user's intent (concurrency autoscaling on stateful, scale-to-zero on cron, snapshots on `shared` volume sets, Capacity AI with the CPU metric), surface it and present alternatives. **Never silently downgrade to `disabled` / `none` / `1 replica` / `manual`** — those often ship an under-provisioning bug or a SPOF.

**Required shape:**

> I hit a constraint configuring `<resource>`:
>
> - **You asked for**: `<requested config>`
> - **Constraint**: `<why it's blocked>`
> - **Alternatives**:
>   - **`<option A>`** — `<tradeoff>`
>   - **`<option B>`** — `<...>`
> - **My recommendation**: `
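For the first conflict named above — concurrency autoscaling on a stateful workload — the table in guardrail 5 permits cpu/memory/rps metrics instead. A hedged sketch of the alternative (field names assumed, values hypothetical):

```yaml
# Hypothetical alternative: stateful can't use `concurrency`, so scale on cpu.
kind: workload
name: db
spec:
  type: stateful
  defaultOptions:
    capacityAI: false        # always disabled on stateful (guardrail 5)
    autoscaling:
      metric: cpu            # permitted on stateful: cpu / memory / rps
      target: 70
      minScale: 2            # don't silently downgrade to 1 replica — that's a SPOF
      maxScale: 5
```

Presenting a manifest like this alongside the constraint gives the user a concrete alternative to approve rather than a silent downgrade.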