---
name: sf-datacloud-connect
description: >
  Salesforce Data Cloud Connect phase. TRIGGER when: user manages Data Cloud
  connections, connectors, connector metadata, tests a connection, browses
  source objects or databases, or sets up a new source system. DO NOT TRIGGER
  when: the task is about data streams or DLOs (use sf-datacloud-prepare),
  DMOs or identity resolution (use sf-datacloud-harmonize), retrieval/search
  (use sf-datacloud-retrieve), or STDM telemetry (use
  sf-ai-agentforce-observability).
license: MIT
compatibility: "Requires an external community sf data360 CLI plugin and a Data Cloud-enabled org"
metadata:
  version: "1.0.0"
  author: "Gnanasekaran Thoppae"
  phase: "Connect"
---

# sf-datacloud-connect: Data Cloud Connect Phase

Use this skill when the user needs **source connection work**: connector discovery, connection metadata, connection testing, source-object browsing, connector schema inspection, or connector-specific setup payloads for external sources.

## When This Skill Owns the Task

Use `sf-datacloud-connect` when the work involves:

- `sf data360 connection *`
- connector catalog inspection
- connection creation, update, test, or delete
- browsing source objects, fields, databases, or schemas
- identifying connector types already in use
- preparing connector definitions for Snowflake, SharePoint Unstructured, or Ingestion API sources

Delegate elsewhere when the user is:

- creating data streams or DLOs → [sf-datacloud-prepare](../sf-datacloud-prepare/SKILL.md)
- creating DMOs, mappings, IR rulesets, or data graphs → [sf-datacloud-harmonize](../sf-datacloud-harmonize/SKILL.md)
- writing Data Cloud SQL or search-index workflows → [sf-datacloud-retrieve](../sf-datacloud-retrieve/SKILL.md)

---

## Required Context to Gather First

Ask for or infer:

- target org alias
- connector type or source system
- whether the user wants inspection only or live mutation
- connection name or ID if one already exists
- whether credentials are already configured outside the CLI
- whether the user also expects stream creation right after connection setup
- whether the source is a database, an unstructured document source, or an Ingestion API feed

---

## Core Operating Rules

- Verify the plugin runtime first; see [../sf-datacloud/references/plugin-setup.md](../sf-datacloud/references/plugin-setup.md).
- Run the shared readiness classifier before mutating connections: `node ~/.claude/skills/sf-datacloud/scripts/diagnose-org.mjs -o <org> --phase connect --json`.
- Prefer read-only discovery before connection creation.
- Suppress linked-plugin warning noise with `2>/dev/null` for standard usage.
- Remember that `connection list` requires `--connector-type`.
- For `connection test`, pass `--connector-type` when resolving a non-Salesforce connection by name.
- Discover existing connector types from streams first when the org is unfamiliar.
- Use curated example payloads before inventing connector-specific credentials or parameters.
- For connector types outside the curated examples, inspect a known-good UI-created connection via REST before building JSON.
- Do not promise API-based stream creation for every connector type just because connection creation succeeds.

---

## Recommended Workflow

### 1. Classify readiness for connect work

```bash
node ~/.claude/skills/sf-datacloud/scripts/diagnose-org.mjs -o <org> --phase connect --json
```

### 2. Discover connector types

```bash
sf data360 connection connector-list -o <org> 2>/dev/null
sf data360 data-stream list -o <org> 2>/dev/null
```
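When the org is unfamiliar, the stream list is the fastest inventory of connector types already in use. A minimal sketch of extracting them, assuming the plugin honors the standard sf CLI `--json` envelope; the `connectorType` key is an assumption, so inspect the raw output first:

```bash
# Hedged sketch: assumes the plugin supports the standard sf CLI --json
# envelope and that each stream record carries a connector-type field.
# "connectorType" is an assumed key name -- check the raw JSON before
# relying on it.
sf data360 data-stream list -o <org> --json 2>/dev/null \
  | jq -r '.result[].connectorType // empty' \
  | sort -u
```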
### 3. Inspect connections by type

```bash
sf data360 connection list -o <org> --connector-type SalesforceDotCom 2>/dev/null
sf data360 connection list -o <org> --connector-type REDSHIFT 2>/dev/null
sf data360 connection list -o <org> --connector-type SNOWFLAKE 2>/dev/null
```

### 4. Inspect a specific connection or uploaded schema

```bash
sf data360 connection get -o <org> --name <connection-name> 2>/dev/null
sf data360 connection objects -o <org> --name <connection-name> 2>/dev/null
sf data360 connection fields -o <org> --name <connection-name> 2>/dev/null
sf data360 connection schema-get -o <org> --name <connection-name> 2>/dev/null
```

### 5. Test or create only after discovery

```bash
sf data360 connection test -o <org> --name <connection-name> --connector-type <connector-type> 2>/dev/null
sf data360 connection create -o <org> -f connection.json 2>/dev/null
```

### 6. Start from curated example payloads for external connectors

Use the phase-owned examples before inventing a payload from scratch:

- `examples/connections/heroku-postgres.json`
- `examples/connections/redshift.json`
- `examples/connections/sharepoint-unstructured.json`
- `examples/connections/snowflake-connection.json`
- `examples/connections/ingest-api-connection.json`
- `examples/connections/ingest-api-schema.json`

Typical Ingestion API setup flow (see the schema-shape sketch after this block):

```bash
sf data360 connection create -o <org> -f examples/connections/ingest-api-connection.json 2>/dev/null
sf data360 connection schema-upsert -o <org> --name <connection-name> -f examples/connections/ingest-api-schema.json 2>/dev/null
sf data360 connection schema-get -o <org> --name <connection-name> 2>/dev/null
```
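The schema file uploaded by `schema-upsert` describes each source object the Ingestion API connection will accept. Data Cloud Ingestion API schemas follow the OpenAPI 3 `components.schemas` shape; the sketch below illustrates that shape only, with invented object and field names, and treats `examples/connections/ingest-api-schema.json` as the ground truth for what this plugin expects:

```bash
# Hedged sketch only: "web_events" and its fields are invented names used to
# illustrate the OpenAPI 3 components.schemas shape that Data Cloud Ingestion
# API schemas follow. Prefer examples/connections/ingest-api-schema.json as
# the authoritative payload for this plugin.
cat > /tmp/ingest-api-schema.json <<'EOF'
{
  "openapi": "3.0.3",
  "components": {
    "schemas": {
      "web_events": {
        "type": "object",
        "properties": {
          "event_id":   { "type": "string" },
          "event_time": { "type": "string", "format": "date-time" },
          "revenue":    { "type": "number" }
        }
      }
    }
  }
}
EOF
sf data360 connection schema-upsert -o <org> --name <connection-name> -f /tmp/ingest-api-schema.json 2>/dev/null
```

Each entry under `components.schemas` should become a selectable source object once the upsert succeeds; verify with `connection schema-get` as in the flow above.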
### 7. Discover payload fields for unknown connector types

Create one in the UI, then inspect it directly:

```bash
sf api request rest "/services/data/v66.0/ssot/connections/<connection-name>" -o <org>
```

---

## High-Signal Gotchas

- `connection list` has no true global "list all" mode; query by connector type.
- The connector catalog name and a connection's connector type are not always the same label.
- `connection test` may need `--connector-type` for name resolution when the source is not a default Salesforce connector.
- An empty connection list usually means "enabled but not configured yet", not "feature disabled".
- Heroku Postgres, Redshift, Snowflake, SharePoint Unstructured, and Ingestion API all use different credential and parameter shapes; reuse the curated examples instead of guessing.
- SharePoint Unstructured uses `clientId`, `clientSecret`, and `tokenEndpoint` in the `credentials` array and does not require a `parameters` array.
- Snowflake uses key-pair auth and can often be created through the API, but downstream stream creation can still remain UI-only.
- Ingestion API connector setup is incomplete until `connection schema-upsert` has uploaded the object schema.
- Some external connector credential setup still depends on UI-side configuration or external-system permissions.

---

## Output Format

```text
Connect task:
Connector type:
Target org:
Commands:
Verification:
Next step:
```

---

## References

- [README.md](README.md)
- [examples/connections/heroku-postgres.json](examples/connections/heroku-postgres.json)
- [examples/connections/redshift.json](examples/connections/redshift.json)
- [examples/connections/sharepoint-unstructured.json](examples/connections/sharepoint-unstructured.json)
- [examples/connections/snowflake-connection.json](examples/connections/snowflake-connection.json)
- [examples/connections/ingest-api-connection.json](examples/connections/ingest-api-connection.json)
- [examples/connections/ingest-api-schema.json](examples/connections/ingest-api-schema.json)
- [../sf-datacloud/references/plugin-setup.md](../sf-datacloud/references/plugin-setup.md)
- [../sf-datacloud/references/feature-readiness.md](../sf-datacloud/references/feature-readiness.md)
- [../sf-datacloud/UPSTREAM.md](../sf-datacloud/UPSTREAM.md)