https://raw.githubusercontent.com/ajmaradiaga/feeds/main/scmt/topics/SAP-BTP-Kyma-runtime-blog-posts.xmlSAP Community - SAP BTP, Kyma runtime2026-04-21T23:00:23.056921+00:00python-feedgenSAP BTP, Kyma runtime blog posts in SAP Communityhttps://community.sap.com/t5/technology-blog-posts-by-sap/mastering-kyma-multi-tenancy-mapping-namespaces-to-different-btp/ba-p/14316995Mastering Kyma Multi-Tenancy: Mapping namespaces to different BTP Subaccounts2026-01-29T07:09:07.312000+01:00ChristianWeisshttps://community.sap.com/t5/user/viewprofilepage/user-id/136917<H2 id="toc-hId-1788724800"><SPAN>Introduction</SPAN></H2><P><SPAN>In the world of SAP BTP, the </SPAN><STRONG>Kyma runtime</STRONG><SPAN><STRONG> is often seen as an expensive resource</STRONG>, as by default, a Kyma cluster is tied to the subaccount where it was created. This means every BTP Service Instance you create in your Kyma Cluster using the pre-install <A href="https://github.com/SAP/sap-btp-service-operator" target="_self" rel="nofollow noopener noreferrer">BTP Service Operator</A> is provisioned in that single "home" subaccount.</SPAN></P><P><SPAN>However, to leverage Kyma with <STRONG>keeping costs low you need to share one cluster across multiple applications / provider</STRONG> <STRONG>subaccounts.</STRONG> To do this properly, you must be able to isolate entitlements and manage application-specific services in dedicated so-called provider subaccounts.</SPAN></P><P><SPAN>In this post, I will show you how to achieve a </SPAN><STRONG>1:1 mapping between a Kyma Namespace and a BTP Subaccount</STRONG><SPAN>.</SPAN></P><H2 id="toc-hId-1592211295"><STRONG>The Core Concept: Overriding the default BTP Service Operator configuration</STRONG></H2><P><SPAN>The magic happens via the </SPAN><STRONG>SAP BTP Service Operator</STRONG><SPAN>. By default, it uses a cluster-wide configuration. However, if you create a secret named </SPAN><SPAN>sap-btp-service-operator</SPAN><SPAN> with the label </SPAN><SPAN>services.cloud.sap.com/config: "true"</SPAN><SPAN> inside a specific namespace, the operator will prioritize those credentials for any resource created in that namespace.</SPAN></P><H2 id="toc-hId-1395697790"><STRONG>Option 1: The manual approach</STRONG></H2><P><SPAN>If you want to quickly test this for a single project, follow these steps:</SPAN></P><H3 id="toc-hId-1328267004"><STRONG>1. Prepare the Provider Subaccount</STRONG></H3><OL><LI><SPAN>Go to your </SPAN><STRONG>Provider BTP Subaccount</STRONG><SPAN>.</SPAN></LI><LI><SPAN>Create a Service Instance for </SPAN><STRONG>Service Manager</STRONG><SPAN> using the plan </SPAN><SPAN>service-operator-access</SPAN><SPAN>.</SPAN></LI><LI><SPAN>Create a </SPAN><STRONG>Service Binding</STRONG><SPAN> and note the following: </SPAN><SPAN>clientid</SPAN><SPAN>, </SPAN><SPAN>clientsecret</SPAN><SPAN>, </SPAN><SPAN>sm_url</SPAN><SPAN>, and the </SPAN><SPAN>url</SPAN><SPAN> (which we will use as </SPAN><SPAN>tokenurl</SPAN><SPAN>).</SPAN></LI></OL><H3 id="toc-hId-1131753499"><STRONG>2. Create the Secret in Kyma</STRONG></H3><P><SPAN>Apply the following YAML template adjusted to your configuration to your specific application namespace (e.g., </SPAN><SPAN>incident</SPAN><SPAN>). This tells Kyma to "look" at the provider subaccount instead of the default one which is configured under your kyma-system namespace. </SPAN></P><pre class="lia-code-sample language-yaml"><code>apiVersion: v1
kind: Secret
metadata:
  name: sap-btp-service-operator
  namespace: incident
  labels:
    services.cloud.sap.com/config: "true" # Mandatory label
type: Opaque
stringData:
  clientid: "<clientid>"
  clientsecret: "<clientsecret>"
  # sm_url and tokenurl come from the Service Manager binding (tokenurl is the binding's "url" value)
  sm_url: "https://service-manager.cfapps.us10.hana.ondemand.com"
  tokenurl: "https://your-subaccount.authentication.us10.hana.ondemand.com"
  tokenurlsuffix: "/oauth/token"</code></pre><H2 id="toc-hId-806157275"><STRONG>Option 2: The automated approach using Terraform</STRONG></H2><P><SPAN>Doing this manually is exhausting and error-prone. Using the </SPAN><STRONG>SAP BTP Terraform Provider</STRONG><SPAN>, you can automate the entire "handshake" between the subaccount and the Kyma cluster.</SPAN></P><P><SPAN>Create a folder where you place the following two Terraform files.</SPAN></P><H3 id="toc-hId-738726489"><SPAN>1. File provider.tf </SPAN></H3><pre class="lia-code-sample language-yaml"><code>terraform {
  required_providers {
    btp = {
      source  = "SAP/btp"
      version = "1.18.1" # Use the latest stable version
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
  }
}

# Configure the BTP Provider
provider "btp" {
  globalaccount = "xxxxtrial-ga" # "your-global-account-subdomain"
  # Credentials are best passed via Environment Variables
  # (BTP_USERNAME and BTP_PASSWORD)
}

# Configure the Kubernetes Provider
provider "kubernetes" {
  # This targets your current active Kyma context
  config_path = "~/.kube/config"
}</code></pre><H3 id="toc-hId-542212984"><SPAN>2. File main.tf</SPAN></H3><pre class="lia-code-sample language-yaml"><code>variable "provider_subaccount_id" {
  type        = string
  description = "The GUID of the target BTP Subaccount"
}

variable "target_kyma_namespace" {
  type        = string
  description = "The Namespace created and mapped to the BTP Provider Subaccount"
}

# 1. Ensure the subaccount is entitled first
resource "btp_subaccount_entitlement" "sm_entitlement" {
  subaccount_id = var.provider_subaccount_id
  service_name  = "service-manager"
  plan_name     = "service-operator-access"
}

# 2. Lookup the Service Plan ID after entitlement is created
data "btp_subaccount_service_plan" "sm_plan" {
  depends_on    = [btp_subaccount_entitlement.sm_entitlement]
  subaccount_id = var.provider_subaccount_id
  offering_name = "service-manager"
  name          = "service-operator-access"
}

# 3. Create the Service Manager instance using the plan ID
resource "btp_subaccount_service_instance" "sm_operator_access" {
  depends_on     = [data.btp_subaccount_service_plan.sm_plan]
  subaccount_id  = var.provider_subaccount_id
  serviceplan_id = data.btp_subaccount_service_plan.sm_plan.id
  name           = "sm-operator-for-kyma"
}

# 4. Create the Service Binding
resource "btp_subaccount_service_binding" "sm_binding" {
  subaccount_id       = var.provider_subaccount_id
  service_instance_id = btp_subaccount_service_instance.sm_operator_access.id
  name                = "kyma-operator-binding"
}

# 5. Create the Kubernetes Namespace with Istio enabled
resource "kubernetes_namespace" "app_namespace" {
  metadata {
    name = var.target_kyma_namespace
    labels = {
      "istio-injection" = "enabled"
    }
  }
}

# 6. Create the Secret inside that new Namespace
resource "kubernetes_secret" "sap_btp_service_operator" {
  metadata {
    name      = "sap-btp-service-operator"
    namespace = kubernetes_namespace.app_namespace.metadata[0].name
    labels = {
      "services.cloud.sap.com/config" = "true"
    }
  }
  # The binding exposes its credentials as a JSON string, so decode it to extract the individual fields
  data = {
    clientid       = jsondecode(btp_subaccount_service_binding.sm_binding.credentials).clientid
    clientsecret   = jsondecode(btp_subaccount_service_binding.sm_binding.credentials).clientsecret
    sm_url         = jsondecode(btp_subaccount_service_binding.sm_binding.credentials).sm_url
    tokenurl       = jsondecode(btp_subaccount_service_binding.sm_binding.credentials).url
    tokenurlsuffix = "/oauth/token"
  }
  type = "Opaque"
}</code></pre><P><SPAN>Set your BTP cli credentials as environment variables before running Terraform (prerequisite Terraform, btp cli and kubectl cli are installed on the maschine where you execute it). </SPAN></P><PRE><SPAN>For Bash:<BR /></SPAN><BR /><SPAN>export BTP_USERNAME="your-email@example.com" <BR /></SPAN><BR /><SPAN>export BTP_PASSWORD="your-password"</SPAN></PRE><PRE><SPAN>For Powershell:<BR /></SPAN><BR /><SPAN>$Env:BTP_USERNAME = "your-email@example.com"<BR /><BR />$Env:BTP_PASSWORD = "your-password"</SPAN></PRE><P><SPAN>Run the necessary commands like terraform init, plan, apply or destroy. You can pass the input variables as parameters. Example: </SPAN></P><PRE><SPAN>terraform apply -var="target_kyma_namespace=tf1" -var="provider_subaccount_id=553bcbbb-e809-450f-82db-60de3ef76b1f"</SPAN></PRE><H2 id="toc-hId-216616760"><STRONG>Summary</STRONG></H2><P><SPAN>By using this 1:1 mapping technique, you can:</SPAN></P><UL><LI><STRONG>Reduce Costs:</STRONG><SPAN> Share one Kyma cluster across many applications and business units.</SPAN></LI><LI><STRONG>Isolate Resources:</STRONG><SPAN> Ensures that </SPAN><SPAN>applications are running in a separate namespaces using its services and quotas from its provider subaccount.</SPAN></LI><LI><STRONG>Scale Securely:</STRONG><SPAN> Use Terraform to ensure that credentials are never handled manually by developers and integrate it into your workflows like GitHub Actions.</SPAN></LI></UL><P><SPAN>This makes Kyma not just a powerful developer tool, but as well a cost-effective enterprise platform.</SPAN></P>2026-01-29T07:09:07.312000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/runtime-threat-detection-for-sap-btp-kyma-with-azure-arc-microsoft-defender/ba-p/14319899Runtime Threat Detection for SAP BTP Kyma with Azure Arc + Microsoft Defender for Containers2026-02-02T15:52:32.766000+01:00haithamshahinhttps://community.sap.com/t5/user/viewprofilepage/user-id/2275053<H1 id="securing-an-external-kubernetes-cluster-with-microsoft-defender-for-containers-via-azure-arc-" id="toc-hId-1659730497">Securing an external Kubernetes cluster with Microsoft Defender for Containers (via Azure Arc)</H1><P>When I say "secure Kubernetes", I'm not just thinking about admission policies and CIS checklists. I'm thinking about what happens when <STRONG>something is already running</STRONG> and turns malicious — a web shell lands in a pod, a container starts burning CPU for crypto mining, or someone drops network scanning tools into an otherwise boring workload.</P><P>If you're running <STRONG>SAP BTP Kyma runtime</STRONG>, this matters. Kyma has strong <A href="https://help.sap.com/docs/btp/sap-business-technology-platform/kyma-security-concepts#kubernetes-control-plane" target="_blank" rel="noopener noreferrer">platform hardening</A> (Gardener-managed control plane, DISA STIG alignment), and API server audit logs exist — but those logs go to <A href="https://help.sap.com/docs/btp/sap-business-technology-platform/auditing-and-logging-information-in-kyma" target="_blank" rel="noopener noreferrer">SAP's Platform Logging Service</A>, not directly to you. 
That's fine for platform-level auditing, but it's not the same as <STRONG>seeing threats inside your workloads at runtime</STRONG>.</P><P>That's the gap I'm filling: <STRONG>runtime threat detection</STRONG> — the ability to detect and alert on malicious activity (crypto mining, web shells, credential theft) while workloads are running.</P><HR /><H2 id="real-world-threats" id="toc-hId-1592299711">Real-world threats</H2><P>These aren't hypotheticals — crypto mining and container compromise campaigns are actively targeting Kubernetes clusters:</P><P><STRONG>DERO Cryptojacking (2023–2024)</STRONG>: Attackers scanned for misconfigured Kubernetes API servers, then deployed DaemonSets named "proxy-api" to blend in with legitimate cluster components. The mining process itself was named "pause" — masquerading as the standard Kubernetes pause container. CrowdStrike found malicious images with over 10,000 pulls on Docker Hub. <STRONG>How runtime detection helps</STRONG>: Defender's eBPF monitoring catches unusual process spawning from "pause" containers and flags sustained high CPU from processes that shouldn't be compute-intensive. (Source: <A href="https://www.crowdstrike.com/en-us/blog/crowdstrike-discovers-first-ever-dero-cryptojacking-campaign-targeting-kubernetes/" target="_blank" rel="noopener nofollow noreferrer">CrowdStrike — DERO Cryptojacking Discovery</A>)</P><P><STRONG>Kinsing Campaign (2023–ongoing)</STRONG>: This campaign exploits vulnerabilities in PostgreSQL, WebLogic, Liferay, and WordPress to gain initial access to containers, then pivots to deploy crypto miners across the cluster. The campaign has affected 75+ cloud-native applications. <STRONG>How runtime detection helps</STRONG>: Defender detects process genealogy anomalies — for example, a WebLogic process spawning shell commands that enumerate Kubernetes resources or deploy new containers.</P><P>The pattern: attackers get in through a misconfiguration or vulnerability, then run workloads <STRONG>inside</STRONG> the cluster. Admission policies and CIS benchmarks don't catch threats that start after deployment — that's the gap runtime detection fills.</P><HR /><H2 id="the-solution-azure-arc-defender-for-containers" id="toc-hId-1395786206">The solution: Azure Arc + Defender for Containers</H2><P>For non-AKS clusters, the approach is: <STRONG>Azure Arc</STRONG> (makes the cluster an Azure resource) + <STRONG>Defender for Containers</STRONG> (deploys the runtime sensor as an Arc extension).</P><P><STRONG>What gets installed</STRONG>:</P><UL><LI><STRONG>Arc agents</STRONG> (<CODE>azure-arc</CODE> namespace): maintain outbound connection to Azure</LI><LI><STRONG>Defender sensor</STRONG> (DaemonSet on each node): collects runtime telemetry via eBPF — process creation, network activity, system calls</LI></UL><P><STRONG>What the sensor detects</STRONG>: crypto mining patterns, web shell activity, network scanning tools, binary drift. 
(Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/alerts-containers#workload-runtime-detection" target="_blank" rel="noopener nofollow noreferrer">Workload runtime detection</A>)</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="kyma-defender-architecture.png" style="width: 942px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368106iA5FA9AA45DFD92C3/image-size/large?v=v2&px=999" role="button" title="kyma-defender-architecture.png" alt="kyma-defender-architecture.png" /></span></P><P>Arc also provides an <STRONG>extension platform</STRONG> — Defender isn't the only add-on you can deploy this way. And Microsoft provides a <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-verify" target="_blank" rel="noopener nofollow noreferrer">verification checklist</A> so you can prove it's working.</P><P><STRONG>Networking note</STRONG>: Both Arc and Defender require outbound connectivity. If egress is blocked, onboarding fails silently. Check the <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/network-requirements" target="_blank" rel="noopener nofollow noreferrer">Arc network requirements</A> and ensure <CODE>*.cloud.defender.microsoft.com:443</CODE> is allowed.</P><HR /><H2 id="how" id="toc-hId-1199272701">How</H2><P>I’ll show a portal-first path (fastest to understand), then a programmatic path (fastest to automate).</P><H3 id="step-0-pre-flight-checklist" id="toc-hId-1131841915">Step 0 — Pre-flight checklist</H3><P>Here’s what I personally confirm before I touch the portal:</P><P>1) <STRONG>Network egress (outbound)</STRONG></P><UL><LI>Arc agents require outbound access to a set of URLs (Azure Resource Manager, Entra ID token endpoints, container registries for pulling agent images, and more depending on features). (Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/network-requirements" target="_blank" rel="noopener nofollow noreferrer">Azure Arc-enabled Kubernetes network requirements</A>)</LI><LI>Defender for Containers on Arc requires outbound access to <CODE>*.cloud.defender.microsoft.com:443</CODE>. (Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-enable-portal" target="_blank" rel="noopener nofollow noreferrer">Enable Defender for Containers on Arc-enabled Kubernetes (portal)</A>)</LI></UL><P>2) <STRONG>Tooling</STRONG></P><UL><LI>Azure CLI + the <CODE>connectedk8s</CODE> extension (for Arc onboarding). (Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster" target="_blank" rel="noopener nofollow noreferrer">Quickstart: Connect an existing Kubernetes cluster to Azure Arc</A>)</LI><LI>If I want to script extension deployment, I also install the <CODE>k8s-extension</CODE> extension. 
(Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/extensions" target="_blank" rel="noopener nofollow noreferrer">Deploy and manage Arc-enabled Kubernetes extensions</A>)</LI></UL><P>3) <STRONG>Cluster access</STRONG></P><UL><LI><CODE>kubectl</CODE> works and points at the cluster I’m onboarding.</LI><LI>If I’m missing kubeconfig on my workstation, the Kyma Dashboard has a <STRONG>Download kubeconfig</STRONG> link for the cluster.</LI><LI>I sanity-check that my kubeconfig/current context is the Kyma cluster before running anything destructive:</LI></UL><PRE><CODE>kubectl <SPAN class="">config</SPAN> current-<SPAN class="">context</SPAN>
kubectl cluster-info</CODE></PRE><UL><LI>I have capacity for Arc agents (the Arc quickstart calls out resource requirements and that agents are deployed on connect). (Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster" target="_blank" rel="noopener nofollow noreferrer">Quickstart: Connect an existing Kubernetes cluster to Azure Arc</A>)</LI></UL><H3 id="step-1-connect-the-cluster-to-azure-arc" id="toc-hId-935328410">Step 1 — Connect the cluster to Azure Arc</H3><P>I typically do this from a workstation that already has <CODE>kubectl</CODE> access to the cluster.</P><H4 id="1-1-register-providers-if-needed-" id="toc-hId-867897624">1.1 Register providers (if needed)</H4><P>The Arc quickstart includes registering resource providers like <CODE>Microsoft.Kubernetes</CODE>, <CODE>Microsoft.KubernetesConfiguration</CODE>, and <CODE>Microsoft.ExtendedLocation</CODE>. (Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster" target="_blank" rel="noopener nofollow noreferrer">Quickstart: Connect an existing Kubernetes cluster to Azure Arc</A>)</P><H4 id="1-2-run-the-connect-command" id="toc-hId-671384119">1.2 Run the connect command</H4><P>From the Arc quickstart, the core command is:</P><PRE><CODE>az connectedk8s connect --<SPAN class="">name</SPAN> <cluster-<SPAN class="">name</SPAN>> --resource-<SPAN class="">group</SPAN> <resource-<SPAN class="">group</SPAN>></CODE></PRE><P>In practice, I prefer to be explicit (especially on shared subscriptions) and set <CODE>--location</CODE> and <CODE>--tags</CODE>:</P><PRE><CODE>az connectedk8s connect \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --location <azure-region> \
  --tags env=<env> owner=<team> system=<system>
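
# If the connect step fails because the resource providers from step 1.1 aren't registered yet,
# register them first (standard Azure CLI; my addition rather than a quote from the quickstart):
#   az provider register --namespace Microsoft.Kubernetes
#   az provider register --namespace Microsoft.KubernetesConfiguration
#   az provider register --namespace Microsoft.ExtendedLocation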
</CODE></PRE><P>What I’m explicitly setting there:</P><UL><LI><CODE>--location</CODE>: the Azure region where the <STRONG>Azure Arc-enabled Kubernetes resource</STRONG> is created. If you omit it, it’s created in the same region as the resource group.</LI><LI><CODE>--tags</CODE>: Azure Resource Manager tags on the Arc resource (space-separated <CODE>key[=value]</CODE>).</LI></UL><P>If this command hangs or fails in weird ways, I go back to egress first — the Arc network requirements doc is the authoritative “what URLs/ports must my cluster reach?” list. (Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/network-requirements" target="_blank" rel="noopener nofollow noreferrer">Azure Arc-enabled Kubernetes network requirements</A>)</P><P>(Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster" target="_blank" rel="noopener nofollow noreferrer">Quickstart: Connect an existing Kubernetes cluster to Azure Arc</A> and <A href="https://learn.microsoft.com/en-us/cli/azure/connectedk8s?view=azure-cli-latest#az-connectedk8s-connect" target="_blank" rel="noopener nofollow noreferrer">Azure CLI reference — az connectedk8s connect</A>)</P><H4 id="1-3-verify-arc-agents-in-the-cluster" id="toc-hId-474870614">1.3 Verify Arc agents in the cluster</H4><P>The quickstart calls out that Arc deploys agents into the <CODE>azure-arc</CODE> namespace. I validate that they’re <CODE>Running</CODE>:</P><PRE><CODE>kubectl <SPAN class="">get</SPAN> deployments,pods -n azure-<SPAN class="">arc</SPAN>
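
# Optional extra check (my addition, not from the quickstart): list agent pods that are not Running yet
kubectl get pods -n azure-arc --field-selector=status.phase!=Running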
</CODE></PRE><P>(Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster" target="_blank" rel="noopener nofollow noreferrer">Quickstart: Connect an existing Kubernetes cluster to Azure Arc</A>)</P><P>Here’s what that looks like in practice on my Kyma cluster:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arc-pods-kyma.png" style="width: 904px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368107i5371DF84949A9569/image-size/large?v=v2&px=999" role="button" title="arc-pods-kyma.png" alt="arc-pods-kyma.png" /></span></P><P>And here’s the connected cluster resource in Azure (showing things like connectivity status, location, and tags):</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arc-kyma-ui.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368108i00A2C62A41C26166/image-size/large?v=v2&px=999" role="button" title="arc-kyma-ui.png" alt="arc-kyma-ui.png" /></span></P><P>At this point, if Arc isn’t healthy, I stop and fix that first. Everything else depends on it.</P><H3 id="step-2-enable-the-containers-plan-in-microsoft-defender-for-cloud" id="toc-hId-149274390">Step 2 — Enable the Containers plan in Microsoft Defender for Cloud</H3><P>Now I go to Defender for Cloud and enable the <STRONG>Containers</STRONG> plan for the subscription where my Arc-enabled cluster lives.</P><P>The portal walkthrough is:</P><UL><LI>Microsoft Defender for Cloud → <STRONG>Environment settings</STRONG> → pick subscription → toggle <STRONG>Containers</STRONG> plan On</LI><LI>Select <STRONG>Settings</STRONG> next to the Containers plan → choose <STRONG>Enable specific components</STRONG></LI></UL><P>(Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-enable-portal" target="_blank" rel="noopener nofollow noreferrer">Enable Defender for Containers on Arc-enabled Kubernetes (portal)</A>)</P><P>At this point you’ll be asked which Containers plan components to enable.</P><P>You <EM>can</EM> enable everything, but for this post I’m intentionally focusing on the <STRONG>Defender sensor</STRONG> (runtime detections). The important callout: <STRONG>from a pricing perspective there’s no cost benefit to enabling one vs. 
many — the cost is the same</STRONG> — so this is purely about keeping the walkthrough scoped to runtime detection.</P><P>Here’s what that looks like in the portal (first the Containers plan settings, then the component selection where I keep only the sensor in scope):</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="enable-defender-containers-settings.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368109iBE82CDA5D44C2406/image-size/large?v=v2&px=999" role="button" title="enable-defender-containers-settings.png" alt="enable-defender-containers-settings.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="defender-settings-details.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368110i15410E5DD773E877/image-size/large?v=v2&px=999" role="button" title="defender-settings-details.png" alt="defender-settings-details.png" /></span></P><H3 id="step-3-deploy-defender-components-to-the-arc-enabled-cluster" id="toc-hId--122470484">Step 3 — Deploy Defender components to the Arc-enabled cluster</H3><P>I use one of two flows.</P><H4 id="option-a-recommended-deploy-via-defender-for-cloud-recommendations" id="toc-hId--612386996">Option A (recommended): Deploy via Defender for Cloud Recommendations</H4><P>This is the “guided remediation” path:</P><UL><LI>Defender for Cloud → <STRONG>Recommendations</STRONG></LI><LI>Find “Azure Arc-enabled Kubernetes clusters should have Defender extension installed”</LI><LI>Select the clusters → <STRONG>Fix</STRONG></LI></UL><P>(Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-enable-portal" target="_blank" rel="noopener nofollow noreferrer">Enable Defender for Containers on Arc-enabled Kubernetes (portal)</A>)</P><H4 id="option-b-deploy-manually-from-the-arc-cluster-resource" id="toc-hId--808900501">Option B: Deploy manually from the Arc cluster resource</H4><P>If I want explicit control (or I’m debugging), I do:</P><UL><LI>Arc-enabled Kubernetes resource → <STRONG>Extensions</STRONG> → <STRONG>+ Add</STRONG></LI><LI>Install <STRONG>Microsoft Defender for Containers</STRONG></LI><LI>Choose/configure the <STRONG>Log Analytics workspace</STRONG> during installation (this is where the extension sends collected logs/telemetry used by Defender for Cloud and Azure Monitor Logs)<UL><LI>I can select an existing workspace, create a new one, or use the default: <CODE>DefaultWorkspace-[subscription-id]-[region]</CODE></LI></UL></LI></UL><P>(Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-enable-portal" target="_blank" rel="noopener nofollow noreferrer">Enable Defender for Containers on Arc-enabled Kubernetes (portal)</A>)</P><H3 id="step-4-optional-programmatic-deployment-repeatable-automation-" id="toc-hId--712010999">Step 4 (optional) — Programmatic deployment (repeatable automation)</H3><P>If I’m onboarding clusters at scale, I don’t want a click path. The programmatic doc gives the Azure CLI commands for creating the Defender extension.</P><P>Defender sensor extension:</P><P>Note: Some examples include an <CODE>auditLogPath</CODE> setting for clusters where you control the API server audit log file location. In Kyma, audit logs are handled via SAP’s Platform Logging Service and you generally don’t have direct access to that file path, so I’m omitting it here.</P><PRE><CODE>az k8s-extension create \
-<SPAN class="">-name microsoft.azuredefender.kubernetes \</SPAN> -<SPAN class="">-cluster-type connectedClusters \</SPAN> -<SPAN class="">-cluster-name <cluster-name> \</SPAN> -<SPAN class="">-resource-group <resource-group> \</SPAN> -<SPAN class="">-extension-type microsoft.azuredefender.kubernetes \</SPAN> -<SPAN class="">-configuration-settings \</SPAN> logAnalyticsWorkspaceResourceID="/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"</CODE></PRE><P>(Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-enable-programmatically" target="_blank" rel="noopener nofollow noreferrer">Deploy Defender for Containers on Arc-enabled Kubernetes (programmatic)</A>)</P><P>If you need the generic “how do extensions work / how do I list/update/delete them” reference, the Arc extensions doc is the canonical place. (Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/extensions" target="_blank" rel="noopener nofollow noreferrer">Deploy and manage Arc-enabled Kubernetes extensions</A>)</P><H3 id="step-5-verify-it-s-actually-working" id="toc-hId--908524504">Step 5 — Verify it’s actually working</H3><P>This is where I slow down and prove success.</P><P>Microsoft’s verification checklist is:</P><UL><LI>Arc connection is healthy</LI><LI>Defender extension shows as installed</LI><LI>Sensor pods are running</LI><LI>Alerts appearing</LI></UL><P>(Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-verify" target="_blank" rel="noopener nofollow noreferrer">Verify Defender for Containers on Arc-enabled Kubernetes</A>)</P><H4 id="5-1-verify-arc-connectivity" id="toc-hId--1398441016">5.1 Verify Arc connectivity</H4><PRE><CODE>az connectedk8s show \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --query connectivityStatus
</CODE></PRE><P>The expected output is <CODE>Connected</CODE>. (Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-verify" target="_blank" rel="noopener nofollow noreferrer">Verify Defender for Containers on Arc-enabled Kubernetes</A>)</P><H4 id="5-2-verify-defender-extension-provisioning" id="toc-hId--1594954521">5.2 Verify Defender extension provisioning</H4><PRE><CODE>az k8s-extension show \
  --name microsoft.azuredefender.kubernetes \
  --cluster-type connectedClusters \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --query provisioningState
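
# Optional (my addition): list every extension installed on the Arc-connected cluster
az k8s-extension list \
  --cluster-type connectedClusters \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --output table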
</CODE></PRE><P>The expected output is <CODE>Succeeded</CODE>. (Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-verify" target="_blank" rel="noopener nofollow noreferrer">Verify Defender for Containers on Arc-enabled Kubernetes</A>)</P><H4 id="5-3-verify-sensor-pods" id="toc-hId--1791468026">5.3 Verify sensor pods</H4><PRE><CODE>kubectl get pods -n kube-system -l app=microsoft-defender
# If you don’t see anything in kube-system, also check the mdc namespace:
kubectl get ds -n mdc
kubectl <SPAN class="">get</SPAN> pods -n mdc</CODE></PRE><P>This is the simplest “is the sensor deployed?” check. If the DaemonSet exists and the pods are <CODE>Running</CODE>, you’re in good shape.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="defender-daemonsets.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368111iC3A48CB4F95070E2/image-size/large?v=v2&px=999" role="button" title="defender-daemonsets.png" alt="defender-daemonsets.png" /></span></P><P>(Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-verify" target="_blank" rel="noopener nofollow noreferrer">Verify Defender for Containers on Arc-enabled Kubernetes</A>)</P><H4 id="5-4-verify-in-the-portal" id="toc-hId--1987981531">5.4 Verify in the portal</H4><P>This is the “did Azure actually receive the signals?” check.</P><P>After you’ve deployed the Defender extension and the sensor is running, go to <STRONG>Microsoft Defender for Cloud</STRONG> and look at <STRONG>Security alerts</STRONG> (or the Alerts view in the Defender for Cloud experience). If you just ran the simulator (next step), this is where you’ll see the resulting alerts.</P><P>It can take a bit of time (think minutes, not seconds) for the cluster and alerts to show up after onboarding. (Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-verify" target="_blank" rel="noopener nofollow noreferrer">Verify Defender for Containers on Arc-enabled Kubernetes</A>)</P><H4 id="5-5-optional-prove-runtime-detection-by-simulating-alerts" id="toc-hId-2110472260">5.5 (Optional) Prove runtime detection by simulating alerts</H4><P>If I want hard proof that the sensor-backed detections are flowing end-to-end, I use Microsoft’s Kubernetes alerts simulation tool.</P><P>It has two prerequisites that matter in practice:</P><UL><LI>Defender for Containers is enabled and the Defender sensor is deployed.</LI><LI>I have admin permissions on the cluster.</LI></UL><P>Then I download and run the simulator:</P><PRE><CODE>curl -O http<SPAN class="">s:</SPAN>//raw.githubusercontent.<SPAN class="">com</SPAN>/microsoft/Defender-<SPAN class="">for</SPAN>-Cloud-Attack-Simulation/refs/heads/main/simulation.<SPAN class="">py</SPAN>
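# Assumptions on my side (not spelled out in the excerpt above): a local Python 3 interpreter, and
# kubectl's current context pointing at the Arc-connected Kyma cluster, since the simulator creates
# the workloads that trigger the sample alerts in that cluster.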
<SPAN class="">python</SPAN> simulation.<SPAN class="">py</SPAN>
</CODE></PRE><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="run-simulation-alerts.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368112iC3515E0A16A305FC/image-size/large?v=v2&px=999" role="button" title="run-simulation-alerts.png" alt="run-simulation-alerts.png" /></span></P><P>After it runs, I go back to Defender for Cloud and look at the alerts that were generated:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="defender-alerts-simulation.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368113i931D880C4A9ABCFE/image-size/large?v=v2&px=999" role="button" title="defender-alerts-simulation.png" alt="defender-alerts-simulation.png" /></span></P><P>(Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/alerts-containers#kubernetes-alerts-simulation-tool" target="_blank" rel="noopener nofollow noreferrer">Kubernetes alerts — Kubernetes alerts simulation tool</A>)</P><H4 id="5-6-inspect-the-alert-details-example-binary-drift-" id="toc-hId-2082142446">5.6 Inspect the alert details (example: binary drift)</H4><P>To make this feel real (and to sanity-check what Defender is actually flagging), I open one of the generated alerts and look at the <STRONG>Alert details</STRONG> pane. For example, the “A drift binary detected executing in the container” alert includes fields like the <STRONG>suspicious process path</STRONG>, <STRONG>command line</STRONG>, <STRONG>parent process</STRONG>, and the <STRONG>affected Arc-enabled Kubernetes resource</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="details-drift-binary.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368114i0C069D1D3E3DB66D/image-size/large?v=v2&px=999" role="button" title="details-drift-binary.png" alt="details-drift-binary.png" /></span></P><H3 id="step-6-troubleshooting-the-short-list-" id="toc-hId--2115935348">Step 6 — Troubleshooting (the short list)</H3><H4 id="6-1-if-an-extension-is-stuck-check-egress-first" id="toc-hId-1689115436">6.1 If an extension is stuck, check egress first</H4><UL><LI>Arc-required outbound URLs: (Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/network-requirements" target="_blank" rel="noopener nofollow noreferrer">Azure Arc-enabled Kubernetes network requirements</A>)</LI><LI>Defender-required outbound endpoint (<CODE>*.cloud.defender.microsoft.com:443</CODE>) (Docs: <A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-enable-portal" target="_blank" rel="noopener nofollow noreferrer">Enable Defender for Containers on Arc-enabled Kubernetes (portal)</A>)</LI></UL><H4 id="6-2-if-things-drift-over-time" id="toc-hId-1492601931">6.2 If things drift over time</H4><P>The Arc extensions doc notes that if Arc agents don’t have network connectivity for an extended period, an extension can transition to <CODE>Failed</CODE>, and you may need to recreate the extension. (Docs: <A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/extensions" target="_blank" rel="noopener nofollow noreferrer">Deploy and manage Arc-enabled Kubernetes extensions</A>)</P><HR /><H2 id="closing-thoughts" id="toc-hId-1882894440">Closing thoughts</H2><P>If you’re running Kubernetes outside AKS, it’s easy to end up with fragmented security tooling. 
The Arc + Defender for Containers pattern is one of the cleaner ways I’ve found to bring:</P><UL><LI>centralized visibility,</LI><LI>actionable runtime alerts,</LI><LI>and runtime security signals</LI></UL><P>into a hybrid Kubernetes estate—without replatforming.</P><P>In future posts, I’ll explore what else we can do with <STRONG>Kyma + Azure Arc + Azure</STRONG> beyond Defender for Containers (observability, more security patterns, etc.).</P><HR /><H2 id="references-microsoft-learn-" id="toc-hId-1686380935">References (Microsoft Learn)</H2><UL><LI><A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/overview" target="_blank" rel="noopener nofollow noreferrer">Azure Arc-enabled Kubernetes overview</A></LI><LI><A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/network-requirements" target="_blank" rel="noopener nofollow noreferrer">Azure Arc-enabled Kubernetes network requirements</A></LI><LI><A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster" target="_blank" rel="noopener nofollow noreferrer">Quickstart: Connect an existing Kubernetes cluster to Azure Arc</A></LI><LI><A href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/extensions" target="_blank" rel="noopener nofollow noreferrer">Deploy and manage Arc-enabled Kubernetes extensions</A></LI><LI><A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-enable-portal" target="_blank" rel="noopener nofollow noreferrer">Enable Defender for Containers on Arc-enabled Kubernetes (portal)</A></LI><LI><A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-enable-programmatically" target="_blank" rel="noopener nofollow noreferrer">Deploy Defender for Containers on Arc-enabled Kubernetes (programmatic)</A></LI><LI><A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-arc-verify" target="_blank" rel="noopener nofollow noreferrer">Verify Defender for Containers on Arc-enabled Kubernetes</A></LI><LI><A href="https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-architecture" target="_blank" rel="noopener nofollow noreferrer">Defender for Containers architecture</A></LI></UL>2026-02-02T15:52:32.766000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/kyma-evolution-transforming-sap-kyma-into-a-tailor-made-saas-platform-for/ba-p/14317418Kyma Evolution: Transforming SAP Kyma into a tailor-made SaaS Platform for sbs extensions2026-02-03T07:53:06.319000+01:00ChristianWeisshttps://community.sap.com/t5/user/viewprofilepage/user-id/136917<H2 id="toc-hId-1788749541"><STRONG>Introduction</STRONG></H2><P><SPAN>Hello SAP Community!</SPAN></P><P data-unlink="true"><SPAN>As an Extensibility Expert, I’m constantly looking for the most efficient ways to build and operate enterprise-grade extensions for the SAP Cloud ERP following the <SPAN><A href="https://www.sap.com/resources/what-is-a-clean-core" target="_self" rel="noopener noreferrer">Clean Core principles</A><SPAN>. 
We all know the Cloud Application Programming Model (CAP) is the go-to framework for Cloud Native side-by-side Extensions on BTP, but where you run them on large scale matters just as much as how you code it.</SPAN></SPAN> </SPAN></P><P data-unlink="true"><SPAN>Today, I’m starting a 3-part series (<A href="https://community.sap.com/t5/technology-blog-posts-by-sap/part-2-runtime-architecture-amp-cost-efficiency-gains/ba-p/14317915" target="_self">Part 2,</A> <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/part-3-automated-application-lifecycle-management-in-action/ba-p/14317937" target="_self">Part 3 )</A> "Kyma Evolution". We will explore how the powercouple of Kyma in combination with the <SPAN><A href="https://sap.github.io/cap-operator/" target="_self" rel="nofollow noopener noreferrer">CAP Operator</A><SPAN> provides a scalable, high-performance, cost-effective runtime for CAP Multitenancy SaaS applications. In case you don’t know SAP Kyma and its benefits yet, please have a look at the </SPAN></SPAN><SPAN><A href="https://learning.sap.com/courses/developing-applications-in-sap-btp-kyma-runtime/exploring-the-benefits-of-the-sap-btp-kyma-runtime_f093d2b5-a598-43bb-9a25-e224e97b747a" target="_self" rel="noopener noreferrer">Kyma Learning</A><SPAN> which explains it quite well. </SPAN></SPAN> </SPAN></P><H2 id="toc-hId-1592236036"><STRONG>The paradigm shift: From monolithic to a modular environment</STRONG></H2><P data-unlink="true"><SPAN>For a long time, SAP Kyma was seen as a fixed bundle of tools. You got everything, whether you needed it or not. That has changed. With the <SPAN><A href="https://help.sap.com/docs/btp/sap-business-technology-platform/kyma-modules" target="_self" rel="noopener noreferrer">Kyma Module Concept</A><SPAN>, the platform is now modular and extensible. </SPAN></SPAN> </SPAN><SPAN>Even more exciting is the introduction of <SPAN><A href="https://kyma-project.io/external-content/community-modules/docs/user/README.html" target="_self" rel="nofollow noopener noreferrer">Community Modules</A><SPAN>. This means Kyma is no longer limited to what SAP provides out-of-the-box. The community can now contribute extensions. One of the most powerful examples so far is the CAP Operator. </SPAN></SPAN> </SPAN><SPAN>By adding the CAP Operator as a module, you transform a generic Kyma cluster into a specialized CAP SaaS Runtime.</SPAN></P><H2 id="toc-hId-1395722531"><STRONG>The "Smart Broker": Why the CAP Operator is a game-changer</STRONG></H2><P><SPAN>When we compare Kyma to SAP BTP Cloud Foundry, the advantages of using a dedicated Operator become clear. While Cloud Foundry is a great general-purpose platform, the CAP Operator on Kyma acts as a "smart broker" specifically for CAP Applications.</SPAN></P><P><SPAN>Key Benefits for Partners and Customers:</SPAN></P><UL><LI><STRONG>100% Automated Lifecycle Management<SPAN>: </SPAN></STRONG>The Operator understands CAP. 
It handles application deployment, tenant management, DB model updates, and service bindings automatically and new capabilities like <A href="https://help.sap.com/docs/hana-cloud/sap-hana-cloud-multitenancy/introducing-sap-hana-cloud-multitenancy" target="_self" rel="noopener noreferrer">SAP HANA Cloud Native tenants </A>(<A href="https://www.sap.com/assetdetail/2025/01/fae009ce-f07e-0010-bca6-c68f7e60039b.html" target="_self" rel="noopener noreferrer">Videocast</A>) including tenant backup and restore management can be added once available.</LI><LI><STRONG>Cost-Efficiency via Higher Container Density<SPAN>: </SPAN></STRONG>In contrast to Cloud Foundry, Kyma allows for fine-grained resource limits. This means you can run more services on the same infrastructure, lowering your BTP bill when you are running multiple CAP Applications.</LI><LI><STRONG>Cost-Efficiency via Resource Sharing:</STRONG> Expensive Services like SAP HANA Cloud and Cloud Logging can be shared in CF and Kyma. However we will see in <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/part-2-runtime-architecture-amp-cost-efficiency-gains/ba-p/14317915" target="_self">Part 2</A> that in the Kyma environment this will be more elegant and simplified using the CAP Operator.</LI><LI><STRONG>Enterprise Resilience & Day-2 Ops:</STRONG> It provides built-in "Reconciliation Loops." If a tenant's database connection fails, the Operator detects and fixes it without human intervention.</LI><LI><STRONG>Advanced Networking & Security (Zero Trust Architecture):</STRONG> One of the most significant advantages over Cloud Foundry is how Kyma handles connectivity. In Cloud Foundry, preventing a service from being publicly accessible often requires manual effort. In Kyma, thanks to the CAP Operator's integration with Istio, your internal services stay cluster-internal by default. You have full control over the attack surface.</LI><LI><STRONG><STRONG>Integrated Domain Management:</STRONG></STRONG><SPAN> Managing Custom Domains becomes a native experience using the CAP Operator. Instead of juggling external services for certificates, you can orchestrate your SaaS-brand URLs directly within the Kyma environment, providing a seamless, secure and automated experience.</SPAN></LI></UL><H2 id="toc-hId-1199209026"><STRONG>The ecosystem enablers: Extensibility using CAP Plugins + Kyma Modules</STRONG></H2><P><SPAN>SAP is providing enablers for the ecosystem on different levels:</SPAN></P><OL><LI><SPAN>At the Infrastructure Level (Kyma): We use Modules (like the CAP Operator) to make the runtime "CAP-aware."</SPAN></LI><LI><SPAN>At the Application Level (CAP): We use <A href="https://cap.cloud.sap/docs/plugins/" target="_blank" rel="noopener nofollow noreferrer">CAP Plugins</A> to add business features like multitenancy or audit logging with a single command.</SPAN></LI><LI>At CAP CLI Level: We can add plugins like <A href="https://github.com/cap-js/cap-operator-plugin" target="_self" rel="nofollow noopener noreferrer">CAP Operator Plugin</A> to the cds cli which provides capabilities to generate CAP Operator resources essential for deploying multi-tenant CAP Applications from your project setup.</LI></OL><P><SPAN>This synergy creates the ultimate development and operations experience. 
You use plugins to build fast and the Operator to run smart and it enables you to contribute and to add your specific capabilities at any level.</SPAN></P><H2 id="toc-hId-1002695521"><STRONG>Turning Kyma into a CAP Runtime</STRONG></H2><P><SPAN>The beauty of this evolution is simplicity. You can now add the CAP Operator Community Module directly from your Kyma Cockpit using the Add Modules or Modify Modules Button on the Cluster Details Screen.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_0-1769755560640.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367263i51979E8029DF7661/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_0-1769755560640.png" alt="ChristianWeiss_0-1769755560640.png" /></span></P><P><SPAN>The Modules overview shows you the SAP Provided and Managed Modules under the Module Pane and the installed Community Modules in a separate Pane.</SPAN></P><P><SPAN>As a next step use the Add button on the Community Module Pane.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_1-1769755560641.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367261i5C2D6A8A40C33CED/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_1-1769755560641.png" alt="ChristianWeiss_1-1769755560641.png" /></span></P><P><SPAN>On the next screen you need to choose the Add Source YAML’s button to get the list of community modules. </SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_2-1769755560643.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367262i9A4373899D1581FA/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_2-1769755560643.png" alt="ChristianWeiss_2-1769755560643.png" /></span></P><P><SPAN>Just keep the defaults and press the Add button.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_3-1769755560644.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367264i3E837B923A2BDF69/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_3-1769755560644.png" alt="ChristianWeiss_3-1769755560644.png" /></span></P><P><SPAN>This will load the list of currently available Community Modules from which you need to check the cap-operator Tile and press Add. </SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_4-1769755560644.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367265iB6084074BDFA8A2F/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_4-1769755560644.png" alt="ChristianWeiss_4-1769755560644.png" /></span></P><P><SPAN>This will start the automatic installation of the CAP Operator Components into the namespace called cap-operator-system. 
</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_5-1769755560646.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367266i317643E1C2BBD9A2/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_5-1769755560646.png" alt="ChristianWeiss_5-1769755560646.png" /></span></P><P><SPAN>The installation and module status will be displayed in the overview and will turn into green after installation is successfully finished.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_6-1769755560646.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367267iDB28E33E752F9095/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_6-1769755560646.png" alt="ChristianWeiss_6-1769755560646.png" /></span></P><P data-unlink="true"><SPAN>By doing this, you aren't just deploying code; you are creating an intelligent system that manages your S/4HANA Multitenancy extensions built with CAP for you as it adds <A href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources" target="_self" rel="nofollow noopener noreferrer">Custom Resources (CRs)</A> like CAPApplication, CAPApplicationVersion, CAP Tenant and other to your cluster. This will enable: </SPAN></P><UL><LI><SPAN>Quick and easy deployment of CAP application backends, router, and related networking components.</SPAN></LI><LI><SPAN>Integration with SAP Software-as-a-Service Provisioning service to handle asynchronous tenant subscription requests, executing provisioning / deprovisioning tasks as Kubernetes jobs.</SPAN></LI><LI><SPAN>Automated upgrades of known tenants as soon as new application versions are available.</SPAN></LI><LI><SPAN>Support for deployment of service-specific content / configuration as a Kubernetes job with every application version (for example, HTML5 application content to SAP HTML5 Application Repository Service).</SPAN></LI><LI>Management of TLS certificates and DNS entries related to the deployed application, with support of customer-specific domains.</LI></UL><P><SPAN>and further capabilities.</SPAN></P><H2 id="toc-hId-806182016"><STRONG>What’s Next?</STRONG></H2><P><SPAN>In <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/part-2-runtime-architecture-amp-cost-efficiency-gains/ba-p/14317915" target="_self">Part 2</A>, we will dive deeper. We’ll look in more details of the runtime and at a cost benefits to prove why this modular approach is a win for your bottom line. </SPAN><SPAN>What are your thoughts on the modular Kyma concept? Have you tried the CAP Operator yet? 
Have you ideas for additional community modules for the ecosystem?</SPAN></P>2026-02-03T07:53:06.319000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/when-innovation-meets-real-business-impact-vnsg-sap-btp-hackathon-2026/ba-p/14321541When Innovation Meets Real Business Impact - VNSG SAP BTP Hackathon 20262026-02-04T13:53:21.455000+01:00winklerohttps://community.sap.com/t5/user/viewprofilepage/user-id/426853<H2 id="toc-hId-1789495363"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="VNSG SAP BTP Hackathon.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368932iE299B9A444846322/image-size/large?v=v2&px=999" role="button" title="VNSG SAP BTP Hackathon.png" alt="VNSG SAP BTP Hackathon.png" /></span></H2><P> </P><H2 id="toc-hId-1592981858"> <SPAN>A community powered by creativity</SPAN></H2><P class="">The VNSG SAP BTP Hackathon has once again shown what happens when passionate people, real business challenges, and <A href="https://www.sap.com/products/technology-platform.html" target="_blank" rel="noopener noreferrer">SAP Business Technology Platform</A> come together. Teams of customers, partners, and experts turned ideas into tangible solutions that demonstrate how SAP BTP can drive measurable value in day-to-day operations.<SPAN class=""></SPAN></P><P class="">Over the course of the Hackathon, every team brought its own perspective, business context and technical skills – from integration and automation to SAP Business AI and extensions on SAP BTP.</P><P class="">The result: a set of working prototypes that not only impress technically, but are rooted in real customer needs and adoption potential.</P><H2 id="celebrating-the-2026-finalists" id="toc-hId-1396468353">Celebrating the 2026 finalists</H2><P class="">After intense jury deliberation, three teams earned a spot in the grand final at <A href="https://r1.dotdigital-pages.com/p/7UY6-G3U/sync-vnsg-sap-jaarevent" target="_self" rel="nofollow noopener noreferrer">SYNC 2026</A> for their strong presentations, clear business impact, and high potential for real-world rollout.<SPAN class=""></SPAN></P><UL class=""><LI><P class=""><FONT color="#0000FF"><STRONG>Promocean</STRONG></FONT> (supported by <A href="https://www.soapeople.com/" target="_self" rel="nofollow noopener noreferrer">SOA People</A>) – “Operational Excellence through SAP Business AI”: A showcase of how SAP BTP and Business AI can streamline operations and enhance decision-making in a very pragmatic way.<SPAN class=""></SPAN></P></LI><LI><P class=""><FONT color="#0000FF"><STRONG>Ecotone</STRONG></FONT> (supported by <A href="https://expertum.net/" target="_self" rel="nofollow noopener noreferrer">Expertum</A>) – “Interface Maintenance”: A smart solution addressing the often underestimated pain of interface management, using SAP BTP to bring transparency, stability and control.<SPAN class=""></SPAN></P></LI><LI><P class=""><FONT color="#0000FF"><STRONG>NTT Data</STRONG></FONT> – “ProTrack”: A forward-looking use case that leverages SAP BTP to better track, steer, or optimize processes<SPAN class=""></SPAN></P></LI></UL><P class="">These finalists did not just deliver “cool demos”, they delivered stories where business and technology reinforce each other, with SAP BTP at the core as the innovation platform.</P><H2 id="a-strong-field-beyond-the-finalists" id="toc-hId-1199954848">A strong field beyond the finalists</H2><P class="">Of course, a hackathon is about much more than just three 
winners. <A href="https://www.aarini.com/" target="_self" rel="nofollow noopener noreferrer">Aarini Consulting</A>, <A href="https://www.ns.nl/" target="_self" rel="nofollow noopener noreferrer">Nederlandse Spoorwegen</A>, and <A href="https://www.simac.com/en/itnl" target="_self" rel="nofollow noopener noreferrer">Simac IT & BSC</A> also delivered impressive solutions that made the jury’s job anything but easy.<SPAN class=""></SPAN></P><P class="">Each of these teams tackled real business scenarios and used SAP BTP services to create innovation that can move quickly from prototype to production. The organizers explicitly encourage all teams to work with their coaches on post-hackathon follow-up to drive implementation and adoption, because the real success of a hackathon is measured when the ideas go live.</P><H2 id="sync-2026-where-the-stage-is-yours" id="toc-hId-1003441343">SYNC 2026: where the stage is yours</H2><P class="">As recognition of this milestone, the three finalist teams are invited to present at <A href="https://r1.dotdigital-pages.com/p/7UY6-G3U/sync-vnsg-sap-jaarevent" target="_self" rel="nofollow noopener noreferrer">SYNC 2026, the VNSG Annual Conference</A> on March 12. During the opening session, each finalist will pitch their solution live on stage to the SYNC audience. The audience will then vote to determine the VNSG SAP BTP Hackathon Champion – a great opportunity to showcase innovation, customer impact and the power of SAP BTP in front of the wider SAP community.</P><H2 id="keep-the-momentum-going" id="toc-hId-806927838">Keep the momentum going</H2><P class="">For everyone involved – finalists, other teams, coaches and the broader community – this hackathon is a starting point, not an ending. The next steps are about refining solutions, planning production rollouts, and scaling adoption so that the innovation created during the hackathon translates into lasting business impact.<SPAN class=""></SPAN></P><P class="">If you are part of the VNSG or wider SAP community and feel inspired by these stories, take this as a call to action: bring your own business challenges, experiment on SAP BTP and co-innovate with partners and customers.</P><P class="">The SYNC 2026 VNSG SAP BTP Hackathon has proven once more that with the right mix of creativity, collaboration and platform capabilities, you can turn ideas into value – fast...</P><P class=""><a href="https://community.sap.com/t5/c-khhcw49343/SAP+Business+Technology+Platform/pd-p/73555000100700000172" class="lia-product-mention" data-product="1215-1">SAP Business Technology Platform</a> <a href="https://community.sap.com/t5/c-khhcw49343/SAP+Build/pd-p/73555000100700001491" class="lia-product-mention" data-product="1181-1">SAP Build</a> <a href="https://community.sap.com/t5/c-khhcw49343/SAP+Integration+Suite/pd-p/73554900100800003241" class="lia-product-mention" data-product="23-1">SAP Integration Suite</a></P><P class=""><a href="https://community.sap.com/t5/user/viewprofilepage/user-id/341036">@hansvp</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/143759">@qmrjvd</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/893645">@sricsi98</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/1445414">@BvE</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/607164">@mdschoenmakers</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/562">@tamasszirtes</a> <a 
href="https://community.sap.com/t5/user/viewprofilepage/user-id/278191">@f_van_leeuwen</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/62861">@Petra1</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/172702">@BartvdKamp</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/3315">@tedcastelijns</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/1515172">@Arjan_deMol</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/1831">@dvvelzen</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/781602">@JdTeuling</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/120077">@AnnikaHeus</a> <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/5710">@LaurensSteffers</a> </P><P class=""><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="VNSG SAP BTP Hackathon.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368933i8868A4BFB9916559/image-size/large?v=v2&px=999" role="button" title="VNSG SAP BTP Hackathon.png" alt="VNSG SAP BTP Hackathon.png" /></span></P>2026-02-04T13:53:21.455000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/part-2-runtime-architecture-amp-cost-efficiency-gains/ba-p/14317915Part 2: Runtime Architecture & Cost efficiency gains2026-02-04T16:53:28.002000+01:00ChristianWeisshttps://community.sap.com/t5/user/viewprofilepage/user-id/136917<H2 id="toc-hId-1788754343"><STRONG>Introduction</STRONG></H2><P><SPAN>In <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/kyma-evolution-transforming-sap-kyma-into-a-tailor-made-saas-platform-for/ba-p/14317418" target="_self">Part 1</A>, we introduced the </SPAN><STRONG>CAP Operator</STRONG><SPAN><STRONG> as the brain to make out of your Kyma Cluster a runtime for CAP Multitenancy SaaS applications.</STRONG> Today, we dive into technical architecture. We will see why Kyma isn't just "another place to run containers," but can be turned into a highly optimized environment that saves costs by being smarter about how it uses resources, especially when compared to a traditional Cloud Foundry setup, important if you need to run CAP Multitenancy Applications on a large scale.</SPAN></P><H2 id="toc-hId-1592240838"><STRONG>1. The Core Architecture paradigm: Declarative Tenant Management</STRONG></H2><P><SPAN>While Cloud Foundry relies on imperative commands (like </SPAN><SPAN>cf push or cf deploy</SPAN><SPAN>), the CAP Operator uses </SPAN><A href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources" target="_blank" rel="noopener nofollow noreferrer">Custom Resources (CRs)</A><SPAN> the Kubernetes extensibility API for customization. The most important one for SaaS Multitenancy CAP Solutions is the </SPAN><STRONG>CAPTenant.</STRONG></P><P><SPAN>In Cloud Foundry, the </SPAN>Service Manager<SPAN> handles the creation of HDI containers, and it does the same in Kyma. However, the CAP Operator adds a layer of </SPAN><STRONG>"Operational Intelligence"</STRONG><SPAN>:</SPAN></P><UL><LI><STRONG>Autonomous Monitoring:</STRONG><SPAN> The Operator doesn't just trigger the Service Manager; it monitors the entire lifecycle of the tenant's database schema. 
If a </SPAN><SPAN>cds deploy</SPAN><SPAN> (migration) fails, the Operator detects the "Unhealthy" state and can automatically retry or alert, integrated directly into the Kubernetes event stream.</SPAN></LI><LI><STRONG>State Reconciliation:</STRONG><SPAN> It ensures that the state of your tenants always matches your configuration. If a tenant record is stuck, the Operator's reconciliation loop acts as an automated "Day-2" administrator.</SPAN></LI><LI><STRONG>Automatic creation of the Provider tenant</STRONG><SPAN>: The CAP Operator automatically ensures that the provider tenant for your SaaS application is created and updated just like the subscriber tenants, whereas in Cloud Foundry you need to take care of this yourself. </SPAN></LI></UL><H2 id="toc-hId-1395727333">2. The "Zero-Idle" secret: On-Demand MTXS Pods</H2><P><SPAN>In a typical CAP multitenancy setup, the</SPAN><STRONG> CAP MTXS component is the heavy lifter.</STRONG><SPAN> It handles onboarding, unsubscription and database upgrades.</SPAN></P><P><STRONG>The Cloud Foundry Challenge:</STRONG><SPAN> In CF, the MTXS logic often runs as one container</SPAN><SPAN> or a permanent process within your application instance. This means you are constantly paying for the RAM and CPU of this "waiting" process, even if no one is subscribing to your app for days.</SPAN></P><P><STRONG>The Kyma + CAP Operator Advantage:</STRONG><SPAN> The Operator implements what we call a </SPAN><STRONG>"Zero-Idle" policy</STRONG><SPAN> for provisioning tasks:</SPAN></P><UL><LI><STRONG>Trigger-based Lifecycle:</STRONG><SPAN> When a subscription event occurs, the CAP Operator spins up a </SPAN><STRONG>short-lived Job Pod</STRONG><SPAN> specifically for the MTXS task of that tenant. For more details on how the CAP Operator manages the tenant lifecycle, please have a look at the </SPAN><A href="https://sap.github.io/cap-operator/docs/usage/tenant-provisioning/" target="_blank" rel="noopener nofollow noreferrer"><SPAN>Tenant Provisioning Documentation</SPAN></A><SPAN>.</SPAN></LI><LI><STRONG>Scale-to-Zero:</STRONG><SPAN> With the CAP Operator you can <STRONG>run upgrades of your database tenants in parallel</STRONG>. Once the tenant is successfully managed and the database tenant is ready, the MTXS Pod is terminated.</SPAN></LI><LI><STRONG>Cost Benefit:</STRONG><SPAN> You pay </SPAN>zero<SPAN> for the MTXS infrastructure during idle times. In large-scale landscapes with many extensions, this optimization can <STRONG>reduce your runtime memory footprint by </STRONG></SPAN><STRONG>20–30%</STRONG><SPAN>.</SPAN></LI></UL><H2 id="toc-hId-1199213828"><STRONG>3. Scaling without linear cost growth</STRONG></H2><P><SPAN>A common concern is that Kubernetes might be more expensive than Cloud Foundry. The reality is the opposite when scaling SaaS:</SPAN></P><OL><LI><STRONG>Bin-Packing:</STRONG><SPAN> Kubernetes allows for "Overcommitting." Since your applications and tenants aren't all active at the same millisecond, you can pack more application instances onto a single K8S Node than the rigid memory cells of CF allow.</SPAN></LI><LI><STRONG>Resource Pooling:</STRONG><SPAN> By using the CAP Operator to orchestrate the Service Manager, you can efficiently fill up large, cost-effective HANA Cloud instances. 
You avoid the "base cost" of having many small, underutilized HANA instances.</SPAN></LI><LI><STRONG>Efficiency Gains:</STRONG><SPAN> As you grow from 10 to 100 tenants, your infrastructure costs grow at a much flatter angle because the "management overhead" (like MTXS) only consumes resources when it's actually working.</SPAN></LI></OL><H2 id="toc-hId-1002700323"><STRONG>4. Centralized Observability: Selective Logging</STRONG></H2><P><SPAN>Logging is often an overlooked cost driver. While both BTP environments use the </SPAN><STRONG>SAP Cloud Logging Service which can be shared among multiple deployments</STRONG><SPAN>, the way how it's done and how they handle logging data (storage as the major cost driver) is fundamentally different.</SPAN></P><UL><LI><STRONG>Cloud Foundry:</STRONG><SPAN> Every app instance pushes all logs blindly to the service. You have little control over the volume.</SPAN></LI><LI><STRONG>Kyma (The Daemon Approach):</STRONG><SPAN> Kyma uses the </SPAN><STRONG>Telemetry Module</STRONG><SPAN> (Fluent Bit) running as a daemon on every node. </SPAN></LI><UL><LI><STRONG>Pre-Filtering:</STRONG><SPAN> You can define a </SPAN><SPAN>LogPipeline</SPAN><SPAN> to filter logs </SPAN><I><SPAN>before</SPAN></I><SPAN> they leave the cluster. For example, you can exclude send logs from specific production namespaces or filter the logs by specific attributes.</SPAN></LI><LI><STRONG>Centralized Sharing:</STRONG><SPAN> You don't need to bind every app. You define one central pipeline that securely routes logs from the entire cluster to a single Cloud Logging instance, providing a unified view with significantly lower ingest costs.</SPAN></LI></UL></UL><P><SPAN>On how to use the SAP Cloud Logging with SAP Kyma, please have a look into the </SPAN><A href="http://community.sap.com/t5/technology-blog-posts-by-sap/kyma-integration-with-sap-cloud-logging-part-1-introduction-and-shipping/ba-p/13648649" target="_blank"><SPAN>Kyma Integration with SAP Cloud Logging Blog</SPAN></A><SPAN>.</SPAN></P><H2 id="toc-hId-806186818"><STRONG>Summary: The Architect's choice</STRONG></H2><P><SPAN>Choosing Kyma and the CAP Operator isn't just about a "new technology." It's a strategic decision to move <STRONG>from </STRONG></SPAN><STRONG>reserved resources to on-demand resources and how to utilize them more efficiently. 
</STRONG></P><UL><LI><STRONG>CF:</STRONG><SPAN> Predictable, but you often pay for "idling" capacity.</SPAN></LI><LI><STRONG>Kyma + CAP Operator:</STRONG><SPAN> Dynamic, ensuring you only pay for what is actively serving your business or your tenants.</SPAN></LI></UL><H2 id="toc-hId-609673313"><STRONG>What’s Next?</STRONG></H2><P><SPAN>Now that we have seen how the Power couple ”BTP Kyma&CAP Operator” enables you to lower your infrastructure cost in case of running CAP applications on large scale, we will see in <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/part-3-automated-application-lifecycle-management-in-action/ba-p/14317937" target="_self">Part 3</A> how the CAP Operator helps you to run your Application Lifecycle Management 100% automated which is a further important pillar to reduce your overall TCO.</SPAN></P>2026-02-04T16:53:28.002000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/part-3-automated-application-lifecycle-management-in-action/ba-p/14317937Part 3: Automated Application Lifecycle Management in Action2026-02-04T16:53:39.614000+01:00ChristianWeisshttps://community.sap.com/t5/user/viewprofilepage/user-id/136917<H2 id="toc-hId-1788754407"><STRONG>Introduction</STRONG></H2><P><SPAN>Enough theory. In <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/kyma-evolution-transforming-sap-kyma-into-a-tailor-made-saas-platform-for/ba-p/14317418" target="_self">Part 1</A> and <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/part-2-runtime-architecture-amp-cost-efficiency-gains/ba-p/14317915" target="_self">2</A>, we discussed how and why the </SPAN>CAP Operator<SPAN> turns SAP Kyma into a </SPAN><SPAN>runtime for CAP Multitenancy SaaS applications. </SPAN><SPAN><STRONG>Forget complex deployment scripts and manual tenant lifecycle tracking</STRONG>. We will take the official </SPAN><A href="https://github.com/SAP-samples/btp-cap-multitenant-saas" target="_blank" rel="noopener nofollow noreferrer">S<SPAN>AP-samples/btp-cap-multitenant-saas</SPAN></A><SPAN> and show you how to deploy, manage, and monitor your SaaS application with zero friction. This is Platform Engineering for DevOps at its best.</SPAN></P><H2 id="toc-hId-1592240902"><STRONG>1. The Application Blueprint</STRONG></H2><P><SPAN>Instead of managing dozens of individual Kubernetes objects, you define in a declarative way your CAP Application using </SPAN><SPAN> </SPAN><A href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources" target="_blank" rel="noopener nofollow noreferrer">Custom Resources (CRs)</A><SPAN> like CAPApplication, CAPApplicationVersion which will create the necessary Kubernetes native artifacts to manage your application lifecycle. </SPAN><SPAN>In the </SPAN><A href="https://github.com/SAP-samples/btp-cap-multitenant-saas" target="_blank" rel="noopener nofollow noreferrer"><SPAN>Sample Repository</SPAN></A><SPAN> the necessary steps to deploy the </SPAN><SPAN>Sustainable SaaS (SusaaS) example using the CAP Operator is described under </SPAN><A href="https://github.com/SAP-samples/btp-cap-multitenant-saas/tree/main/deploy/cap-operator" target="_blank" rel="noopener nofollow noreferrer"><SPAN>deploy/capoperator</SPAN></A><SPAN>. 
The example will use a Helm Chart which creates the required BTP service instances and bindings using the</SPAN><A href="https://help.sap.com/docs/btp/sap-business-technology-platform/sap-btp-operator-module" target="_blank" rel="noopener noreferrer"><SPAN> SAP BTP Operator</SPAN></A><SPAN> Kyma Module which is installed by default in your Kyma Cluster. Using the </SPAN><I><SPAN>helm dry-run</SPAN></I><SPAN> option you will be able to see all artifacts which will be managed via the Helm Chart. The output of:</SPAN></P><PRE><SPAN><I>helm upgrade -i -n susaas susaas chart -f chart/values-private.yaml --dry-run=client</I></SPAN></PRE><P><SPAN>includes for instance the </SPAN><STRONG>CAPApplication Custom Resource</STRONG><SPAN>:</SPAN></P><pre class="lia-code-sample language-yaml"><code>apiVersion: sme.sap.com/v1alpha1
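# CAPApplication: declares the SaaS application, its domain references, provider tenant and consumed BTP services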
kind: CAPApplication
metadata:
  name: cap-saasmt
spec:
  domainRefs:
    - kind: Domain
      name: dom-saasmt
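  # dom-saasmt refers to a separate Domain custom resource that defines the application's domain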
btpAppName: "saasmt"
globalAccountId: "8ec9195a-6924-45d2-94da-a5c798578808"
provider:
subDomain: "saasmtcws"
tenantId: "7fced6ad-dada-4ffe-a5ec-6f75bfd9d67c"
btp:
services:
- class: "xsuaa"
name: "xsuaa-api"
secret: "saasmt-xsuaa-api-bind-btp"
…
- class: "service-manager"
name: "sm-admin"
secret: "saasmt-sm-admin-bind-btp"</code></pre><P><SPAN>and the </SPAN><STRONG>CAPApplicationVersion Custom Resource</STRONG><SPAN> which specifies all your Application workloads like Approuter, Backend, Service Broker which are running permanently with its specific configurations. In addition it includes the Content Deployment Jobs like to deploy HTML5 Content to the HTML5 Content Repository Service and the Operations to create, update and to delete Application Tenants. These Jobs are scaled to Zero once successfully done.</SPAN></P><pre class="lia-code-sample language-yaml"><code>apiVersion: sme.sap.com/v1alpha1
kind: CAPApplicationVersion
spec:
  capApplicationInstance: "cap-saasmt"
  version: "1"
  registrySecrets:
    - regcred
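  # long-running workloads (CAP srv, approuter, API, service broker) plus one-off content and tenant-operation jobs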
  workloads:
    - name: srv
      consumedBTPServices:
        - "xsuaa"
        - "saas-registry"
        - "destintation"
        - "com-hdi-container"
        - "sm-container"
        - "sm-admin"
      deploymentDefinition:
        type: CAP
        image: "espchris/capop-susaas-srv:0.0.1"
        env:
          - name: DEBUG
            value: "mtx"
        ...
        volumeMounts:
          ...
        volumes:
          ...
        securityContext:
          ...
        resources:
          ...
    - name: router
      ...
    - name: api
      ...
    - name: broker
    - name: hdi-deployer
      ...
    - name: html5-apps-deployer
      ...
    - name: mtxs
      ...
  contentJobs:
    - hdi-deployer
    - html5-apps-deployer
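  # jobs executed for tenant provisioning, upgrade and deprovisioning; they are scaled to zero once done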
  tenantOperations:
    provisioning:
      - workloadName: mtxs
      - workloadName: automator
    upgrade:
      - workloadName: mtxs
    deprovisioning:
      - workloadName: mtxs
      - workloadName: automator</code></pre><H2 id="toc-hId-1395727397"><STRONG>2. Deploy and watch the automation</STRONG></H2><P><SPAN>Once you deploy the application using </SPAN><I><SPAN>helm upgrade -i -n susaas susaas chart -f chart/values-private.yaml </SPAN></I><SPAN>as described in the </SPAN><A href="https://github.com/SAP-samples/btp-cap-multitenant-saas/blob/main/deploy/cap-operator/README.md" target="_blank" rel="noopener nofollow noreferrer"><SPAN>documentation</SPAN></A><SPAN>, the automation for instance includes:</SPAN></P><UL><LI><STRONG>Service Auto-Provisioning:</STRONG><SPAN> It talks to the BTP Operator Module, creates the required instances, and binds them.</SPAN></LI><LI><STRONG>Manages the Application workload and content deployment</STRONG><SPAN>: It creates, for instance, the certificate for your application wildcard domain, including the necessary DNS entry for your SaaS application, and triggers the content deployment jobs, for example to deploy the Fiori application HTML5 content to the BTP HTML5 repository service.</SPAN></LI><LI><STRONG>Provider Tenant creation/upgrade:</STRONG><SPAN> It initializes your provider subaccount database tenant by calling the MTX job automatically, a step that used to be a manual headache.</SPAN></LI></UL><H2 id="toc-hId-1199213892"><STRONG>3. Native Monitoring in the Kyma Dashboard (Busola)</STRONG></H2><P><SPAN>Who says Kubernetes is CLI-only? One of the biggest advantages of the CAP Operator is its </SPAN><STRONG>native integration into the Kyma Dashboard (Busola)</STRONG><SPAN>, which is integrated into the BTP Cockpit. </SPAN><SPAN>In Busola, you don't just see "Pods"; you see the status of your </SPAN><STRONG>Application</STRONG><SPAN>:</SPAN></P><UL><LI><STRONG>Visual Health Check:</STRONG><SPAN> Check the status of your </SPAN><STRONG>CAPApplication, CAPApplicationVersions</STRONG><SPAN> and </SPAN><STRONG>CAPTenants</STRONG><SPAN> at a glance. Green status means the DB is migrated, services are bound, and the app is ready for consumers.</SPAN></LI><LI><STRONG>Tenant Provisioning Deep-Dive:</STRONG><SPAN> Click on a specific subscriber tenant to see its configuration and history.</SPAN></LI><LI><STRONG>Real-time Logs:</STRONG><SPAN> If a tenant upgrade fails, you will see that directly in the UI. </SPAN></LI></UL><P><SPAN>Here is an example showing the details of the CAPApplicationVersion.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_0-1769702710726.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367030iC6EDF4F2EEDDEF5A/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_0-1769702710726.png" alt="ChristianWeiss_0-1769702710726.png" /></span></P><P><SPAN>CAPTenants</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_1-1769702710727.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367031iD34AF51DB5D38F73/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_1-1769702710727.png" alt="ChristianWeiss_1-1769702710727.png" /></span></P><H2 id="toc-hId-1002700387"><STRONG>4. 
Subscriber tenant provisioning including specific tenant operations</STRONG></H2><P><SPAN>Once the application is in a Ready State the subscription of further tenants is done in the same way like for Cloud Foundry Applications by creating a new subscriber subaccount and creating a new subscription which will trigger the </SPAN><A href="https://sap.github.io/cap-operator/docs/usage/tenant-provisioning/" target="_blank" rel="noopener nofollow noreferrer"><SPAN>Tenant subscription workflow</SPAN></A><SPAN> creating a new database schema, ISTIO Virtual Service for you Subaccount and will trigger all other defined automations.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_0-1769703021033.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367033i5DC50B7B57A71350/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_0-1769703021033.png" alt="ChristianWeiss_0-1769703021033.png" /></span></P><P><SPAN>The result will be again visible in the Kyma Dashboard where we will see all Jobs including Status which have been executed.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_1-1769703021035.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367034iF4804270EE010D49/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_1-1769703021035.png" alt="ChristianWeiss_1-1769703021035.png" /></span></P><H2 id="toc-hId-806186882"><STRONG>5. Scaling your lifecycle operations</STRONG></H2><P><SPAN>What happens when you have 100+ tenants and a new release?</SPAN></P><UL><LI><STRONG>Parallel Upgrades:</STRONG><SPAN> The Operator doesn't wait in line. It spins up parallel MTXS jobs to upgrade your tenants' database schemas simultaneously, respecting the limits you define.</SPAN></LI><LI><STRONG>Self-Healing:</STRONG><SPAN> If a tenant's schema drifts or a migration fails, the Operator's reconciliation loop kicks in. It's like having a </SPAN><STRONG>24/7 SRE (Site Reliability Engineer)</STRONG><SPAN> built into your cluster.</SPAN></LI></UL><P><SPAN>The results of a Application upgrade using command helm upgrade -i -n saasmt susaas chart -f chart/values-private.yaml can be again seen in the Kyma Dashboard.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ChristianWeiss_2-1769703134553.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367035iA02DE59B18B1236F/image-size/large?v=v2&px=999" role="button" title="ChristianWeiss_2-1769703134553.png" alt="ChristianWeiss_2-1769703134553.png" /></span></P><H2 id="toc-hId-609673377"><STRONG>6. The "Golden Path": GitOps with the CAP Operator</STRONG></H2><P>Automated ALM is great, but manually running commands is still a potential source of error. This is where GitOps comes in. 
Since the CAP Operator uses a purely declarative approach, it provides the perfect foundation for tools like Argo CD.</P><UL><LI><STRONG>Infrastructure as Code (IaC): </STRONG><SPAN>Your entire SaaS landscape from the CAPApplication definition to the individual CAPApplicationVersion configurations is stored in a Git repository.</SPAN></LI><LI><STRONG>Automatic Sync:</STRONG><SPAN> When you push a new image version or a new tenant configuration to Git, Argo CD (or any GitOps tool) detects the change and synchronizes it with your Kyma cluster.</SPAN></LI><LI><STRONG>Audit Trail & Revert:</STRONG><SPAN> Every change to your SaaS environment is tracked in Git. Need to rollback a buggy tenant upgrade? Just revert the commit, and the CAP Operator will automatically restore the previous state.</SPAN></LI></UL><P><SPAN>By combining GitOps with the CAP Operator, you achieve the highest level of maturity in cloud-native operations: Your Git repo becomes the single source of truth for your entire "CAP-as-a-Service" platform.</SPAN></P><H2 id="toc-hId-413159872"><STRONG>Summary: Kyma as the runtime for CAP Multitenancy SaaS applications</STRONG></H2><P><SPAN>SAP provides with Kyma a runtime for running CAP Multitenancy solutions with minimal costs on a large scale and it includes openness to allow the ecosystem to extend it with additional operators. Using the Kyma Module CAP Operator we've turned a complex Kubernetes cluster into a specialized </SPAN><STRONG>"CAP-as-a-Service"</STRONG><SPAN> platform. </SPAN></P><P><SPAN>We have seen how to:</SPAN></P><UL><LI><STRONG>Turn your cluster into a "CAP-as-a-Service"</STRONG><SPAN> platform with a view clicks (<A href="https://community.sap.com/t5/technology-blog-posts-by-sap/kyma-evolution-transforming-sap-kyma-into-a-tailor-made-saas-platform-for/ba-p/14317418" target="_self">Part 1</A>).</SPAN></LI><LI><STRONG>Reduce Infrastructure Cost </STRONG><SPAN>through resource sharing and optimization (<A href="https://community.sap.com/t5/technology-blog-posts-by-sap/part-2-runtime-architecture-amp-cost-efficiency-gains/ba-p/14317915" target="_self">Part 2</A>).</SPAN></LI><LI><STRONG>Automate the Application Lifecycle with the help of the CAP Operator </STRONG><SPAN> (Part 3).</SPAN></LI></UL><P><SPAN>The future of SAP CAP Application development and operations is open, modular and operator-led. Start exploring the capabilities and transform your Kyma cluster into a high-performance and scalable engine! To create the necessary Helm artifacts to deploy your CAP Application using the CAP Operator please make use of the </SPAN><A href="https://github.com/cap-js/cap-operator-plugin" target="_blank" rel="noopener nofollow noreferrer"><SPAN>cap-operator-plugin</SPAN></A><SPAN> which we will explain in more detail in one of the next blogs.</SPAN></P>2026-02-04T16:53:39.614000+01:00https://community.sap.com/t5/sap-for-oil-gas-and-energy-blog-posts/reimagining-utility-transformation-clean-core-principles-powered-by-sap-btp/ba-p/14321645Reimagining Utility Transformation: Clean Core Principles Powered by SAP BTP2026-02-06T07:51:30.744000+01:00Atul_Joshi85https://community.sap.com/t5/user/viewprofilepage/user-id/2274193<H1 id="toc-hId-1660413609">Introduction</H1><P>Across the utility industry, one message appears in every transformation discussion: <STRONG>“Keep the core clean.”</STRONG> The guidance is sound, especially for organizations preparing for S/4HANA. 
But inside SAP IS‑U—where regulatory complexity, legacy billing logic, and decades of Z‑programs coexist—this principle often feels contradictory.</P><P>Utilities must modernize rapidly while protecting the stability of their revenue engine. This creates a familiar tension: <STRONG>innovation requires speed, but the core requires caution.</STRONG></P><P>This post explores why that tension exists, how SAP BTP resolves it, and how utilities can operationalize a Clean Core strategy without compromising business continuity.</P><H1 id="toc-hId-1463900104">Main Body</H1><H2 id="toc-hId-1396469318">1. The Customization Cage: How Utilities Got Here</H2><P>For many years, utilities solved business needs by customizing the ERP core:</P><UL><LI>Custom ABAP for tariffs</LI><LI>Modified standard tables for regulatory fields</LI><LI>Embedded workflows inside IS‑U</LI><LI>Enhancements tightly coupled to billing and device management</LI></UL><P>These decisions were practical at the time—but over decades, they created a <STRONG>Rigid Core</STRONG> that is difficult to upgrade, expensive to test, and slow to evolve.</P><P><STRONG>Resulting challenges:</STRONG></P><UL><LI>High upgrade risk</LI><LI>Long regression cycles</LI><LI>Slow innovation</LI><LI>Technical debt that compounds every year</LI></UL><P>Your first diagram captures this reality perfectly.</P><P><STRONG>Diagram 1 — The Customization Cage </STRONG></P><P><STRONG> </STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Atul_Joshi85_0-1770216536920.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368957i685E3BEBFF734633/image-size/medium?v=v2&px=400" role="button" title="Atul_Joshi85_0-1770216536920.png" alt="Atul_Joshi85_0-1770216536920.png" /></span></P><P> </P><P><STRONG> </STRONG></P><P> </P><H2 id="toc-hId-1199955813">2. Clean Core + SAP BTP: The Agility Layer Utilities Needed</H2><P>A Clean Core strategy is not about reducing capability—it’s about <STRONG>relocating</STRONG> capability.</P><P>SAP Business Technology Platform (BTP) provides the architectural separation utilities have needed for years:</P><UL><LI><STRONG>S/4HANA Core</STRONG> → Stable, standardized, upgrade-friendly</LI><LI><STRONG>SAP BTP</STRONG> → Custom logic, workflows, APIs, event-driven processes</LI></UL><P>This separation transforms the ERP into a <STRONG>system of record</STRONG>, while BTP becomes the <STRONG>system of innovation</STRONG>.</P><P><STRONG>Key benefits:</STRONG></P><UL><LI>Faster delivery cycles</LI><LI>Reduced regression testing</LI><LI>Lower upgrade effort</LI><LI>Event-driven operations</LI><LI>Extensibility without core modification</LI></UL><H2 id="toc-hId-1003442308">3. A Practical Framework for Utility Clean Core Adoption</H2><P>SAP Community readers expect actionable, practitioner-focused guidance. 
Here is a structured approach utilities can follow.</P><P><STRONG>Step 1: Start With a Core Assessment</STRONG></P><P>Identify:</P><UL><LI>Business-critical customizations</LI><LI>Redundant or obsolete logic</LI><LI>Enhancements blocking upgrade cycles</LI><LI>High-change areas suitable for BTP</LI></UL><P>This reframes Clean Core as <STRONG>risk reduction</STRONG>, not cleanup.</P><P><STRONG>Step 2: Move High-Variability Logic to BTP First</STRONG></P><P>Ideal candidates include:</P><UL><LI>Rate and tariff changes</LI><LI>Regulatory reporting</LI><LI>Credit & collections workflows</LI><LI>Meter-to-cash orchestration</LI><LI>BPEM automation</LI></UL><P>These areas generate the most upgrade friction—and deliver the fastest BTP wins.</P><H2 id="toc-hId-806928803">Step 3: Use SAP Event Mesh to Break the Batch Mindset</H2><P>Utilities can begin shifting toward real-time operations by triggering events for:</P><UL><LI>Move-in / move-out</LI><LI>Billing determinants</LI><LI>Meter read exceptions</LI><LI>Payment events</LI><LI>Outage notifications</LI></UL><P>This enables a hybrid model: <STRONG>batch where necessary, real-time where possible.</STRONG></P><H2 id="toc-hId-610415298">Step 4: Establish Governance That Protects the Core</H2><UL><LI>No custom code in S/4 unless SAP mandates it</LI><LI>BTP-first evaluation for all new logic</LI><LI>Standard APIs and events as default patterns</LI><LI>Quarterly architecture reviews to prevent “core creep”</LI></UL><P>Governance is the long-term safeguard of Clean Core.</P><H2 id="toc-hId-413901793">Step 5: Treat BTP as a Business Platform</H2><P>Executives respond when BTP is positioned as:</P><UL><LI>A revenue protection tool</LI><LI>A regulatory accelerator</LI><LI>A customer experience enabler</LI><LI>A technical debt reduction mechanism</LI></UL><H2 id="toc-hId-217388288">Step 6: Build a 24‑Month Roadmap</H2><P>A practical roadmap includes:</P><UL><LI><STRONG>Phase 1:</STRONG> Core assessment + quick wins</LI><LI><STRONG>Phase 2:</STRONG> Event-driven architecture rollout</LI><LI><STRONG>Phase 3:</STRONG> Predictive and AI-driven use cases</LI><LI><STRONG>Phase 4:</STRONG> Full Clean Core enforcement + S/4 readiness</LI></UL><P>This provides clarity, sequencing, and measurable ROI.</P><H2 id="toc-hId-20874783">4. 
The Modern Utility Innovation Stack</H2><P>Your second diagram illustrates the future-state architecture:</P><UL><LI>A stable S/4HANA core</LI><LI>A flexible BTP layer</LI><LI>A connected utility ecosystem</LI></UL><P><STRONG>Diagram 2 — The Utility Innovation Stack </STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Atul_Joshi85_1-1770216537147.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368958iDEA788D84B292DEB/image-size/medium?v=v2&px=400" role="button" title="Atul_Joshi85_1-1770216537147.png" alt="Atul_Joshi85_1-1770216537147.png" /></span></P><P> </P><P><STRONG> </STRONG></P><P> </P><H1 id="toc-hId-465018642">Conclusion</H1><P>A Clean Core is not a limitation—it is a <STRONG>strategic advantage</STRONG>.</P><P>Utilities preparing for S/4HANA should shift the conversation from:</P><P><STRONG>“How do we migrate everything we built?”</STRONG> to <STRONG>“How do we protect the core and modernize the business at the same time?”</STRONG></P><P>SAP BTP provides the answer:</P><UL><LI>Keep the core stable</LI><LI>Move innovation to the agility layer</LI><LI>Adopt event-driven operations</LI><LI>Reduce technical debt</LI><LI>Build a future-ready architecture</LI></UL><P>Utilities that embrace this model will be better positioned to innovate, comply, and scale in the decade ahead.</P>2026-02-06T07:51:30.744000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/a-sap-btp-kyma-encryption-decryption-microservice-for-all-contexts-e-g-sap/ba-p/14326277A SAP BTP Kyma Encryption/Decryption Microservice for ALL Contexts (e.g. SAP Datasphere/BDC/ABAP)2026-02-11T16:12:01.099000+01:00stefan_geiselhart2https://community.sap.com/t5/user/viewprofilepage/user-id/200897<P>G’day!</P><P><STRONG>Warning</STRONG>: This is only for readers who are really interested in this subject <span class="lia-unicode-emoji" title=":smiling_face_with_heart_eyes:">😍</span> The article is quite comprehensive and has got some cross-article links inside, that also need to be considered. Therefore: A lot of reading and a lot of hands-on + thinking if you want to rebuild the thing.</P><P><U>Below is the structure of this article:</U></P><UL><LI><FONT size="3">1 Status Quo on En-/Decryption in BTP</FONT></LI><LI><FONT size="3">2 Motivation</FONT></LI><LI><FONT size="3">3 Fundamentals </FONT></LI><LI><FONT size="3">4 Architecture</FONT></LI><LI><FONT size="3">5 HDLFS Configuration</FONT></LI><LI><FONT size="3">6 Insights into Python</FONT></LI><LI><FONT size="3">7 Containerization & Deployment in Kyma</FONT></LI><LI><FONT size="3">8 Achievements</FONT></LI><LI><FONT size="3">9 More Scope</FONT></LI><LI><FONT size="3">10 References</FONT></LI></UL><P>Let's begin:</P><P><STRONG><FONT size="5">1 Status Quo on En-/Decryption in BTP</FONT></STRONG></P><P><FONT size="3">When it comes to En-/Decryption on SAP BTP, there are a couple of pitfalls to be considered. </FONT>The following roughly outlines the situation, platform and application-wise:</P><UL><LI>Individual applications (e.g. SAP Successfactors) got proprietary encryption/decryption mechanisms available. They are typically <STRONG>not reusable</STRONG>.</LI><LI><STRONG>SAP Integration Suite has got PGP En-/Decryptor nodes</STRONG> available. However, it is <STRONG><U>not</U> advisable to run</STRONG> large volume scenarios using this approach. 
It has been built for data sizes on the levels represented within message handling scenarios (I‘m not an expert therein, but I guess thats typically not in the area of GBs).</LI><LI>A policy/rule for data to remain encrypted until its landing into BTP: You face challenges to <STRONG>securely onboard large volume files</STRONG> containing sensitive data <STRONG>into BTP</STRONG>, because there is no out-of-the-box solution to keep them encrypted in motion and decrypt them at rest once available in BTP (and vice versa for an outbound flow).</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="The Challenge" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371589i1ED7B676DB32AA4F/image-size/large?v=v2&px=999" role="button" title="image.png" alt="The Challenge" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">The Challenge</span></span></P><P><STRONG><FONT size="5">2 Motivation</FONT></STRONG></P><P>Due to the above limitations, we were eager to find an appropriate solution approach that first of all satifies our specific scenario, but can also be augmented to other contexts. This is why the following motivation arose:</P><UL><LI>Especially En-/Decryption for larger files on BTP side seems to be a missing piece in the puzzle.</LI><LI>Our ambition is to encapsulate such kind of mechanism centrally as a reusable service. <STRONG>Multiple kinds of contexts can be served</STRONG>, exposing this service: Applications, data flows & transformations etc. – in our context the consumer will be SAP Datasphere.</LI><LI>There is another consequence of finally being able to load decrypted large data sets into BTP. To be more precise: To onboard this data into SAP Datasphere to further process and transform, the fact that Base64 encoded data must be handled in our context too, why not also expose or <STRONG>incorporate columnar Base64 En-/Decoding</STRONG> into the microservice?</LI><LI>The intended solution is <STRONG>custom-built</STRONG> and must be managed individually. However, the baseline that will be set <STRONG>can be reused and adapted</STRONG> to various customer needs and application contexts.</LI></UL><P><STRONG><FONT size="5">3 Fundamentals</FONT></STRONG> </P><P>The following represent some fundamentals and success criteria we‘ve established for our solution:</P><UL><LI>Use HANA Data Lake Files (<STRONG>HDLFS</STRONG>) as a file store for inbound/outbound persistency</LI><LI>Implement the service as <STRONG>containerized Python</STRONG> code logic</LI><LI>Service capabilities must be <STRONG>controllable and schedulable within Kyma</STRONG></LI><LI>Service can handle <STRONG>PGP encrypted</STRONG> files, but can also decrypt for outbound delivery</LI><LI>Service can decode <STRONG>Base64</STRONG> columns for inbound source files</LI><LI>Service <STRONG>manages and cleans up</STRONG> directories and file in-/output on HDLFS</LI><LI>Service can handle <STRONG>files with sizes in GB range</STRONG></LI><LI>Deploy service into a <STRONG>Kyma Cluster</STRONG> (Cloud Foundry? -> no, Memory/Sizing limitations)</LI><LI><STRONG>Security</STRONG>: Facilitate simple and secure handling of all required secrets and certificates</LI><LI>Service <STRONG>not exposed via HTTP endpoints</STRONG> (this is kind of a contradiction to a true microservice, however this requirement applies in our context. 
Under "More Scope" you will find guidelines/directions how to enable HTTP based microservices)</LI></UL><P><STRONG><FONT size="5">4 Architecture</FONT></STRONG></P><P>The following architectural components represent what we‘ve picked out for our solution:</P><UL><LI>Usage of HANA Data Lake Files (<STRONG>HDLFS</STRONG>) as a file store for <STRONG>inbound/outbound persistency</STRONG></LI><LI>Use <STRONG>BTP Kyma for service runtime</STRONG> (Kyma deployed in the 2nd smallest T-Shirt size)</LI><LI><STRONG>SAP Datasphere</STRONG> represents the <STRONG>persistency layer</STRONG> (DSP as a consumer is exchangeable but in our case relevant for the business scenario)</LI><LI>The other more general blue box "other BTP Solutions" indicates, that the service implementation on Kyma isn't necessary limited to SAP Datasphere only, but should be rather considered as something agnostic. This of course heavily depends how it is built and finally implemented.</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="BTP Solution Architecture Kyma Microservice" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371516i29EF2B1C74333CEB/image-size/large?v=v2&px=999" role="button" title="image.png" alt="BTP Solution Architecture Kyma Microservice" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">BTP Solution Architecture Kyma Microservice</span></span></P><P><STRONG><FONT size="5">5 HDLFS Configuration</FONT></STRONG></P><P><FONT size="4">I strongly recommend to read the <A href="https://developers.sap.com/tutorials/data-lake-file-containers-hdlfscli.html" target="_self" rel="noopener noreferrer">tutorial by Jason Hinsperger</A> to familiarize yourself with the HDLFS REST API dependencies and the essential steps to spin up the instance. You must follow all the steps described therein.</FONT></P><P><FONT size="4">Its important to note that the <STRONG>generated client key and client certificate</STRONG> is what you are about to use from python level. Moreover <STRONG>you should take note of the REST API endpoint</STRONG>. This is the endpoint you run all your requests against (you find it in HANA Cloud Central -> Instances -> HDLFS Instance -> "Files REST API Endpoint".).</FONT></P><P><FONT size="4">To familiarize yourself in general with the HDLFS REST API documentation, you can use this <A href="https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/2025_3_QRC/en-US/index.html#tag/WebHDFS" target="_self" rel="noopener noreferrer">link</A>.</FONT></P><P><STRONG><FONT size="5"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="HDLFS REST API Endpoint" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371618i735BEFA7C02FCA36/image-size/large?v=v2&px=999" role="button" title="image.png" alt="HDLFS REST API Endpoint" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">HDLFS REST API Endpoint</span></span></FONT></STRONG></P><P><FONT size="4">One essential detail I don't want to hide, is the IP whitelisting of the Kyma environment. In order to do that, navigate to the start page of your cluster dashboard. There is a section called "Cluster Overview --> Metadata". Copy the (typically three) IP listed there: "NAT Gateway IP Addresses".</FONT></P><P><FONT size="4">Go to your HANA Cloud Central and navigate to your HDLFS instance. 
Click "Manage Configuration" and go to the section "Connections": If you have enabled the setting to only allow specific IP addresses, likewise maintain the ones of the Kyma Cluster that you previously copied:</FONT></P><P><FONT size="4"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="HDLFS Configuration Connections" style="width: 706px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371638i636055B083947539/image-dimensions/706x435?v=v2" width="706" height="435" role="button" title="image.png" alt="HDLFS Configuration Connections" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">HDLFS Configuration Connections</span></span></FONT></P><P><STRONG><FONT size="5">6 Insights into Python</FONT></STRONG></P><P><FONT size="3">The python coding part is split into several modules, which are all described in a detailed way further below:</FONT></P><OL><LI><FONT size="3">Inbound processing</FONT></LI><LI><FONT size="3">Outbound processing</FONT></LI><LI><FONT size="3">Common functions</FONT></LI><LI><FONT size="3">Crypto function for encryption/decryption</FONT></LI></OL><P><FONT size="3"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Python Modules Sketch" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371586i15DBFF5CC94401F9/image-size/large?v=v2&px=999" role="button" title="image.png" alt="Python Modules Sketch" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Python Modules Sketch</span></span></FONT></P><P><FONT size="4"><STRONG>Inbound Module (1)</STRONG></FONT></P><P>The inbound part scans a given datalake path for <CODE>.gpg</CODE> files, then decrypts each file locally in parallel using a configurable worker count. After decryption, it samples the CSV (semicolon-delimited), detects which columns contain base64-encoded data, and runs a transformation step to produce a cleaned output file. The transformed result is uploaded to a decrypted output path, and the original encrypted input is moved to an archive folder. It logs success or errors with timing details on file level and exits with a failure code if any file fails.</P><P>This code snippet describes the parallel processing of files:</P><pre class="lia-code-sample language-python"><code> with ThreadPoolExecutor(max_workers=MAX_PARALLEL_JOBS) as exe:
    futs = {exe.submit(processFile, f): f for f in files}
    for fut in as_completed(futs):
        f = futs[fut]
        try:
            res = fut.result()
            ok += 1
            print(f"[OK] {res['input']} -> {res['output']} sec={res['sec']:.2f} base64_cols={res['base64_cols']}")
        except Exception as ex:
            err.append((f, str(ex)))
print(f"[ERR] {f}: {ex}")</code></pre><P>Sampel code of processing an individual file:</P><pre class="lia-code-sample language-python"><code>def processFile(file_name: str):
    t0 = timer()
    out_name = normalize_output_name(file_name)
    with tempfile.TemporaryDirectory() as td:
        enc_path = os.path.join(td, file_name)
        dec_path = os.path.join(td, out_name + ".decrypted.csv")
        out_path = os.path.join(td, out_name)
        download_to_file(f"{PATH_IN_DATALAKE}/{file_name}", enc_path)
        decrypt_symmetric(enc_path, dec_path)
        header, sample = sample_csv(dec_path, delimiter=";")
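        # detect which of the semicolon-delimited CSV columns contain Base64-encoded data before transforming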
        base64_cols = detect_base64_columns(out_name, header, sample)
        transform_csv(dec_path, out_path, base64_cols, delimiter=";")
        mkdirs(PATH_OUT_DATALAKE)
        upload_file_chunked(f"{PATH_OUT_DATALAKE}/{out_name}", out_path)
        mkdirs(PATH_ARCHIVE)
        rename(f"{PATH_IN_DATALAKE}/{file_name}", f"{PATH_ARCHIVE}/{file_name}")
return {"input": file_name, "output": out_name, "base64_cols": base64_cols, "sec": timer() - t0}</code></pre><P><FONT size="4"><STRONG>Outbound module (2)</STRONG></FONT></P><P><FONT size="3">It scans the HDLFS outbound folder for .csv files and subdirectories containing Parquet parts, then processes them in parallel using a configurable worker count. CSV files are downloaded locally, then encrypted and uploaded back again to an encrypted outbound path. The originals are moved to an archive folder. Parquet file based directories on HDLFS (when enabled) are downloaded, consolidated into a single CSV using PyArrow, encrypted and uploaded similarly. As a last step, they are marked with an _SUCCESS file to prevent reprocessing and moved to the archive. </FONT></P><P><FONT size="4"><STRONG>Crypto Module (3)</STRONG></FONT></P><P>It provides helper functions for symmetric file encryption and decryption using the <CODE>gpg</CODE> command-line tool with an AES-256 cipher. It requires a passphrase supplied via the <CODE>PASSPHRASE</CODE> environment variable (from a Kyma Secret). Both <CODE>encrypt_symmetric</CODE> and <CODE>decrypt_symmetric</CODE> execute GPG in batch mode through <CODE>subprocess</CODE>, capturing stdout/stderr and validating the return code.</P><P>The <CODE>decrypt_symmetric</CODE> function is implemented as follows:</P><P> </P><pre class="lia-code-sample language-python"><code>def decrypt_symmetric(enc_path: str, dec_path: str):
    _require_passphrase()
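    # gpg runs non-interactively (batch mode, loopback pinentry); the passphrase is taken from the PASSPHRASE env variable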
    cmd = [
        "gpg", "--batch", "--yes",
        "--pinentry-mode", "loopback",
        "--passphrase", PASSPHRASE,
        "--output", dec_path,
        "--decrypt", enc_path
    ]
    r = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if r.returncode != 0:
raise RuntimeError(f"GPG decrypt failed: {r.stderr.decode('utf-8', 'replace')}")</code></pre><P><FONT size="4"><STRONG>Common Functions Module (4)</STRONG></FONT></P><DIV class=""><DIV class=""><DIV class=""><DIV class=""><DIV class=""><DIV class=""><DIV class=""><P>All major control variables are built using environment variables (defined as secrets in Kyma) for endpoint/container, TLS certificates, verification behavior, timeouts, retries, and chunk size.</P><P>In one function it builds a reusable SSL context. API requests are wrapped with exponential-backoff retries for transient HTTP statuses (e.g., 429/5xx) and network errors. It provides HDLFS related helpers to list files/directories, check existence, create directories, rename paths, download remote content to disk in chunks, and upload files.</P><P>The below part builds the SSL context to connect to HDLFS via its REST API:</P></DIV></DIV></DIV></DIV></DIV></DIV></DIV><pre class="lia-code-sample language-python"><code>def buildSSLContext() -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    if os.path.exists(CRT_PATH) and os.path.exists(KEY_PATH):
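        # the client certificate/key generated during the HDLFS setup (section 5) enable mutual TLS against the Files REST API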
        ctx.load_cert_chain(certfile=CRT_PATH, keyfile=KEY_PATH)
    if TLS_VERIFY:
        if CA_CERT_PATH:
            ctx.load_verify_locations(CA_CERT_PATH)
        else:
            ctx.load_default_certs()
        ctx.verify_mode = ssl.CERT_REQUIRED
        ctx.check_hostname = True
    else:
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx</code></pre><P><FONT size="4">This part establishes a connection to the HDLFS REST API endpoint as defined in the environment variable FILES_REST_API:</FONT></P><pre class="lia-code-sample language-python"><code>def _conn():
    if not FILES_REST_API:
        raise RuntimeError("FILES_REST_API is empty (set env FILES_REST_API).")
    return http.client.HTTPSConnection(FILES_REST_API, timeout=HTTP_TIMEOUT, context=_SSL_CTX)</code></pre><P><FONT size="4">The below function represents the upload (HDLFS REST API PUT) of a file in streaming mode to an HDLFS directory:</FONT></P><pre class="lia-code-sample language-python"><code>def _put_to_location(location: str):
    url = urlparse(location)
    if url.scheme and url.netloc:
        host = url.netloc
        path = (url.path or "/") + (("?" + url.query) if url.query else "")
        conn = http.client.HTTPSConnection(host, timeout=HTTP_TIMEOUT, context=_SSL_CTX)
    else:
        path = location
        conn = _conn()
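    # note: file_path, size and remote_path_no_leading_slash come from the surrounding chunked-upload helper (excerpt); CHUNK_SIZE is set via env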
    try:
        conn.putrequest("PUT", path)
        conn.putheader("x-sap-filecontainer", CONTAINER)
        conn.putheader("Content-Type", "application/octet-stream")
        conn.putheader("Content-Length", str(size))
        conn.putheader("Connection", "close")
        conn.endheaders()
        with open(file_path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                conn.send(chunk)
        r = conn.getresponse()
        data = r.read()
        status = r.status
        r.close()
        if status in _RETRYABLE_STATUSES:
            raise RetryableHttpError(f"PUT retryable: {status} {data[:200]!r}")
        if status not in (200, 201):
            raise RuntimeError(f"PUT failed {remote_path_no_leading_slash}: {status} {data[:200]!r}")
    finally:
        conn.close()</code></pre><P><STRONG><FONT size="5">7 Containerization & Deployment in Kyma</FONT></STRONG></P><P>I won't describe in detail how the creation of a Docker container works step by step. I just provide some essentials + hints to guarantee that you succeed and don't stumble upon the same issues I had faced. The following prerequisites must be met in order to create a runnable Docker container based on your Dockerfile + Python code:</P><UL><LI>Local installation of Docker (Docker Desktop)</LI><LI>Docker registry (e.g. GitHub Docker Registry) – this is from where the Pod pulls the Docker image</LI><LI>kubectl CLI</LI><LI>IDE such as Visual Studio Code</LI></UL><P>For an overall procedure, including the part in Kyma, you can refer to this <A href="https://community.sap.com/t5/technology-blog-posts-by-members/develop-and-deploy-python-rest-api-with-kubernetes-docker-in-sap-btp-kyma/ba-p/13533279" target="_blank" rel="noopener nofollow noreferrer">blog entry on SCN</A>. I strongly recommend that you try to rebuild the example the author walks through first, BEFORE you tackle your actual project.</P><P>Recommendations out of my personal learnings:</P><UL><LI>Try not to create any Kyma artifacts via the UI, but rather define those within descriptor yaml files of the corresponding kinds (e.g. kind: Deployment or Service or APIRule).</LI><LI><STRONG>!!!</STRONG> A minor but very important thing to consider when running the docker build command: run it as follows:</LI></UL><pre class="lia-code-sample language-bash"><code>docker build . --tag ghcr.io/<github_user>/<git_repo_name>:latest --platform linux/amd64</code></pre><UL><LI>The deployment.yaml in the blog specified above is based on three Kyma/K8S artifacts that are created: <STRONG>Deployment, Service + APIRule</STRONG>. Which kind of artifacts you actually require strongly depends on your Python implementation and how your service/logic runs. In the event of a purely job-schedule-based execution of your processing logic, <STRONG>you don't need a Service or APIRule</STRONG> artifact, of course. Instead, you'd only need a <STRONG>deployment of kind: CronJob</STRONG>. This will spin up all dependent artifacts implicitly. In a CronJob based deployment you will have to specify the following details in the yaml file: scheduling details (time, frequency, time zone), required volumes, container spec including secrets to pull the image, container level: mounts & env variables, resource allocation.</LI><LI>A template for a CronJob based deployment can look like this:</LI></UL><pre class="lia-code-sample language-yaml"><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: <job_name>
  namespace: <kyma_namespace>
spec:
  suspend: false
  schedule: "* * * * *"
  timeZone: "Europe/Berlin"
  jobTemplate:
    spec:
      backoffLimit: 3
      template:
        spec:
          restartPolicy: Never
          imagePullSecrets:
            - name: ghcr-pull-secret
          volumes:
            - name: gnupg-home
              emptyDir: {}
          initContainers:
            - name: init-gnupg
              image: busybox:1.36
              command: ["sh","-c","mkdir -p /tmp/gnupg && chmod 700 /tmp/gnupg"]
              volumeMounts:
                - name: gnupg-home
                  mountPath: /tmp/gnupg
              resources:
                requests:
                  cpu: "10m"
                  memory: "32Mi"
                limits:
                  cpu: "50m"
                  memory: "64Mi"
          containers:
            - name: inbound
              image: <container_registry_path>
              imagePullPolicy: Always
              command: [<custom_os_level_command_if_required>]
              volumeMounts:
                - name: gnupg-home
                  mountPath: /tmp/gnupg
              env:
                - name: KEY_PATH
                  value: "/keys/keyfile.key"
                - name: PASSPHRASE
                  valueFrom:
                    secretKeyRef:
                      name: kyma-secret
                      key: PASSPHRASE
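              # the GPG passphrase is injected from the Kyma Secret referenced above (see the Crypto Module)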
              resources:
                requests:
                  cpu: "100m"
                  memory: "1Gi"
                limits:
                  cpu: "200m"
memory: "2Gi"</code></pre><P><STRONG><FONT size="5">8 Achievements</FONT></STRONG></P><P>The following functionality and service details were delivered:</P><UL><LI>The Python code was improved multiple times: Handling approx. 30 files in one go with a total of 10 GB in encrypted state could be accelerated to < 30 minutes of processing time</LI><LI>Cron Jobs in the Kyma cluster can be used to schedule individual services, e.g. inbound processing of files that reside in a specific folder on HDLFS</LI><LI>SAP Datasphere writes local tables via Replication Flows back into HDLFS, on which the outbound service performs encryption and marks the files to „collectible“</LI></UL><P><STRONG><FONT size="5">9 More Scope</FONT></STRONG></P><P><FONT size="4">...up to come soon. I will cover aspects how to further mature the microservice/python logic and moreover exemplify on HTTP based encryption/decryption service endpoints, exposed on the Kyma Cluster.</FONT></P><P><FONT size="4"><STRONG><FONT size="5">10 References</FONT></STRONG></FONT></P><P><A href="https://community.sap.com/t5/technology-blog-posts-by-members/develop-and-deploy-python-rest-api-with-kubernetes-docker-in-sap-btp-kyma/ba-p/13533279" target="_self">K8S/Kyma + Python SCN Blog</A></P><P><A href="http://https://developers.sap.com/tutorials/data-lake-file-containers-hdlfscli.html" target="_self" rel="nofollow noopener noreferrer">Getting Started with Data Lake Files HDLFSCLI</A></P><P><A href="http:// https://help.sap.com/doc/9d084a41830f46d6904fd4c23cd4bbfa/2025_3_QRC/en-US/index.html#tag/WebHDFS" target="_self" rel="nofollow noopener noreferrer">HDLFS REST API Guide</A></P><P><FONT size="4"><STRONG><FONT size="5">Thx...</FONT></STRONG></FONT></P><P><FONT size="4">...to my fellow colleagues at SAP who provides meaningful input and discussed about solution options. Furthermore I am especially grateful to our implementation partner who also supported, implemented and showed strong endurance in building up this scenario.</FONT></P><P><FONT size="4">I hope you enjoyed reading this article and could have gained some deeper insights into what we did. If you have comments or suggestions of any kind, don't hesitate to comment and start-off a discussions with me and hopefully other SMEs.</FONT></P>2026-02-11T16:12:01.099000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/sap-job-scheduling-service-free-tier-now-available-on-btp-live/ba-p/14327952🚀 SAP Job Scheduling Service: Free Tier Now Available on BTP Live2026-02-13T12:48:39.945000+01:00DenisDuevhttps://community.sap.com/t5/user/viewprofilepage/user-id/180332<DIV class=""><H1 id="toc-hId-1660595266"><span class="lia-unicode-emoji" title=":rocket:">🚀</span> Free Tier Now Available on BTP Live: Schedule Jobs at Zero Cost!</H1></DIV><P>Great news for the SAP BTP community! 
After the successful launch of our Free plan on BTP Trial, we're thrilled to announce that the<SPAN> </SPAN><STRONG>Free tier for SAP Job Scheduling Service is now available on BTP Live (Production)!</STRONG><SPAN> </SPAN><span class="lia-unicode-emoji" title=":party_popper:">🎉</span></P><P>Whether you're building personal projects, creating proof-of-concepts, or running small-scale production workloads, you can now leverage the power of automated job scheduling without incurring any costs!</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="hero-btp-live-free.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372297iFF1CC7A3C65123E1/image-size/large?v=v2&px=999" role="button" title="hero-btp-live-free.png" alt="hero-btp-live-free.png" /></span></P><DIV class=""><H2 id="toc-hId-1593164480"><span class="lia-unicode-emoji" title=":glowing_star:">🌟</span> What is the Free Tier?</H2></DIV><P>The Free tier is a<SPAN> </SPAN><STRONG>no-cost service plan</STRONG><SPAN> </SPAN>that provides you with production-grade job scheduling capabilities on SAP Business Technology Platform. It's designed to give developers, students, indie builders, and enterprises access to powerful scheduling features without financial barriers.</P><DIV class=""><H3 id="toc-hId-1525733694">Key Highlights</H3></DIV><P><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Zero Cost</STRONG><SPAN> </SPAN>- No charges, no hidden fees, no credit card required.<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Production Ready</STRONG><SPAN> </SPAN>- Available on both BTP Trial AND BTP Live.<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>OAuth 2.0 Security</STRONG><SPAN> </SPAN>- Modern authentication, same as Standard plan.<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Full Feature Set</STRONG><SPAN> </SPAN>- Access to all core scheduling capabilities.<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>15 Schedules</STRONG><SPAN> </SPAN>- Enough for real-world small to medium applications.<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Hourly Scheduling</STRONG><SPAN> </SPAN>- Perfect for regular tasks and automation.</P><DIV class=""><H2 id="toc-hId-1200137470"><span class="lia-unicode-emoji" title=":direct_hit:">🎯</span> Who is the Free Tier For?</H2></DIV><P>The Free tier is perfect for:</P><DIV class=""><H3 id="toc-hId-1132706684">🧪 Proof of Concepts</H3></DIV><UL><LI>Validate scheduling logic before committing to Standard plan</LI><LI>Demonstrate capabilities to stakeholders</LI><LI>Test integration with other BTP services</LI><LI>Build MVPs and demos</LI></UL><DIV class=""><H3 id="toc-hId-936193179"><span class="lia-unicode-emoji" title=":office_building:">🏢</span> Small Production Workloads</H3></DIV><UL><LI>Run lightweight scheduled tasks in production</LI><LI>Automate periodic maintenance jobs</LI><LI>Schedule regular reports and notifications</LI><LI>Power small-scale applications</LI></UL><DIV class=""><H3 id="toc-hId-739679674"><span class="lia-unicode-emoji" title=":briefcase:">💼</span> Enterprise Evaluation</H3></DIV><UL><LI>Test Job Scheduling Service in your landscape</LI><LI>Evaluate before enterprise-wide adoption</LI><LI>Train your team on 
production patterns</LI><LI>Prototype solutions for larger implementations</LI></UL><BLOCKQUOTE><P>Update to the Standard plan when you need more schedules, sub-hourly intervals, or SLA-backed support for business-critical workloads.</P></BLOCKQUOTE><DIV class=""><H2 id="toc-hId-414083450"><span class="lia-unicode-emoji" title=":vs_button:">🆚</span> How Does Free Compare to Standard?</H2></DIV><P>Here's a detailed comparison to help you choose the right plan:<BR /><BR /></P><TABLE><TBODY><TR><TD><H3 id="toc-hId-346652664"><STRONG>Feature</STRONG></H3></TD><TD><H3 id="toc-hId-150139159"><STRONG><CODE>Free</CODE> Plan <span class="lia-unicode-emoji" title=":free_button:">🆓</span></STRONG></H3></TD><TD><H3 id="toc-hId--121605715"><STRONG><CODE>Standard</CODE> Plan <span class="lia-unicode-emoji" title=":briefcase:">💼</span></STRONG></H3></TD></TR><TR><TD><STRONG>Cost</STRONG></TD><TD>$0/month</TD><TD>Pay-as-you-go pricing</TD></TR><TR><TD><STRONG>Availability</STRONG></TD><TD>Trial & Live</TD><TD>Live</TD></TR><TR><TD><STRONG>Number of Schedules</STRONG></TD><TD>15</TD><TD>Unlimited</TD></TR><TR><TD><STRONG>Minimum Interval</STRONG></TD><TD>1 hour <span class="lia-unicode-emoji" title=":alarm_clock:">⏰</span></TD><TD>5 minutes <span class="lia-unicode-emoji" title=":high_voltage:">⚡</span></TD></TR><TR><TD><STRONG>Authentication</STRONG></TD><TD>OAuth 2.0 <span class="lia-unicode-emoji" title=":locked:">🔒</span></TD><TD>OAuth 2.0 <span class="lia-unicode-emoji" title=":locked:">🔒</span></TD></TR><TR><TD><STRONG>Multitenancy</STRONG></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR><TR><TD><STRONG>REST API</STRONG></TD><TD>Full access <span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD>Full access <span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR><TR><TD><STRONG>Alert Notifications</STRONG></TD><TD>Unlimited <span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD>Unlimited <span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR><TR><TD><STRONG>CF Tasks</STRONG></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR><TR><TD><STRONG>Cloud ALM</STRONG></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR><TR><TD><STRONG>Support</STRONG></TD><TD>No, only Community <span class="lia-unicode-emoji" title=":busts_in_silhouette:">👥</span></TD><TD>SLA-backed <span class="lia-unicode-emoji" title=":shield:">🛡</span>️</TD></TR><TR><TD><STRONG>Best For</STRONG></TD><TD>Learning, small workloads, PoCs</TD><TD>Production, enterprise, high-frequency jobs</TD></TR></TBODY></TABLE><BLOCKQUOTE><P><span class="lia-unicode-emoji" title=":light_bulb:">💡</span><SPAN> </SPAN><STRONG>Tip:</STRONG><SPAN> </SPAN>For production workloads requiring SLA-backed support, consider the<SPAN> </SPAN><CODE>Standard</CODE><SPAN> </SPAN>plan.</P></BLOCKQUOTE><DIV class=""><H3 id="toc-hId--318119220">When to Choose Free</H3></DIV><UL><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> You need<SPAN> </SPAN><STRONG>15 or fewer</STRONG><SPAN> </SPAN>scheduled jobs</LI><LI><span class="lia-unicode-emoji" 
title=":white_heavy_check_mark:">✅</span> Hourly or less frequent scheduling is sufficient</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> You're comfortable with community support</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> You want<SPAN> </SPAN><STRONG>zero cost</STRONG><SPAN> </SPAN>scheduling</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> You're building PoCs, demos, or learning projects</LI></UL><DIV class=""><H3 id="toc-hId--514632725">When to Update to Standard</H3></DIV><UL><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> You need<SPAN> </SPAN><STRONG>more than 15</STRONG><SPAN> </SPAN>schedules</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> You require<SPAN> </SPAN><STRONG>sub-hourly</STRONG><SPAN> </SPAN>scheduling (every 5 minutes, etc.)</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> You need<SPAN> </SPAN><STRONG>SLA-backed support</STRONG></LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> You're running<SPAN> </SPAN><STRONG>business-critical</STRONG><SPAN> </SPAN>production workloads</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> You require<SPAN> </SPAN><STRONG>faster response times</STRONG><SPAN> </SPAN>from SAP support</LI></UL><BLOCKQUOTE><P><span class="lia-unicode-emoji" title=":light_bulb:">💡</span><SPAN> </SPAN><STRONG>Good News:</STRONG><SPAN> </SPAN>Updating from Free to Standard is seamless! Your jobs, schedules, and configurations are preserved. (continue reading for detailed instructions)</P></BLOCKQUOTE><P>That's it for the overview! Scroll down for a quick reference guide and step-by-step instructions to get started with the Free tier on BTP Live.</P><HR /><DIV class=""><H2 id="toc-hId--417743223"><span class="lia-unicode-emoji" title=":rocket:">🚀</span> How to create instance of Free Tier on BTP Live</H2></DIV><DIV class=""><H3 id="toc-hId--907659735">Using the BTP Cockpit</H3></DIV><DIV class=""><H4 id="toc-hId--1397576247">Step 1: Access BTP Cockpit</H4></DIV><OL><LI>Log in to your<SPAN> </SPAN><A href="https://cockpit.btp.cloud.sap/" target="_blank" rel="nofollow noopener noreferrer">SAP BTP Cockpit</A></LI><LI>Navigate to your<SPAN> </SPAN><STRONG>Global Account</STRONG><SPAN> </SPAN>→<SPAN> </SPAN><STRONG>Subaccount</STRONG></LI><LI>Make sure you're in the<SPAN> </SPAN><STRONG>Live</STRONG><SPAN> </SPAN>(production) environment</LI></OL><DIV class=""><H4 id="toc-hId--1594089752">Step 2: Find Job Scheduling Service</H4></DIV><OL><LI>In your subaccount, go to<SPAN> </SPAN><STRONG>Service Marketplace</STRONG></LI><LI>Search for "<STRONG>Job Scheduling Service</STRONG>"</LI><LI>Click on the service to see available plans</LI></OL><DIV class=""><H4 id="toc-hId--1790603257">Step 3: Create Free Plan Instance</H4></DIV><P><STRONG>Using the BTP Cockpit:</STRONG></P><OL><LI>Click<SPAN> </SPAN><STRONG>Create</STRONG><SPAN> </SPAN>button</LI><LI>Select<SPAN> </SPAN><STRONG>Service</STRONG>: Job Scheduling Service</LI><LI>Select<SPAN> </SPAN><STRONG>Plan</STRONG>:<SPAN> </SPAN><CODE>free</CODE><SPAN> </SPAN><span class="lia-unicode-emoji" title=":star:">⭐</span></LI><LI>Enter an<SPAN> </SPAN><STRONG>Instance Name</STRONG>: e.g.,<SPAN> </SPAN><CODE>my-free-scheduler</CODE></LI><LI>Optionally configure parameters (usually not needed)</LI><LI>Click<SPAN> </SPAN><STRONG>Create</STRONG></LI></OL><P><span 
class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="create-free-instance.gif" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372298iB63067D6CAA5C927/image-size/large?v=v2&px=999" role="button" title="create-free-instance.gif" alt="create-free-instance.gif" /></span></P><DIV class=""><H3 id="toc-hId--1693713755">Using the Cloud Foundry CLI</H3></DIV><DIV class=""><PRE><SPAN class=""># Log in to your CF space</SPAN>
cf login -a <SPAN class=""><</SPAN>api-endpoint<SPAN class="">></SPAN>
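# (Optional) target the org and space where the instance should live
# (YOUR_ORG / YOUR_SPACE are placeholders - replace with your own values)
cf target -o YOUR_ORG -s YOUR_SPACE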
<SPAN class=""># Create free tier instance</SPAN>
cf create-service jobscheduler free my-free-scheduler
<SPAN class=""># Verify creation</SPAN>
cf service my-free-scheduler</PRE></DIV><P>Note: you can check what is available in your region with<SPAN> </SPAN><CODE>cf marketplace -e jobscheduler</CODE></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="create-instance-cli.gif" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372296iE786DD5E70146172/image-size/large?v=v2&px=999" role="button" title="create-instance-cli.gif" alt="create-instance-cli.gif" /></span></P><DIV class=""><H2 id="toc-hId--1596824253"><span class="lia-unicode-emoji" title=":counterclockwise_arrows_button:">🔄</span> Updating from Free to Standard</H2></DIV><P>Need more power? Updating is seamless (see below)!<SPAN> </SPAN><STRONG>Everything</STRONG><SPAN> </SPAN>is preserved!</P><DIV class=""><H3 id="toc-hId--1918557074">Using the BTP Cockpit</H3></DIV><OL><LI>Navigate to your Job Scheduling Service instance in the BTP Cockpit</LI><LI>Click on the instance to view details</LI><LI>Click the<SPAN> </SPAN><STRONG>Actions</STRONG><SPAN> </SPAN>dropdown and select<SPAN> </SPAN><STRONG>Update</STRONG></LI><LI>Choose the<SPAN> </SPAN><CODE>standard</CODE><SPAN> </SPAN>plan and press<SPAN> </SPAN><STRONG>Update Instance</STRONG></LI><LI>Wait for the update to complete (usually a few seconds)</LI><LI>Restage your app to pick up changes</LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="update-instance.gif" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372299i83C40A6C9D86BB4B/image-size/large?v=v2&px=999" role="button" title="update-instance.gif" alt="update-instance.gif" /></span></P><DIV class=""><H3 id="toc-hId--2115070579">Using the Cloud Foundry CLI</H3></DIV><DIV class=""><PRE><SPAN class=""># Update service plan</SPAN>
cf update-service my-free-scheduler -p standard
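# (Optional) confirm the plan change completed before restaging
cf service my-free-scheduler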
<SPAN class=""># Restage your app to pick up changes</SPAN></PRE><DIV class=""><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="update-instance-cli.gif" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372295i4C19E2D47CA1598E/image-size/large?v=v2&px=999" role="button" title="update-instance-cli.gif" alt="update-instance-cli.gif" /></span></DIV></DIV><BLOCKQUOTE><P><span class="lia-unicode-emoji" title=":warning:">⚠️</span><SPAN> </SPAN><STRONG>Note:</STRONG><SPAN> </SPAN>Updating to Standard will start incurring charges based on usage. When you update your plan, all executions performed on the<SPAN> </SPAN><STRONG>same day</STRONG><SPAN> </SPAN>are charged toward the<SPAN> </SPAN><CODE>standard</CODE><SPAN> </SPAN>plan.</P></BLOCKQUOTE><DIV class=""><H2 id="toc-hId--2018181077"><span class="lia-unicode-emoji" title=":books:">📚</span> Additional Resources</H2></DIV><DIV class=""><H3 id="toc-hId-1786869707">Documentation</H3></DIV><UL><LI><A href="https://help.sap.com/docs/JOB_SCHEDULER" target="_blank" rel="noopener noreferrer">SAP Job Scheduling Service Help</A></LI><LI><A href="https://help.sap.com/docs/job-scheduling/sap-job-scheduling-service/service-plans" target="_blank" rel="noopener noreferrer">Free Plan Service Plans</A></LI><LI><A href="https://help.sap.com/docs/job-scheduling/sap-job-scheduling-service/rest-api" target="_blank" rel="noopener noreferrer">REST API Reference</A></LI><LI><A href="https://help.sap.com/docs/job-scheduling/sap-job-scheduling-service/secure-access" target="_blank" rel="noopener noreferrer">Secure Access Configuration</A></LI></UL><DIV class=""><H3 id="toc-hId-1590356202">Discovery Center</H3></DIV><UL><LI><A href="https://discovery-center.cloud.sap/index.html#/serviceCatalog/job-scheduling-service?region=all&tab=service_plan" target="_blank" rel="nofollow noopener noreferrer">Job Scheduling Service in Discovery Center</A></LI></UL><DIV class=""><H3 id="toc-hId-1393842697">Community</H3></DIV><UL><LI><A href="https://community.sap.com/t5/technology-blogs-by-sap/job-scheduler-in-sap-business-technology-platform-overview-of-blog-posts/ba-p/13510707" target="_blank">All Job Scheduler Blog Posts</A></LI><LI><A href="https://community.sap.com/t5/technology-q-a/bd-p/technology-questions" target="_blank">SAP Community Q&A</A></LI></UL><DIV class=""><H3 id="toc-hId-1197329192">Related Blog Posts</H3></DIV><UL><LI><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/ho-ho-ho-a-christmas-present-from-sap-job-scheduling-service-free-plan-on/ba-p/14292434" target="_blank">Christmas Present: Free Plan on BTP Trial</A></LI><LI><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/lite-plan-deprecation-time-to-upgrade-to-free-sap-job-scheduling-service/ba-p/14314717" target="_blank">Lite Plan Deprecation Notice</A></LI></UL><DIV class=""><H2 id="toc-hId-1294218694"><span class="lia-unicode-emoji" title=":party_popper:">🎉</span> Start Scheduling Today!</H2></DIV><P>The Free tier makes SAP Job Scheduling Service accessible to everyone. 
Whether you're a student learning the ropes, a developer building a side project, or an enterprise evaluating the service, you can now schedule jobs at<SPAN> </SPAN><STRONG>zero cost</STRONG><SPAN> </SPAN>on BTP Live!</P><DIV class=""><H2 id="toc-hId-1097705189"><span class="lia-unicode-emoji" title=":balance_scale:">⚖️</span> Terms and Conditions</H2></DIV><P>Free tier service plans are subject to additional terms:</P><UL><LI><span class="lia-unicode-emoji" title=":warning:">⚠️</span><SPAN> </SPAN><STRONG>No SLA</STRONG>: Service Level Agreements do not apply to free plans</LI><LI><span class="lia-unicode-emoji" title=":warning:">⚠️</span><SPAN> </SPAN><STRONG>Community Support Only</STRONG>: No official SAP support tickets</LI><LI><span class="lia-unicode-emoji" title=":warning:">⚠️</span><SPAN> </SPAN><STRONG>Subject to Terms</STRONG>: Usage governed by<SPAN> </SPAN><A href="https://www.sap.com/about/trust-center/agreements/cloud/cloud-services.html" target="_blank" rel="noopener noreferrer">Business Technology Platform Supplemental Terms and Conditions</A></LI><LI><span class="lia-unicode-emoji" title=":warning:">⚠️</span><SPAN> </SPAN><STRONG>Fair Use</STRONG>: SAP reserves the right to enforce fair use policies</LI><LI><span class="lia-unicode-emoji" title=":warning:">⚠️</span><SPAN> </SPAN><STRONG>Service Changes</STRONG>: SAP may modify or discontinue free tier with notice</LI></UL><BLOCKQUOTE><P><span class="lia-unicode-emoji" title=":light_bulb:">💡</span> For production workloads requiring SLAs and official support, update to the <STRONG>Standard</STRONG> plan.</P></BLOCKQUOTE><HR /><P><STRONG>Happy Scheduling! <span class="lia-unicode-emoji" title=":party_popper:">🎉</span></STRONG></P><P><EM>The SAP Job Scheduling Service Team</EM></P><HR /><P><STRONG>Have questions?</STRONG><SPAN> </SPAN>Drop a comment below or visit the<SPAN> </SPAN><A href="https://community.sap.com/" target="_blank">SAP Community</A><SPAN> </SPAN>to start a discussion!</P>2026-02-13T12:48:39.945000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/from-cloud-foundry-to-kyma-on-sap-btp-5-essential-migration-patterns/ba-p/14328724From Cloud Foundry to Kyma on SAP BTP: 5 Essential Migration Patterns2026-02-17T08:40:01.820000+01:00neilaspinhttps://community.sap.com/t5/user/viewprofilepage/user-id/167493<H1 id="toc-hId-1660623044">From Cloud Foundry to Kyma on SAP BTP: 5 Essential Migration Patterns</H1><H2 id="toc-hId-1593192258">Introduction</H2><P>This blog covers five core patterns you'll need when moving from CF to Kyma, with working examples you can test on a BTP Trial Kyma cluster. Each pattern is presented as a direct comparison: here's how you did it in Cloud Foundry, here's how you do it in Kyma. The code examples are complete and tested—you can copy them into your own Kyma environment and see them work. 
By the end, you'll have a practical understanding of the migration path and reference code you can adapt for your own applications.</P><P><STRONG>What you'll learn:</STRONG></P><OL><LI>Basic deployment with APIRules (Istio-based routing)</LI><LI>Service bindings using Kubernetes-native patterns</LI><LI>Pre-runtime configuration with init containers</LI><LI>Credential Store integration for secrets management</LI><LI>Destination Service integration from Kyma</LI></OL><H2 id="toc-hId-1396678753">Prerequisites</H2><P>Before starting, you'll need:</P><UL><LI>SAP BTP Trial account with Kyma environment enabled</LI><LI><CODE>kubectl</CODE> CLI installed locally</LI><LI>Basic familiarity with Kubernetes concepts</LI></UL><H2 id="toc-hId-1200165248">Setup: Download Your Kubeconfig</H2><P>First, let's get connected to your Kyma cluster:</P><PRE><CODE># Create directory structure
mkdir -p ~/kyma-tutorials
cd ~/kyma-tutorials
# Download kubeconfig from BTP Cockpit
# Navigate to: Subaccount → Kyma Environment → Download Kubeconfig
# Save to ~/.kube/config-kyma-trial
# Set kubeconfig
export KUBECONFIG=~/.kube/config-kyma-trial
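# (Optional) confirm kubectl picked up the Kyma context from that kubeconfig
kubectl config current-context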
# Verify connection
kubectl cluster-info</CODE></PRE><P><STRONG>Critical first step:</STRONG> Enable Istio sidecar injection on your namespace. Kyma requires this for external routing to work:</P><PRE><CODE># Create namespace
kubectl create namespace demo-app
# Enable Istio injection
kubectl label namespace demo-app istio-injection=enabled
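# (Optional) confirm the label is present before deploying workloads
kubectl get namespace demo-app --show-labels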
# Set as default
kubectl config set-context --current --namespace=demo-app</CODE></PRE><H2 id="toc-hId-1003651743">Pattern 1: Deployment and Routing with APIRules</H2><P><STRONG>CF Pattern:</STRONG></P><PRE><CODE>cf push myapp</CODE></PRE><P><STRONG>Kyma Pattern:</STRONG> In Kyma, you need three resources: Deployment, Service, and APIRule.</P><PRE><CODE>apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kyma
  namespace: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-kyma
  template:
    metadata:
      labels:
        app: hello-kyma
    spec:
      containers:
        - name: hello
          image: hashicorp/http-echo:latest
          args:
            - "-text=Hello from Kyma!"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: hello-kyma-service
  namespace: demo-app
spec:
  selector:
    app: hello-kyma
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5678
---
apiVersion: gateway.kyma-project.io/v2alpha1
kind: APIRule
metadata:
  name: hello-kyma-api
  namespace: demo-app
spec:
  gateway: kyma-system/kyma-gateway
  hosts:
    - hello-kyma.<YOUR_CLUSTER_DOMAIN>
  service:
    name: hello-kyma-service
    port: 80
  rules:
    - path: /*
      methods: ["GET"]
      noAuth: true</CODE></PRE><P>Deploy it:</P><PRE><CODE>kubectl apply -f deployment.yaml
# Wait for pod with Istio sidecar
kubectl get pods -n demo-app
# You should see 2/2 containers (app + istio-proxy)
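# (Optional) check the APIRule was reconciled before calling the external URL
kubectl get apirule hello-kyma-api -n demo-app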
# Test your app
curl https://hello-kyma.<YOUR_CLUSTER_DOMAIN></CODE></PRE><P><STRONG>Key differences:</STRONG></P><UL><LI>CF Router → Istio + APIRule</LI><LI>CF routes → APIRule hosts</LI><LI>Automatic SSL in both, but APIRule uses <CODE>noAuth: true</CODE> vs App Router patterns</LI></UL><HR /><H2 id="toc-hId-807138238">Pattern 2: Service Bindings - From VCAP_SERVICES to Kubernetes Secrets</H2><P><STRONG>CF Pattern:</STRONG></P><PRE><CODE>cf create-service xsuaa application myxsuaa
cf bind-service myapp myxsuaa
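# Restage so the running app picks up the new binding
cf restage myapp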
# Credentials appear in VCAP_SERVICES environment variable</CODE></PRE><P><STRONG>Kyma Pattern:</STRONG> In Kyma, service bindings create Kubernetes Secrets.</P><PRE><CODE>apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: kyma-xsuaa
  namespace: demo-app
spec:
  serviceOfferingName: xsuaa
  servicePlanName: application
  parameters:
    xsappname: kyma-demo-xsuaa
    tenant-mode: dedicated
    scopes:
      - name: "$XSAPPNAME.Read"
        description: "Read permission"
    role-templates:
      - name: Reader
        scope-references:
          - "$XSAPPNAME.Read"
---
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: kyma-xsuaa-binding
  namespace: demo-app
spec:
  serviceInstanceName: kyma-xsuaa
  secretName: kyma-xsuaa-secret</CODE></PRE><P>Deploy it:</P><PRE><CODE>kubectl apply -f xsuaa-instance.yaml
# Wait for ready (takes 1-2 minutes)
kubectl get serviceinstance kyma-xsuaa -n demo-app -w</CODE></PRE><P>Now use the credentials in your app - you have two options:</P><P><STRONG>Option 1: Environment Variables</STRONG></P><PRE><CODE>env:
  - name: XSUAA_CLIENTID
    valueFrom:
      secretKeyRef:
        name: kyma-xsuaa-secret
        key: clientid
  - name: XSUAA_CLIENTSECRET
    valueFrom:
      secretKeyRef:
        name: kyma-xsuaa-secret
        key: clientsecret</CODE></PRE><P><STRONG>Option 2: Volume Mounts</STRONG></P><PRE><CODE>volumeMounts:
  - name: xsuaa-volume
    mountPath: /etc/secrets/xsuaa
    readOnly: true
volumes:
  - name: xsuaa-volume
    secret:
      secretName: kyma-xsuaa-secret</CODE></PRE><P>Then read credentials from files:</P><PRE><CODE>const fs = require('fs');
const clientId = fs.readFileSync('/etc/secrets/xsuaa/clientid', 'utf8');
const clientSecret = fs.readFileSync('/etc/secrets/xsuaa/clientsecret', 'utf8');</CODE></PRE><P><STRONG>Key differences:</STRONG></P><TABLE><TBODY><TR><TD><STRONG>Cloud Foundry</STRONG></TD><TD><STRONG>Kyma</STRONG></TD></TR><TR><TD><CODE>cf bind-service</CODE></TD><TD><CODE>ServiceBinding</CODE> resource</TD></TR><TR><TD><CODE>VCAP_SERVICES</CODE> JSON</TD><TD>Kubernetes Secret</TD></TR><TR><TD>Auto-injected as env var</TD><TD>Mount as volume OR env vars</TD></TR><TR><TD>App parses JSON</TD><TD>App reads individual keys</TD></TR></TBODY></TABLE><HR /><H2 id="toc-hId-610624733">Pattern 3: Pre-Runtime Configuration - From .profile to Init Containers</H2><P><STRONG>CF Pattern:</STRONG> Create a <CODE>.profile</CODE> script in your app directory:</P><PRE><CODE>#!/bin/bash
echo "Decrypting secrets..."
# Runs before app starts, in same container</CODE></PRE><P><STRONG>Kyma Pattern:</STRONG> Use Kubernetes Init Containers that run before your main app:</P><PRE><CODE>apiVersion: apps/v1
kind: Deployment
metadata:
  name: init-demo
  namespace: demo-app
spec:
  template:
    spec:
      # Init container runs FIRST
      initContainers:
        - name: decrypt-config
          image: busybox:latest
          command:
            - sh
            - -c
            - |
              echo "Init container running..."
              # Decrypt secrets, download certs, etc.
              cat /encrypted/config.txt | base64 -d > /decrypted/config.txt
              echo "Runtime info: $(date)" >> /decrypted/runtime-info.txt
              echo "Init complete!"
          volumeMounts:
            - name: encrypted-volume
              mountPath: /encrypted
            - name: decrypted-volume
              mountPath: /decrypted
      # Main app runs AFTER init completes
      containers:
        - name: app
          image: myapp:latest
          volumeMounts:
            - name: decrypted-volume
              mountPath: /decrypted
              readOnly: true
      volumes:
        - name: encrypted-volume
          configMap:
            name: encrypted-config
        - name: decrypted-volume
          emptyDir: {}</CODE></PRE><P><STRONG>Key differences:</STRONG></P><TABLE><TBODY><TR><TD><STRONG>CF .profile</STRONG></TD><TD><STRONG>Kyma Init Containers</STRONG></TD></TR><TR><TD>Shell script only</TD><TD>Any container image</TD></TR><TR><TD>Same container</TD><TD>Separate container</TD></TR><TR><TD>Sequential only</TD><TD>Multiple init containers (chained)</TD></TR><TR><TD>No resource limits</TD><TD>CPU/memory limits per init</TD></TR></TBODY></TABLE><P><STRONG>Real-world use cases:</STRONG></P><UL><LI>Decrypt Credential Store secrets</LI><LI>Download certificates from external CA</LI><LI>Run database migrations</LI><LI>Generate dynamic configuration</LI><LI>Wait for dependencies to be ready</LI></UL><HR /><H2 id="toc-hId-414111228">Pattern 4: Credential Store Integration</H2><P>In CF, you might fetch credentials in your <CODE>.profile</CODE> script. In Kyma, use an init container to fetch from Credential Store and write to a shared volume.</P><P>First, create the Credential Store service:</P><PRE><CODE>apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: credstore
  namespace: demo-app
spec:
  serviceOfferingName: credstore
  servicePlanName: trial
---
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: credstore-binding
  namespace: demo-app
spec:
  serviceInstanceName: credstore
  secretName: credstore-secret</CODE></PRE><P>Then fetch credentials in an init container:</P><PRE><CODE>spec:
  initContainers:
    - name: fetch-credentials
      image: curlimages/curl:latest
      command:
        - sh
        - -c
        - |
          # Get OAuth token
          OAUTH_URL="$(cat /credstore/url | sed 's|/api.*||')/oauth/token"
          TOKEN=$(curl -s -X POST "$OAUTH_URL" \
            -d "grant_type=client_credentials" \
            -d "client_id=$(cat /credstore/username)" \
            -d "client_secret=$(cat /credstore/password)" \
            | jq -r '.access_token')
          # Fetch credential
          CRED_URL=$(cat /credstore/url)
          curl -s "$CRED_URL/password?name=database-password" \
            -H "Authorization: Bearer $TOKEN" \
            | jq -r '.value' > /secrets/db-password
      volumeMounts:
        - name: credstore-creds
          mountPath: /credstore
          readOnly: true
        - name: runtime-secrets
          mountPath: /secrets
  containers:
    - name: app
      image: myapp:latest
      volumeMounts:
        - name: runtime-secrets
          mountPath: /secrets
          readOnly: true
  volumes:
    - name: credstore-creds
      secret:
        secretName: credstore-secret
    - name: runtime-secrets
      emptyDir: {}</CODE></PRE><P><STRONG>Why this matters:</STRONG></P><UL><LI>Credentials NOT stored in Kubernetes Secrets (more secure)</LI><LI>Credentials fetched at runtime (can rotate without redeploying)</LI><LI>Main app doesn't need Credential Store SDK</LI><LI>Init container separates credential management from app logic</LI></UL><HR /><H2 id="toc-hId-217597723">Pattern 5: Destination Service Integration</H2><P><STRONG>CF Pattern:</STRONG> In CF, you use the App Router with <CODE>@sap/approuter</CODE> package, which handles Destination Service integration automatically.</P><P><STRONG>Kyma Pattern:</STRONG> In Kyma, you need to call the Destination Service API directly from your application.</P><P>First, create the destination in BTP Cockpit:</P><OL><LI>Go to Connectivity → Destinations</LI><LI>Create destination:<UL><LI>Name: <CODE>backend-api</CODE></LI><LI>URL: <CODE><A href="https://api.example.com" target="_blank" rel="noopener nofollow noreferrer">https://api.example.com</A></CODE></LI><LI>Authentication: <CODE>NoAuthentication</CODE></LI><LI>Additional Property: <CODE>forwardAuthToken = true</CODE></LI></UL></LI></OL><P>Then create the Destination service instance:</P><PRE><CODE>apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: destination
  namespace: demo-app
spec:
  serviceOfferingName: destination
  servicePlanName: lite
---
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: destination-binding
  namespace: demo-app
spec:
  serviceInstanceName: destination
  secretName: destination-secret</CODE></PRE><P>Use it in your Node.js app:</P><PRE><CODE>const express = require('express');
const axios = require('axios');
const fs = require('fs');

const app = express();

// Read Destination Service credentials from mounted secret
const destCreds = {
  uri: fs.readFileSync('/etc/secrets/destination/uri', 'utf8').trim(),
  clientid: fs.readFileSync('/etc/secrets/destination/clientid', 'utf8').trim(),
  clientsecret: fs.readFileSync('/etc/secrets/destination/clientsecret', 'utf8').trim(),
  url: fs.readFileSync('/etc/secrets/destination/url', 'utf8').trim()
};

app.get('/call-backend', async (req, res) => {
  try {
    // 1. Get OAuth token for Destination Service
    const tokenResponse = await axios.post(
      `${destCreds.url}/oauth/token`,
      new URLSearchParams({
        grant_type: 'client_credentials',
        client_id: destCreds.clientid,
        client_secret: destCreds.clientsecret
      }),
      { headers: { 'Content-Type': 'application/x-www-form-urlencoded' } }
    );
    const token = tokenResponse.data.access_token;

    // 2. Get destination configuration
    const destResponse = await axios.get(
      `${destCreds.uri}/destination-configuration/v1/destinations/backend-api`,
      { headers: { Authorization: `Bearer ${token}` } }
    );
    const backendUrl = destResponse.data.destinationConfiguration.URL;

    // 3. Call backend via destination
    const backendResponse = await axios.get(`${backendUrl}/posts/1`);
    res.json({
      message: 'Success!',
      data: backendResponse.data
    });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

app.listen(8080);</CODE></PRE><P>Deploy with the secret mounted:</P><PRE><CODE>apiVersion: apps/v1
kind: Deployment
metadata:
  name: destination-demo
spec:
  template:
    spec:
      containers:
        - name: app
          image: node:18-alpine
          volumeMounts:
            - name: destination-creds
              mountPath: /etc/secrets/destination
              readOnly: true
      volumes:
        - name: destination-creds
          secret:
            secretName: destination-secret
---
apiVersion: gateway.kyma-project.io/v2alpha1
kind: APIRule
metadata:
  name: destination-demo-api
spec:
  gateway: kyma-system/kyma-gateway
  hosts:
    - destination-demo.<YOUR_CLUSTER_DOMAIN>
  service:
    name: destination-demo-service
    port: 80
  rules:
    - path: /*
      methods: ["GET"]
noAuth: true</CODE></PRE><P><STRONG>Key differences:</STRONG></P><P>Cloud Foundry Kyma</P><TABLE><TBODY><TR><TD>App Router handles it</TD><TD>Manual API calls</TD></TR><TR><TD><CODE>@sap/approuter</CODE> package</TD><TD>Custom integration code</TD></TR><TR><TD>Auto-configured routes</TD><TD>Explicit OAuth + API calls</TD></TR></TBODY></TABLE><HR /><H2 id="toc-hId-21084218">Summary: CF vs Kyma Migration Checklist</H2><P>Feature Cloud Foundry Kyma Equivalent</P><TABLE><TBODY><TR><TD><STRONG>Deploy</STRONG></TD><TD><CODE>cf push</CODE></TD><TD><CODE>kubectl apply -f deployment.yaml</CODE></TD></TR><TR><TD><STRONG>Service Binding</STRONG></TD><TD><CODE>cf bind-service</CODE></TD><TD><CODE>ServiceBinding</CODE> resource</TD></TR><TR><TD><STRONG>Credentials</STRONG></TD><TD><CODE>VCAP_SERVICES</CODE> JSON env var</TD><TD>Kubernetes Secret (volume/env)</TD></TR><TR><TD><STRONG>Pre-runtime</STRONG></TD><TD><CODE>.profile</CODE> script</TD><TD>Init Containers</TD></TR><TR><TD><STRONG>Routing</STRONG></TD><TD>CF Router + routes</TD><TD>Istio + APIRule</TD></TR><TR><TD><STRONG>Authentication</STRONG></TD><TD>App Router</TD><TD>APIRule <CODE>accessStrategies</CODE></TD></TR><TR><TD><STRONG>Namespace isolation</STRONG></TD><TD>CF Spaces (shared network)</TD><TD>K8s Namespaces (network policies)</TD></TR><TR><TD><STRONG>Scaling</STRONG></TD><TD><CODE>cf scale</CODE></TD><TD><CODE>kubectl scale</CODE> or HPA</TD></TR></TBODY></TABLE><H2 id="toc-hId-171825070">Key Takeaways</H2><OL><LI><P><STRONG>Istio is mandatory</STRONG>: Always enable <CODE>istio-injection=enabled</CODE> on namespaces before deploying apps that need external access.</P></LI><LI><P><STRONG>Secrets are files, not JSON</STRONG>: In Kyma, service credentials are individual files in a mounted volume, not a single JSON object.</P></LI><LI><P><STRONG>Init containers are powerful</STRONG>: They're not just a <CODE>.profile</CODE> replacement—they can use any container image, have independent resource limits, and can be chained.</P></LI><LI><P><STRONG>Manual integration required</STRONG>: Unlike CF's App Router, Kyma requires you to integrate with BTP services (like Destination Service) directly via API calls.</P></LI><LI><P><STRONG>Kubernetes-native</STRONG>: Kyma uses standard Kubernetes patterns. If you know Kubernetes, you know Kyma. If you don't, learning Kyma teaches you portable Kubernetes skills.</P></LI></OL><H2 id="toc-hId--24688435">Conclusion</H2><P>The patterns in this blog give you a foundation to migrate existing CF apps to Kyma. Start with simple apps, master these five patterns, then tackle more complex workloads.</P><P> </P>2026-02-17T08:40:01.820000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/sap-job-scheduling-service-we-redesigned-the-dashboard-and-added-analytics/ba-p/14334944SAP Job Scheduling Service: We Redesigned the Dashboard (And Added Analytics)2026-02-24T08:11:17.768000+01:00DenisDuevhttps://community.sap.com/t5/user/viewprofilepage/user-id/180332<DIV class=""><H1 id="toc-hId-1661429385">We Redesigned the Dashboard (And Added Analytics)</H1></DIV><P><EM>Fewer clicks. More insights. Still in preview — we need your feedback.</EM></P><HR /><P>Hi folks <span class="lia-unicode-emoji" title=":waving_hand:">👋</span></P><P>You may have already seen it, but we put some love into our Job Scheduling Dashboard.</P><P>It's now more intuitive - we've eliminated some pages so you can navigate with fewer clicks - saving you precious time. 
And we've standardized the look and feel, so you can easily find what you need.</P><P>But the bigger change? We're experimenting with analytics. <span class="lia-unicode-emoji" title=":bar_chart:">📊</span></P><P>Many of you have told us you use the dashboard for monitoring your jobs - but honestly, it doesn't always do the job well. One example we kept hearing: "How do I find if some execution failed in a schedule?" Good question. We didn't have a good answer. So we built one. <span class="lia-unicode-emoji" title=":hammer_and_wrench:">🛠</span>️</P><P>We want to give you more insights into your jobs, so you can easily identify issues and optimize your workflows. But this is just the beginning - we have many more ideas in the pipeline, and we can't wait to share them with you.</P><P>These features are in preview, so we'd love your feedback. Some of them are limited to certain regions or depend on the number of jobs in your instance. Let us know what works and what doesn't.</P><HR /><DIV class=""><H2 id="toc-hId-1593998599">The Overview Page</H2></DIV><P>What's new here:</P><UL><LI>Configuration Page is gone - it's now a simple edit right in this page (one less click, you're welcome)</LI><LI>Two statistics cards - one for Job Statistics and one for Schedule Statistics</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="overview.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/376002iCC093FB9B7C9F4A4/image-size/large?v=v2&px=999" role="button" title="overview.png" alt="overview.png" /></span></P><P> </P><HR /><DIV class=""><H2 id="toc-hId-1397485094">Jobs</H2></DIV><P>In the Jobs overview we decided to show the filter by default. You have a filter??? Yes we have - but it was kind of hidden from you. We know. Sorry about that. <span class="lia-unicode-emoji" title=":grinning_face_with_sweat:">😅</span> Apart from the filter, only the layout has changed.</P><P>In the Job details page we merged the overview with schedules - everything in one place now.</P><P>But here's the interesting part - we've added analytics cards. Schedule statistics. Best practices warnings. Like when you have a POST job without a body defined in the schedule - because trust me, that leads to issues on your side. We've seen it happen too many times.<SPAN> </SPAN><span class="lia-unicode-emoji" title=":warning:">⚠️</span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="job-details.gif" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/376005iA79007912F10D7F1/image-size/large?v=v2&px=999" role="button" title="job-details.gif" alt="job-details.gif" /></span></P><HR /><DIV class=""><H2 id="toc-hId-1200971589">Schedule Overview</H2></DIV><P>Same idea as Jobs - we merged the overview and details pages into one.</P><P>Here we thought you might be interested in the success rate of your schedules, so we added a card for that. 
Simple question, simple answer: are your schedules healthy or not?</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="schedule-overview.gif" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/376004i73C1989E49C068DB/image-size/large?v=v2&px=999" role="button" title="schedule-overview.gif" alt="schedule-overview.gif" /></span></P><HR /><DIV class=""><H2 id="toc-hId-1004458084">Help Us Make It Better <span class="lia-unicode-emoji" title=":speech_balloon:">💬</span></H2></DIV><P>This is preview for a reason - your feedback shapes what comes next.</P><P>Now it's your turn. Try the new features and tell us what you think. What's working? What's broken? What's missing? We're listening. <span class="lia-unicode-emoji" title=":ear:">👂</span></P><UL><LI><STRONG>Comment here</STRONG><SPAN> </SPAN>- post a comment or send a message</LI><LI><STRONG>Influence Portal</STRONG><SPAN> </SPAN>-<SPAN> </SPAN><A href="https://influence.sap.com/sap/ino/#campaign/2277" target="_blank" rel="noopener noreferrer">BTP Foundation</A><SPAN> </SPAN>- help shape the roadmap</LI><LI><STRONG>Documentation feedback</STRONG><SPAN> </SPAN>-<SPAN> </SPAN><A href="https://help.sap.com/docs/job-scheduling" target="_blank" rel="noopener noreferrer">Help Portal</A><SPAN> </SPAN>or<SPAN> </SPAN><A href="https://github.com/SAP-docs/btp-job-scheduling-service" target="_blank" rel="nofollow noopener noreferrer">GitHub</A></LI><LI><STRONG>Reach out directly</STRONG><SPAN> </SPAN>- I'm Denis, the PM for Job Scheduling service. Find me on<SPAN> </SPAN><A href="https://www.linkedin.com/in/denis-duev/" target="_blank" rel="nofollow noopener noreferrer">LinkedIn</A><SPAN> </SPAN>- always happy to chat.</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="feedback-needed.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/376003i536086F4930061C7/image-size/large?v=v2&px=999" role="button" title="feedback-needed.png" alt="feedback-needed.png" /></span></P><P> </P><P> </P>2026-02-24T08:11:17.768000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/deploying-a-python-microservice-on-sap-btp-kyma-runtime/ba-p/14338774Deploying a Python Microservice on SAP BTP Kyma Runtime2026-03-01T10:07:07.875000+01:00neilaspinhttps://community.sap.com/t5/user/viewprofilepage/user-id/167493<P>This tutorial deploys a Python Flask application to Kyma runtime. Most tutorials lean towards Node.js or Java, so I wanted to document the Python path properly — including the gotchas that aren't covered in the official docs.</P><HR /><H2 id="toc-hId-1790629439">Prerequisites</H2><UL><LI>SAP BTP account with Kyma enabled (trial is fine to follow along)</LI><LI><CODE>kubectl</CODE> installed and configured with your Kyma kubeconfig</LI><LI>Docker Desktop installed and running</LI><LI>Docker Hub account</LI><LI>Python 3.11+</LI></UL><HR /><H2 id="toc-hId-1594115934">Project Structure</H2><PRE><CODE>my-btp-python/
├── app.py
├── requirements.txt
├── Dockerfile
└── k8s/
    ├── deployment.yaml
    ├── service.yaml
    └── apirule.yaml</CODE></PRE><HR /><H2 id="toc-hId-1397602429">The Flask App</H2><P>A simple two-endpoint app — a health check and a data endpoint:</P><P><STRONG><CODE>app.py</CODE></STRONG></P><PRE><CODE>from flask import Flask, jsonify
import os

app = Flask(__name__)

@app.route("/health", methods=["GET"])
def health():
    return jsonify({"status": "ok"}), 200

@app.route("/api/data", methods=["GET"])
def get_data():
    return jsonify({
        "message": "Hello from Kyma!",
        "environment": os.getenv("ENVIRONMENT", "dev")
    }), 200

if __name__ == "__main__":
    port = int(os.getenv("PORT", 8080))
    app.run(host="0.0.0.0", port=port)</CODE></PRE><P><STRONG><CODE>requirements.txt</CODE></STRONG></P><PRE><CODE>flask==3.0.3</CODE></PRE><HR /><H2 id="toc-hId-1201088924">Dockerfile</H2><PRE><CODE>FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
EXPOSE 8080
ENV FLASK_APP=app.py
CMD ["flask", "run", "--host=0.0.0.0", "--port=8080"]</CODE></PRE><H3 id="toc-hId-1133658138">Apple Silicon Users</H3><P>Kyma clusters run on <CODE>amd64</CODE> (x86) nodes. If you're on Apple Silicon your Mac builds <CODE>arm64</CODE> images by default, which won't run on the cluster. Build explicitly for the right platform:</P><PRE><CODE>docker buildx build --platform linux/amd64 -t yourusername/btp-python-app:v1 --push .</CODE></PRE><HR /><H2 id="toc-hId-808061914">Kubernetes Manifests</H2><P><STRONG><CODE>k8s/deployment.yaml</CODE></STRONG></P><PRE><CODE>apiVersion: apps/v1
kind: Deployment
metadata:
  name: btp-python-app
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: btp-python-app
  template:
    metadata:
      labels:
        app: btp-python-app
    spec:
      containers:
        - name: btp-python-app
          image: yourusername/btp-python-app:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: ENVIRONMENT
              value: "kyma-trial"
          resources:
            requests:
              memory: "64Mi"
              cpu: "100m"
            limits:
              memory: "128Mi"
              cpu: "200m"</CODE></PRE><P>Set <CODE>imagePullPolicy: Always</CODE> during development — without it Kubernetes will use a cached image on the node and ignore newly pushed changes.</P><P><STRONG><CODE>k8s/service.yaml</CODE></STRONG></P><PRE><CODE>apiVersion: v1
kind: Service
metadata:
  name: btp-python-app
  namespace: default
spec:
  selector:
    app: btp-python-app
  ports:
    - port: 80
      targetPort: 8080</CODE></PRE><HR /><H2 id="toc-hId-611548409">APIRule v2</H2><P>APIRule v1beta1 was removed from Kyma in mid-2025. If you're following older tutorials you'll hit errors. The current version is <CODE>v2</CODE> and the syntax changed significantly.</P><P>First, get your cluster domain:</P><PRE><CODE>kubectl get gateway -n kyma-system kyma-gateway -o jsonpath='{.spec.servers[0].hosts[0]}'</CODE></PRE><P>This returns something like <CODE>*.c-abc123.kyma.ondemand.com</CODE>. Strip the <CODE>*.</CODE> prefix and use that as your base domain.</P><P><STRONG><CODE>k8s/apirule.yaml</CODE></STRONG></P><PRE><CODE>apiVersion: gateway.kyma-project.io/v2
kind: APIRule
metadata:
  name: btp-python-app
  namespace: default
spec:
  gateway: kyma-system/kyma-gateway
  hosts:
    - btp-python-app.c-abc123.kyma.ondemand.com
  service:
    name: btp-python-app
    port: 80
  rules:
    - path: /*
      methods: ["GET", "POST"]
noAuth: true</CODE></PRE><P>Key changes from v1beta1:</P><UL><LI><CODE>host</CODE> is now <CODE>hosts</CODE> (a list)</LI><LI>Requires a fully qualified domain name</LI><LI><CODE>accessStrategies</CODE> with <CODE>handler: noop</CODE> is replaced by <CODE>noAuth: true</CODE></LI><LI><CODE>/.*</CODE> path syntax is no longer valid — use <CODE>/*</CODE></LI></UL><HR /><H2 id="toc-hId-415034904">Istio Sidecar Injection</H2><P>APIRule v2 is Istio-based. Without sidecar injection enabled on your namespace, the APIRule will show an <CODE>Error</CODE> status. Enable it before deploying:</P><PRE><CODE>kubectl label namespace default istio-injection=enabled</CODE></PRE><HR /><H2 id="toc-hId-218521399">Deploy</H2><PRE><CODE>kubectl apply -f k8s/</CODE></PRE><P>Watch the pods — you're looking for <CODE>2/2</CODE> in the READY column (your container plus the Istio sidecar):</P><PRE><CODE>kubectl get pods -w</CODE></PRE><P>Verify the APIRule is ready:</P><PRE><CODE>kubectl get apirule btp-python-app</CODE></PRE><HR /><H2 id="toc-hId-22007894">Test</H2><PRE><CODE>curl https://btp-python-app.c-abc123.kyma.ondemand.com/health
# {"status":"ok"}
curl https://btp-python-app.c-abc123.kyma.ondemand.com/api/data
# {"environment":"kyma-trial","message":"Hello from Kyma!"}</CODE></PRE><HR /><H2 id="toc-hId-172748746">Troubleshooting</H2><P><STRONG>404 from istio-envoy</STRONG> — Check the VirtualService that the APIRule generates has the correct hostname:</P><PRE><CODE>kubectl get virtualservice</CODE></PRE><P>If it shows an incorrect domain, delete and recreate the APIRule:</P><PRE><CODE>kubectl delete apirule btp-python-app
kubectl apply -f k8s/apirule.yaml</CODE></PRE><P><STRONG>Test the app inside the pod</STRONG> before blaming the networking:</P><PRE><CODE>kubectl exec -it $(kubectl get pod -l app=btp-python-app -o jsonpath='{.items[0].metadata.name}') -c btp-python-app -- python -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8080/health').read())"</CODE></PRE><P>If this returns <CODE>{"status":"ok"}</CODE> then Flask is running fine and the problem is in the routing layer.</P><P><STRONG>Trial cluster expiry</STRONG> — Trial Kyma clusters expire after 14 days. When they go down, <CODE>kubectl</CODE> commands will throw DNS errors. Recreate the cluster from BTP Cockpit and re-download the kubeconfig. Keep your manifests in source control so you can redeploy quickly.</P><HR /><H2 id="toc-hId--23764759">Next Steps</H2><P>From here you can extend this with:</P><UL><LI>XSUAA JWT validation on the APIRule for authentication</LI><LI>A <CODE>ServiceInstance</CODE> and <CODE>ServiceBinding</CODE> for BTP services like Destination or Connectivity</LI><LI>Reading bound service credentials from environment variables injected via Kubernetes secrets</LI></UL><HR /><H2 id="toc-hId--220278264">Conclusion</H2><P>Getting Python running on Kyma is more involved than it might first appear, but once you understand the moving parts — container registry, Istio sidecar injection, APIRule v2 syntax, and architecture compatibility — it becomes fairly repeatable. The trial environment adds some extra friction with expiring clusters and kubeconfigs, but the underlying platform is solid.</P><P>The key is understanding that Kyma is a proper Kubernetes-native environment rather than a traditional PaaS. Once that clicks, the tooling makes sense — Istio out of the box, a clean service binding model, and automatic TLS via Let's Encrypt. 
There's a learning curve, but it's worth the investment.</P>2026-03-01T10:07:07.875000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/sap-job-scheduling-service-rest-api-now-on-business-accelerator-hub-explore/ba-p/14341463SAP Job Scheduling Service REST API Now on Business Accelerator Hub - Explore and Try It Out!2026-03-05T07:00:00.029000+01:00DenisDuevhttps://community.sap.com/t5/user/viewprofilepage/user-id/180332<DIV class=""><H1 id="toc-hId-1662258789"><span class="lia-unicode-emoji" title=":rocket:">🚀</span> SAP Job Scheduling Service REST API Now on Business Accelerator Hub - Explore and Try It Out!</H1></DIV><P><STRONG>Big news for SAP developers!</STRONG><SPAN> </SPAN>The SAP Job Scheduling Service REST API is now available on the<SPAN> </SPAN><A href="https://api.sap.com/" target="_blank" rel="noopener noreferrer">SAP Business Accelerator Hub</A><SPAN> </SPAN>(API Hub for short), making it easier than ever to discover, explore, and integrate job scheduling capabilities into your SAP BTP applications.</P><P>Whether you're building Cloud Foundry or Kyma applications, automating workflows, or managing background tasks, you can now find everything you need in one centralized location alongside other SAP APIs.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="hero.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379676i3A8B4E4582661446/image-size/large?v=v2&px=999" role="button" title="hero.png" alt="hero.png" /></span></P><DIV class=""><H2 id="toc-hId-1594828003"><span class="lia-unicode-emoji" title=":direct_hit:">🎯</span> What's New?</H2></DIV><P>As of<SPAN> </SPAN><STRONG>February 18, 2026</STRONG>, the SAP Job Scheduling Service REST API is live on Business Accelerator Hub<SPAN> </SPAN><A href="https://api.sap.com/" target="_blank" rel="noopener noreferrer">https://api.sap.com/</A></P><DIV class=""><H3 id="toc-hId-1527397217">Why This Matters</H3></DIV><P>Before this release, developers had to piece together API documentation from help portals, blog posts, and code examples. 
Now, everything is in one place with:</P><UL><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Centralized Discovery</STRONG><SPAN> </SPAN>- Find the Job Scheduling Service API alongside 1000+ other SAP APIs</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Interactive Testing</STRONG><SPAN> </SPAN>- Try API endpoints directly in your browser with the "Try It Out" feature</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>OpenAPI 3.0.3 Specification</STRONG><SPAN> </SPAN>- Standards-based documentation that works with any OpenAPI tool</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Code Generation</STRONG><SPAN> </SPAN>- Generate client libraries in your favorite programming language</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Live Documentation</STRONG><SPAN> </SPAN>- Always up-to-date API reference with request/response examples</LI></UL><HR /><DIV class=""><H2 id="toc-hId-1201800993"><span class="lia-unicode-emoji" title=":magnifying_glass_tilted_left:">🔍</span> What Can the Job Scheduling Service API Do?</H2></DIV><P>The Job Scheduling Service REST API provides complete programmatic control over job scheduling on SAP BTP:</P><UL><LI><span class="lia-unicode-emoji" title=":clipboard:">📋</span> Job Management</LI><LI><span class="lia-unicode-emoji" title=":one_o_clock:">🕐</span> Schedule Management</LI><LI><span class="lia-unicode-emoji" title=":bar_chart:">📊</span> Monitoring & Run Logs</LI><LI><span class="lia-unicode-emoji" title=":locked_with_key:">🔐</span> Authentication</LI></UL><HR /><DIV class=""><H2 id="toc-hId-1005287488"><span class="lia-unicode-emoji" title=":magnifying_glass_tilted_right:">🔎</span> How to Find the API on Business Accelerator Hub</H2></DIV><P>Let's walk through discovering the Job Scheduling Service API on BAH:</P><DIV class=""><H3 id="toc-hId-937856702">Option 1: Search for "Job Scheduling service"</H3></DIV><P>Search for "SAP Job Scheduling Service" directly in the search bar.</P><DIV class=""> </DIV><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="bah-search.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379680i9C4F8679A402F450/image-size/large?v=v2&px=999" role="button" title="bah-search.png" alt="bah-search.png" /></span></P><DIV class=""><H3 id="toc-hId-741343197">Option 2: Direct Link</H3></DIV><P>Directly go to<SPAN> </SPAN><A href="https://api.sap.com/api/sap-btpjss-admin-v1/overview" target="_blank" rel="noopener noreferrer">https://api.sap.com/api/sap-btpjss-admin-v1/overview</A></P><DIV class=""> </DIV><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="bah-overview.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379679i14B55CFCDFD6F61F/image-size/large?v=v2&px=999" role="button" title="bah-overview.png" alt="bah-overview.png" /></span></P><DIV class=""><H3 id="toc-hId-544829692">API Details Page</H3></DIV><P>You'll see:</P><UL><LI><STRONG>Overview</STRONG><SPAN> </SPAN>- High-level description of the API capabilities</LI><LI><STRONG>API Reference</STRONG><SPAN> </SPAN>- Complete list of all endpoints organized by category</LI><LI><STRONG>Schema View</STRONG><SPAN> </SPAN>- Detailed definitions of request and response 
objects</LI><LI><STRONG>Try It Out</STRONG><SPAN> </SPAN>- Interactive testing of API endpoints with real authentication</LI><LI><STRONG>Documents</STRONG><SPAN> </SPAN>- Additional documentation links</LI></UL><HR /><DIV class=""><H2 id="toc-hId-219233468"><span class="lia-unicode-emoji" title=":open_book:">📖</span> Exploring the API Documentation</H2></DIV><P>The API documentation on BAH is organized into clear sections:</P><DIV class=""><H3 id="toc-hId-151802682">API Endpoints Overview</H3></DIV><P>The Job Scheduling Service API groups endpoints into three main categories:</P><P><STRONG><span class="lia-unicode-emoji" title=":wrench:">🔧</span> Jobs</STRONG><SPAN> </SPAN>(<CODE>/scheduler/jobs</CODE>)</P><UL><LI><CODE>POST /jobs</CODE><SPAN> </SPAN>- Create a new job</LI><LI><CODE>GET /jobs</CODE><SPAN> </SPAN>- Retrieve all jobs</LI><LI><CODE>GET /jobs/{jobId}</CODE><SPAN> </SPAN>- Get details of a specific job</LI><LI><CODE>PATCH /jobs/{jobId}</CODE><SPAN> </SPAN>- Update an existing job</LI><LI><CODE>DELETE /jobs/{jobId}</CODE><SPAN> </SPAN>- Delete a job</LI></UL><P><STRONG><span class="lia-unicode-emoji" title=":calendar:">📅</span> Schedules</STRONG><SPAN> </SPAN>(<CODE>/scheduler/jobs/{jobId}/schedules</CODE>)</P><UL><LI><CODE>POST /jobs/{jobId}/schedules</CODE><SPAN> </SPAN>- Create a schedule for a job</LI><LI><CODE>GET /jobs/{jobId}/schedules</CODE><SPAN> </SPAN>- Get all schedules for a job</LI><LI><CODE>GET /jobs/{jobId}/schedules/{scheduleId}</CODE><SPAN> </SPAN>- Get a specific schedule</LI><LI><CODE>PATCH /jobs/{jobId}/schedules/{scheduleId}</CODE><SPAN> </SPAN>- Update a schedule</LI><LI><CODE>DELETE /jobs/{jobId}/schedules</CODE><SPAN> </SPAN>- Delete all schedules</LI><LI><CODE>DELETE /jobs/{jobId}/schedules/{scheduleId}</CODE><SPAN> </SPAN>- Delete a specific schedule</LI></UL><P><STRONG><span class="lia-unicode-emoji" title=":bar_chart:">📊</span> Run Logs</STRONG><SPAN> </SPAN>(<CODE>/scheduler/jobs/{jobId}/schedules/{scheduleId}/runs</CODE>)</P><UL><LI><CODE>GET /jobs/{jobId}/schedules/{scheduleId}/runs</CODE><SPAN> </SPAN>- Get execution logs</LI><LI><CODE>GET /jobs/{jobId}/schedules/{scheduleId}/runs/{runId}</CODE><SPAN> </SPAN>- Get details of a specific run</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="bah-api-reference.gif" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379677i60A637433C80B11C/image-size/large?v=v2&px=999" role="button" title="bah-api-reference.gif" alt="bah-api-reference.gif" /></span></P><DIV class=""><H3 id="toc-hId--119942192">Schema Definitions</H3></DIV><P>The API documentation includes detailed schema definitions for all request and response objects:</P><UL><LI><STRONG>CreateJobRequest</STRONG><SPAN> </SPAN>- Job creation with schedules</LI><LI><STRONG>JobDetails</STRONG><SPAN> </SPAN>- Complete job information</LI><LI><STRONG>ScheduleDetails</STRONG><SPAN> </SPAN>- Schedule configuration (cron, time, repeatInterval)</LI><LI><STRONG>RunLog</STRONG><SPAN> </SPAN>- Execution details, status, and HTTP responses</LI><LI><STRONG>Error responses</STRONG><SPAN> </SPAN>- Standard error format with messages and HTTP status codes</LI></UL><P>Each schema shows:</P><UL><LI>Property names and types</LI><LI>Required vs. 
optional fields</LI><LI>Default values</LI><LI>Validation constraints (min/max length, patterns, enums)</LI><LI>Example values</LI></UL><HR /><DIV class=""><H2 id="toc-hId--23052690"><span class="lia-unicode-emoji" title=":video_game:">🎮</span> Try It Out - Test the API Interactively</H2></DIV><P>One of the most powerful features of Business Accelerator Hub is the<SPAN> </SPAN><STRONG>"Try It Out"</STRONG><SPAN> </SPAN>functionality. Let's walk through testing a real API endpoint!</P><DIV class=""><H3 id="toc-hId--512969202">Prerequisites</H3></DIV><P>Before you can test the API, you'll need:</P><OL><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> A<SPAN> </SPAN><STRONG>Job Scheduling service instance</STRONG><SPAN> </SPAN>on SAP BTP (Trial or Live)</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> A<SPAN> </SPAN><STRONG>service key</STRONG><SPAN> </SPAN>with OAuth credentials</LI></OL><P>Don't have these yet? Check out our<SPAN> </SPAN><A href="https://community.sap.com/t5/technology-blogs-by-sap/job-scheduler-in-sap-business-technology-platform-overview-of-blog-posts/ba-p/13510707" target="_blank">first blog post</A><SPAN> </SPAN>in the series for setup instructions.</P><DIV class=""><H3 id="toc-hId--709482707">Getting Your Service Key</H3></DIV><OL><LI>Open the<SPAN> </SPAN><STRONG>SAP BTP Cockpit</STRONG></LI><LI>Navigate to your<SPAN> </SPAN><STRONG>subaccount</STRONG><SPAN> </SPAN>→<SPAN> </SPAN><STRONG>space</STRONG><SPAN> </SPAN>→<SPAN> </SPAN><STRONG>Service Instances</STRONG></LI><LI>Find your Job Scheduling service instance</LI><LI>Click<SPAN> </SPAN><STRONG>"Create Service Key"</STRONG><SPAN> </SPAN>(or use an existing one)</LI><LI>Open the service key and note the following values:<UL><LI><CODE>url</CODE><SPAN> </SPAN>- The base URL for the REST API - extract the Landscape Region from this (e.g.<SPAN> </SPAN><CODE>eu10</CODE><SPAN> </SPAN>from<SPAN> </SPAN><CODE><A href="https://jobscheduler-rest.cfapps.eu10.hana.ondemand.com" target="_blank" rel="noopener nofollow noreferrer">https://jobscheduler-rest.cfapps.eu10.hana.ondemand.com</A></CODE>)</LI><LI><CODE>uaa.identityzone</CODE><SPAN> </SPAN>- The zone where the Identity Authentication service is located (e.g.<SPAN> </SPAN><CODE>my-subdomain.authentication.eu10.hana.ondemand.com</CODE>)</LI><LI><CODE>uaa.clientid</CODE><SPAN> </SPAN>- Your OAuth client ID</LI><LI><CODE>uaa.clientsecret</CODE><SPAN> </SPAN>- Your OAuth client secret</LI></UL></LI></OL><P>Example service key structure:</P><DIV class=""><PRE>{
<SPAN class="">"url"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>https://jobscheduler-rest.cfapps.eu10.hana.ondemand.com<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"uaa"</SPAN>: {
<SPAN class="">"identityzone"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>my-subdomain<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"clientid"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>sb-clone-jobscheduler-service!b1234<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"clientsecret"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>your-secret-here<SPAN class="">"</SPAN></SPAN>
}
}</PRE><DIV class=""> </DIV></DIV><DIV class=""><H3 id="toc-hId--905996212">Step-by-Step: Testing GET /jobs</H3></DIV><P>Let's test the<SPAN> </SPAN><STRONG>GET /jobs</STRONG><SPAN> </SPAN>endpoint to retrieve all jobs in your Job Scheduling service instance.</P><DIV class=""><H4 id="toc-hId--1395912724">1. Configure Environment for Authentication</H4></DIV><P>Click the<SPAN> </SPAN><STRONG>"Select environment"</STRONG><SPAN> </SPAN>button:</P><DIV class=""> </DIV><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="bah-create-env.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379675iC564627E39ADD45B/image-size/large?v=v2&px=999" role="button" title="bah-create-env.png" alt="bah-create-env.png" /></span></P><P>In the environment dialog enter:</P><OL><LI><CODE>Display name</CODE><SPAN> </SPAN>- e.g. "My Trial instance"</LI><LI><CODE>Landscape Region</CODE><SPAN> </SPAN>- e.g ap21 (trial) - extracted from the<SPAN> </SPAN><CODE>url</CODE></LI><LI><CODE>Client ID</CODE><SPAN> </SPAN>- From your service key (<CODE>uaa.clientid</CODE>)</LI><LI><CODE>Client Secret</CODE><SPAN> </SPAN>- From your service key (<CODE>uaa.clientsecret</CODE>)</LI><LI><CODE>Cf-subaccount-domain</CODE><SPAN> </SPAN>- the domain of your BTP subaccount (e.g.<SPAN> </SPAN><CODE>dadb4adetrial</CODE>) from<SPAN> </SPAN><CODE>identityzone</CODE></LI><LI><CODE>Landscape Region</CODE><SPAN> </SPAN>- again - same as above (e.g.<SPAN> </SPAN><CODE>ap21</CODE>) - yes, we need that, and yes it is confusing</LI></OL><DIV class=""> </DIV><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="bah-env.png" style="width: 624px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379674i64A641C50DCEA800/image-size/large?v=v2&px=999" role="button" title="bah-env.png" alt="bah-env.png" /></span></P><P>If successful, you'll see a confirmation that the access token was obtained automatically.</P><DIV class=""><H4 id="toc-hId--1592426229">2. Find the GET /jobs Endpoint</H4></DIV><P>Navigate to the<SPAN> </SPAN><STRONG>Jobs</STRONG><SPAN> </SPAN>section and expand the<SPAN> </SPAN><STRONG>GET /jobs</STRONG><SPAN> </SPAN>endpoint and click<SPAN> </SPAN><STRONG>"Run"</STRONG>:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="bah-execute-get.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379678iC17BD4905EB24FF0/image-size/large?v=v2&px=999" role="button" title="bah-execute-get.png" alt="bah-execute-get.png" /></span></P><DIV class=""><H4 id="toc-hId--1788939734">4. Configure Parameters (Optional)</H4></DIV><P>The GET /jobs endpoint supports optional query parameters:</P><UL><LI><CODE>displaySchedules</CODE><SPAN> </SPAN>(boolean) - Include schedule details in response</LI><LI><CODE>page</CODE><SPAN> </SPAN>(integer) - Page number for pagination</LI><LI><CODE>pageSize</CODE><SPAN> </SPAN>(integer) - Number of jobs per page</LI></UL><P>For this example, we'll use the defaults (no parameters needed).</P><DIV class=""><H4 id="toc-hId--1985453239">6. View the Response</H4></DIV><P>You'll see the<SPAN> </SPAN><STRONG>HTTP response</STRONG><SPAN> </SPAN>directly in the browser:</P><P><STRONG>Response Code</STRONG>:<SPAN> </SPAN><CODE>200 OK</CODE></P><P><STRONG>Response Body</STRONG><SPAN> </SPAN>(example):</P><DIV class=""><PRE>{
<SPAN class="">"total"</SPAN>: <SPAN class="">2</SPAN>,
<SPAN class="">"results"</SPAN>: [
{
<SPAN class="">"jobId"</SPAN>: <SPAN class="">2523708</SPAN>,
<SPAN class="">"name"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>validateSalesOrder<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"description"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>Validates sales order requests<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"action"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>https://my-app.cfapps.eu10.hana.ondemand.com/orders/validate<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"active"</SPAN>: <SPAN class="">true</SPAN>,
<SPAN class="">"httpMethod"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>POST<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"jobType"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>HTTP_ENDPOINT<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"ansConfig"</SPAN>: { <SPAN class="">"onError"</SPAN>: <SPAN class="">true</SPAN>, <SPAN class="">"onSuccess"</SPAN>: <SPAN class="">false</SPAN> },
<SPAN class="">"calmConfig"</SPAN>: { <SPAN class="">"enabled"</SPAN>: <SPAN class="">false</SPAN> },
<SPAN class="">"createdAt"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>2026-02-15 10:30:00<SPAN class="">"</SPAN></SPAN>
},
{
<SPAN class="">"jobId"</SPAN>: <SPAN class="">2440430</SPAN>,
<SPAN class="">"name"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>dailyReport<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"description"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>Generate daily sales report<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"action"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>https://my-app.cfapps.eu10.hana.ondemand.com/reports/daily<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"active"</SPAN>: <SPAN class="">true</SPAN>,
<SPAN class="">"httpMethod"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>GET<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"jobType"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>HTTP_ENDPOINT<SPAN class="">"</SPAN></SPAN>,
<SPAN class="">"ansConfig"</SPAN>: { <SPAN class="">"onError"</SPAN>: <SPAN class="">false</SPAN>, <SPAN class="">"onSuccess"</SPAN>: <SPAN class="">false</SPAN> },
<SPAN class="">"calmConfig"</SPAN>: { <SPAN class="">"enabled"</SPAN>: <SPAN class="">true</SPAN> },
<SPAN class="">"createdAt"</SPAN>: <SPAN class=""><SPAN class="">"</SPAN>2026-02-10 08:00:00<SPAN class="">"</SPAN></SPAN>
}
],
<SPAN class="">"prev_url"</SPAN>: <SPAN class="">null</SPAN>,
<SPAN class="">"next_url"</SPAN>: <SPAN class="">null</SPAN>
}</PRE><DIV class=""> </DIV></DIV><P><STRONG><span class="lia-unicode-emoji" title=":party_popper:">🎉</span> Success!</STRONG><SPAN> </SPAN>You've just called the Job Scheduling service REST API directly from your browser!</P><DIV class=""><H3 id="toc-hId--1888563737">Understanding the Response</H3></DIV><P>The response shows:</P><UL><LI><STRONG>total</STRONG><SPAN> </SPAN>- Total number of jobs in your instance</LI><LI><STRONG>results</STRONG><SPAN> </SPAN>- Array of job objects with:<UL><LI><CODE>jobId</CODE><SPAN> </SPAN>- Unique job identifier</LI><LI><CODE>name</CODE><SPAN> </SPAN>- Job name</LI><LI><CODE>description</CODE><SPAN> </SPAN>- Job description</LI><LI><CODE>action</CODE><SPAN> </SPAN>- HTTP endpoint that gets invoked</LI><LI><CODE>httpMethod</CODE><SPAN> </SPAN>- HTTP method (GET, POST, PUT, DELETE, PATCH)</LI><LI><CODE>active</CODE><SPAN> </SPAN>- Whether the job is currently active</LI><LI><CODE>jobType</CODE><SPAN> </SPAN>- Type of job (e.g.<SPAN> </SPAN><CODE>HTTP_ENDPOINT</CODE>)</LI><LI><CODE>ansConfig</CODE><SPAN> </SPAN>- Alert Notification Service settings (<CODE>onError</CODE>,<SPAN> </SPAN><CODE>onSuccess</CODE>)</LI><LI><CODE>calmConfig</CODE><SPAN> </SPAN>- SAP Cloud ALM integration settings (<CODE>enabled</CODE>)</LI><LI><CODE>createdAt</CODE><SPAN> </SPAN>- When the job was created (UTC timestamp)</LI></UL></LI><LI><STRONG>prev_url</STRONG><SPAN> </SPAN>/<SPAN> </SPAN><STRONG>next_url</STRONG><SPAN> </SPAN>- Pagination links for navigating result pages</LI></UL><DIV class=""><H3 id="toc-hId--1916893551">Tips for Exploring Other Endpoints</H3></DIV><P>Now that you know how "Try It Out" works, explore other endpoints:</P><P><STRONG>Try creating a job</STRONG>:</P><UL><LI>Use<SPAN> </SPAN><CODE>POST /jobs</CODE><SPAN> </SPAN>with a sample request body</LI><LI>See the created job immediately in the response</LI></UL><P><STRONG>Retrieve job details</STRONG>:</P><UL><LI>Copy a job ID from the<SPAN> </SPAN><CODE>GET /jobs</CODE><SPAN> </SPAN>response</LI><LI>Use<SPAN> </SPAN><CODE>GET /jobs/{jobId}</CODE><SPAN> </SPAN>to see full details including schedules</LI></UL><P><STRONG>Check run logs</STRONG>:</P><UL><LI>Find a schedule ID from a job's schedules</LI><LI>Use<SPAN> </SPAN><CODE>GET /jobs/{jobId}/schedules/{scheduleId}/runs</CODE><SPAN> </SPAN>to see execution history</LI></UL><HR /><DIV class=""><H2 id="toc-hId--1820004049"><span class="lia-unicode-emoji" title=":books:">📚</span> Additional Resources</H2></DIV><DIV class=""><H3 id="toc-hId-1985046735">Official Documentation</H3></DIV><UL><LI><STRONG><A href="https://help.sap.com/docs/JOB_SCHEDULER" target="_blank" rel="noopener noreferrer">SAP Job Scheduling Service Help Portal</A></STRONG><SPAN> </SPAN>- Complete product documentation</LI><LI><STRONG><A href="https://help.sap.com/docs/job-scheduling/sap-job-scheduling-service/authentication" target="_blank" rel="noopener noreferrer">Authentication Guide</A></STRONG><SPAN> </SPAN>- OAuth 2.0 setup</LI></UL><DIV class=""><H3 id="toc-hId-1788533230">Blog Series</H3></DIV><UL><LI><STRONG><A href="https://community.sap.com/t5/technology-blogs-by-sap/job-scheduler-in-sap-business-technology-platform-overview-of-blog-posts/ba-p/13510707" target="_blank">Overview of All Blog Posts</A></STRONG><SPAN> </SPAN>- Complete tutorial series</LI></UL><DIV class=""><H3 id="toc-hId-1592019725">Discovery</H3></DIV><UL><LI><STRONG><A href="https://discovery-center.cloud.sap/serviceCatalog/job-scheduling-service" target="_blank" rel="nofollow noopener noreferrer">SAP Discovery 
Center</A></STRONG><SPAN> </SPAN>- Service overview and capabilities</LI></UL><HR /><DIV class=""><H2 id="toc-hId-1688909227"><span class="lia-unicode-emoji" title=":direct_hit:">🎯</span> What's Next?</H2></DIV><P>Now that you know how to explore and test the Job Scheduling Service API on Business Accelerator Hub, you're ready for the next level:</P><P><STRONG><span class="lia-unicode-emoji" title=":open_book:">📖</span> Coming Soon: Generate Your Own Job Scheduling Service Client Library</STRONG></P><P>In our next blog post, we'll show you how to:</P><UL><LI><span class="lia-unicode-emoji" title=":wrench:">🔧</span><SPAN> </SPAN><STRONG>Download the OpenAPI specification</STRONG><SPAN> </SPAN>from Business Accelerator Hub</LI><LI><span class="lia-unicode-emoji" title=":gear:">⚙️</span><SPAN> </SPAN><STRONG>Generate a type-safe client library</STRONG><SPAN> </SPAN>in your favorite language (Node.js, Java, Python, Go, etc.)</LI><LI><span class="lia-unicode-emoji" title=":laptop_computer:">💻</span><SPAN> </SPAN><STRONG>Use the generated client</STRONG><SPAN> </SPAN>to create jobs, schedules, and monitor execution</LI><LI><span class="lia-unicode-emoji" title=":rocket:">🚀</span><SPAN> </SPAN><STRONG>Build a complete working example</STRONG><SPAN> </SPAN>with authentication and error handling</LI></UL><P>Whether you're building with Node.js, Java, Python, or another language, you'll be able to generate a custom client tailored to your needs!</P><P><STRONG>Stay tuned!</STRONG><SPAN> </SPAN>Subscribe to the<SPAN> </SPAN><A href="https://community.sap.com/t5/technology-blogs-by-sap/job-scheduler-in-sap-business-technology-platform-overview-of-blog-posts/ba-p/13510707" target="_blank">SAP Job Scheduling Service blog series</A><SPAN> </SPAN>to get notified.</P><HR /><DIV class=""><H2 id="toc-hId-1492395722"><span class="lia-unicode-emoji" title=":speech_balloon:">💬</span> Questions or Feedback?</H2></DIV><P>Have questions about the Job Scheduling Service API on Business Accelerator Hub? 
Found a bug or have a feature request?</P><UL><LI><STRONG>Idea</STRONG><SPAN> </SPAN>- Submit your ideas on the<SPAN> </SPAN><A href="https://influence.sap.com/sap/ino/#campaign/2277" target="_blank" rel="noopener noreferrer">Influence Portal</A></LI><LI><span class="lia-unicode-emoji" title=":books:">📚</span><SPAN> </SPAN><STRONG>Help Portal</STRONG><SPAN> </SPAN>- Check the<SPAN> </SPAN><A href="https://help.sap.com/docs/JOB_SCHEDULER" target="_blank" rel="noopener noreferrer">official documentation</A><SPAN> </SPAN>and leave feedback</LI><LI><span class="lia-unicode-emoji" title=":speech_balloon:">💬</span><SPAN> </SPAN><STRONG>Community</STRONG><SPAN> </SPAN>- Comment here in this post</LI><LI><span class="lia-unicode-emoji" title=":e_mail:">📧</span><SPAN> </SPAN><STRONG>Contact me</STRONG><SPAN> </SPAN>- Find me on Linkedin:<SPAN> </SPAN><A href="https://www.linkedin.com/in/denis-duev/" target="_blank" rel="nofollow noopener noreferrer">https://www.linkedin.com/in/denis-duev/</A></LI></UL><HR /><DIV class=""><H2 id="toc-hId-1295882217"><span class="lia-unicode-emoji" title=":party_popper:">🎉</span> Summary</H2></DIV><P>The SAP Job Scheduling Service REST API is now discoverable on the Business Accelerator Hub, bringing:</P><UL><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>One central location</STRONG><SPAN> </SPAN>for API discovery alongside 1000+ SAP APIs</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Interactive testing</STRONG><SPAN> </SPAN>with "Try It Out" - no code required</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>OpenAPI 3.0.3 specification</STRONG><SPAN> </SPAN>for standards-based integration</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Up-to-date documentation</STRONG><SPAN> </SPAN>with examples and schemas</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span><SPAN> </SPAN><STRONG>Foundation for client generation</STRONG><SPAN> </SPAN>(covered in our next blog!)</LI></UL><P>Whether you're just starting with Job Scheduling service or looking to integrate it into your SAP BTP applications, the Business Accelerator Hub makes it easier than ever to get started.</P><P><STRONG>Happy Scheduling!</STRONG><SPAN> </SPAN><span class="lia-unicode-emoji" title=":rocket:">🚀</span></P>2026-03-05T07:00:00.029000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/from-sales-data-to-design-blueprints-how-rpt-1-enabled-our-fashion-inverse/ba-p/14345282From Sales Data to Design Blueprints: How RPT-1 Enabled Our Fashion Inverse Design Engine2026-03-10T03:31:48.778000+01:00NilsDitthttps://community.sap.com/t5/user/viewprofilepage/user-id/1804917<P><STRONG><FONT size="5"><SPAN>What We Built</SPAN></FONT></STRONG></P><DIV><DIV><DIV><SPAN>The Fashion Trend Alchemist is a fully functional inverse design engine that turns historical sales data into concrete design recommendations — complete with product visualizations, a catchy product name, and compelling sales copy. It works across product categories: coats, t-shirts, dresses, trousers, even shoes and sunglasses. 
The system adapts to any product type without retraining or reconfiguration.</SPAN></DIV><DIV> </DIV><DIV><DIV><DIV><FONT size="5"><STRONG>How it creates a product — the full pipeline:</STRONG></FONT></DIV><DIV><FONT size="5"><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="FTA_Visualization.jpeg" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381676i947863E69D677A1A/image-size/large?v=v2&px=999" role="button" title="FTA_Visualization.jpeg" alt="FTA_Visualization.jpeg" /></span></STRONG></FONT></DIV></DIV></DIV></DIV><EM><SPAN>The product creation workflow: from context selection through AI enrichment and RPT-1 prediction to the final product profile with images, name, and sales text. The refine loop lets users iterate on any design decision.</SPAN><BR /></EM></DIV><DIV><DIV><SPAN>As shown in the diagram above, product creation flows through four core phases:</SPAN></DIV><DIV><DIV><SPAN>1.</SPAN> <SPAN><STRONG>Context Setup</STRONG></SPAN><SPAN> — Select a product category (e.g., men's winter coats), define filters and date ranges, and let the system build a curated context of top and bottom performers from the sales data</SPAN></DIV><DIV><SPAN>2.</SPAN> <SPAN><STRONG>Enrichment</STRONG></SPAN><SPAN> — A Vision LLM analyzes product images to extract detailed design attributes (collar type, button style, fabric weight...) that don't exist in the raw database</SPAN></DIV><DIV><SPAN>3.</SPAN> <SPAN><STRONG>Prediction</STRONG></SPAN><SPAN> — The user specifies which attributes to lock and which to predict; RPT-1 fills in the optimal values using in-context learning from the enriched dataset</SPAN></DIV><DIV><SPAN>4.</SPAN> <SPAN><STRONG>Product Profile</STRONG></SPAN><SPAN> — The system generates multi-view product images, proposes a product name, and drafts sales copy — creating a complete, tangible design concept that can be refined and saved to a collection</SPAN></DIV><DIV> </DIV><DIV><DIV><STRONG><FONT size="5">Technical Architecture</FONT></STRONG></DIV><DIV><DIV><SPAN>This isn't a Jupyter notebook experiment — it's a deployed, interactive application where users explore design decisions in real time.</SPAN></DIV><DIV> <span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="FTA_TechnicalArchitecture.drawio.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381677i50448B14A9FAA007/image-size/large?v=v2&px=999" role="button" title="FTA_TechnicalArchitecture.drawio.png" alt="FTA_TechnicalArchitecture.drawio.png" /></span></DIV><DIV><DIV><EM>The Fashion Trend Alchemist runs on SAP BTP's Kyma runtime as a three-container deployment. The React frontend communicates with a Fastify/Node.js backend that orchestrates three AI services: RPT-1 for attribute prediction and GPT-4.1 for data enrichment and text generation (both via SAP AI Core), and a **self-hosted Z-Image Turbo** instance (product visualization, deployed as a Kyma microservice). 
PostgreSQL, Redis, and SeaweedFS handle data persistence, caching, and image storage respectively.</EM></DIV><DIV> </DIV><DIV><DIV><FONT size="5"><STRONG>How It Works: Designing a Men's Winter Coat</STRONG></FONT></DIV><DIV><DIV><SPAN>Let's walk through a concrete example to see each phase in action: a product manager wants to know what the next successful men's winter coat should look like.</SPAN></DIV><DIV> </DIV><DIV><DIV><STRONG>Building the Context</STRONG></DIV><DIV><DIV>The user selects "Coats" as the product type, filters for winter seasons and men's wear, and the system queries a large transaction dataset for matching products.</DIV><DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ContextBuilding.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381680i70555597C4F561BF/image-size/large?v=v2&px=999" role="button" title="ContextBuilding.png" alt="ContextBuilding.png" /></span></DIV><DIV><DIV><SPAN>For each matching product, the system calculates a </SPAN><SPAN><STRONG>velocity score</STRONG></SPAN><SPAN> — first computing units sold per day of availability (transaction count divided by days between first and last sale), then converting this to a percentile rank (0-100) across all matching products. A coat that sold 200 units in 30 days scores higher than one that sold 200 units over 6 months. The system then selects the top and bottom performers to create a balanced context showing RPT-1 clear examples of "success" and "failure."</SPAN></DIV><DIV> </DIV><DIV><DIV><STRONG>Enriching Products with Design Attributes</STRONG></DIV><DIV><DIV><SPAN>Fashion databases store generic columns: color, material, product group. But a coat's success depends on collar type, insulation level, and closure style — attributes that don't exist in the raw data.</SPAN></DIV><BR /><DIV><SPAN>We solve this with AI enrichment. First, an LLM generates a product-specific attribute schema (e.g., </SPAN><FONT color="#FF6600"><EM>`<FONT color="#FF6600">collar_type`</FONT></EM><EM>, `button_style`, `insulation_level`</EM></FONT><SPAN> with their possible values). Then, GPT-4.1 analyzes each product's image and extracts these attributes as structured data.</SPAN></DIV><DIV><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Enrichment.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381681i753526E38AD4DA0C/image-size/large?v=v2&px=999" role="button" title="Enrichment.png" alt="Enrichment.png" /></span></SPAN></DIV><DIV><DIV><DIV><EM>Real-time enrichment with progress tracking — GPT-4.1 extracts structured design attributes from product images.</EM></DIV><DIV> </DIV><DIV><DIV><DIV><STRONG>RPT-1 in Action: The Prediction</STRONG></DIV><DIV><DIV><DIV><SPAN>Now the core moment. The enriched context — ~50 products with their detailed attributes and velocity scores — is sent to RPT-1. 
The user constructs a </SPAN><SPAN><STRONG>query row</STRONG></SPAN><SPAN> where some attributes are fixed and others contain </SPAN><FONT color="#FF6600"><EM>`[PREDICT]`</EM></FONT><SPAN> placeholders.</SPAN></DIV><DIV> </DIV><DIV><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RPT-1Prediction.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381682i24299306986C01B9/image-size/large?v=v2&px=999" role="button" title="RPT-1Prediction.png" alt="RPT-1Prediction.png" /></span></SPAN></DIV></DIV></DIV></DIV></DIV></DIV></DIV><DIV> <DIV><DIV><SPAN>The interface uses a three-column layout:</SPAN></DIV><BR /><DIV><SPAN>-</SPAN> <SPAN><STRONG>Locked</STRONG></SPAN><SPAN> (left): Attributes the user wants to fix — "I know I want a Navy coat"</SPAN></DIV><DIV><SPAN>-</SPAN> <SPAN><STRONG>AI-Predicted</STRONG></SPAN><SPAN> (center): Attributes RPT-1 should determine — "What collar type? What button style?"</SPAN></DIV><DIV><SPAN>-</SPAN> <SPAN><STRONG>Excluded</STRONG></SPAN><SPAN> (right): Attributes irrelevant to this prediction</SPAN></DIV><DIV> </DIV><DIV><DIV><SPAN>The user sets a target success score and clicks "Transmute." RPT-1 receives the full context plus the query row in a single API call — no model training, no feature engineering, no pipeline orchestration. Formatted tabular data in, predictions out.</SPAN></DIV><BR /><DIV><SPAN>The entire ML layer is essentially: </SPAN><SPAN>_format data as rows → add query row with </SPAN><FONT color="#FF6600"><EM>`[PREDICT]`</EM></FONT><SPAN> → call API → parse response_</SPAN><SPAN>. The system complexity lives in data preparation, not model management.</SPAN></DIV><DIV> </DIV><DIV><DIV><STRONG>Bringing It to Life: The Full Product Profile</STRONG></DIV><DIV><DIV><SPAN>The predicted attributes are combined with locked values to generate a complete product profile. The system translates the attribute combination into structured image prompts and produces multiple product views via a self-hosted image generation model running on Kyma — using category-aware prompt templates (coats use ghost mannequin photography; footwear uses product pair shots; accessories get contextual framing). This is completed by a catchy product name and a sales text.</SPAN></DIV><DIV> </DIV><DIV><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ImageGeneration.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381683i7664A70DC86C050D/image-size/large?v=v2&px=999" role="button" title="ImageGeneration.png" alt="ImageGeneration.png" /></span></SPAN></DIV><DIV><DIV><SPAN>The result isn't an abstract list of attributes — it's a concrete product concept that a designer or product manager can evaluate, iterate on, and refine. The refine loop shown in the pipeline diagram lets users adjust any attribute and regenerate until they're satisfied, then save the design to a collection.</SPAN></DIV><DIV> </DIV><DIV><DIV><FONT size="5"><STRONG>What We Learned About RPT-1</STRONG></FONT></DIV><DIV><DIV><STRONG>Prediction Becomes a Data Formatting Problem</STRONG></DIV><DIV><DIV><SPAN>RPT-1 is pretrained and uses in-context learning — you don't train the model, you provide examples and a query. 
The API structure is remarkably simple:</SPAN></DIV><DIV><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RPT-1Query.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381684i600915C511956D2C/image-size/large?v=v2&px=999" role="button" title="RPT-1Query.png" alt="RPT-1Query.png" /></span></SPAN></DIV><DIV><DIV><SPAN>The entire prediction integration is ~50 lines of TypeScript. No training scripts, no model serialization, no serving infrastructure. This drastically lowers the barrier for teams outside ML/data science — if you can format tabular data, you can use RPT-1.</SPAN></DIV><BR /><DIV><SPAN>Dynamic schemas come free: coats need </SPAN><SPAN>`collar_type`</SPAN><SPAN> and </SPAN><SPAN>`button_style`</SPAN><SPAN>, sunglasses need </SPAN><SPAN>`lens_tint`</SPAN><SPAN> and </SPAN><SPAN>`frame_shape`</SPAN><SPAN>. With traditional ML, changing features means re-encoding, retraining, revalidating. With RPT-1, just change the columns.</SPAN></DIV><DIV> </DIV><DIV><DIV><STRONG>Validating RPT-1 in the Fashion Domain</STRONG></DIV><DIV><DIV><SPAN>Our goal wasn't to build a production prediction system — it was to test whether RPT-1's in-context learning could handle fashion data meaningfully. To validate this, we inverted the workflow: instead of predicting attributes from success scores (inverse design), we tested how well RPT-1 predicts success from attributes (forward prediction). This serves as a sanity check — if the model understands attribute-success relationships, it can leverage them for inverse predictions.</SPAN></DIV><BR /><DIV><SPAN>We benchmarked RPT-1 against traditional gradient boosting models (XGBoost, CatBoost) across five product categories using an 80/20 random split:</SPAN></DIV><DIV><TABLE border="1" width="100%"><TBODY><TR><TD width="25%" height="30px"><DIV><DIV><STRONG>Model</STRONG></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><STRONG>R² </STRONG></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><STRONG>NDCG </STRONG></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><STRONG>Hit@25</STRONG></DIV></DIV></TD></TR><TR><TD width="25%" height="30px"><DIV><DIV><SPAN>RPT-1</SPAN></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><SPAN>+0.302 </SPAN></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><STRONG>0.954</STRONG></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><STRONG>60.5%</STRONG></DIV></DIV></TD></TR><TR><TD width="25%" height="30px"><DIV><DIV><SPAN>XGBoost</SPAN></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><SPAN>+0.267 </SPAN></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><SPAN>0.946</SPAN></DIV></DIV></TD><TD width="25%" height="30px">55.1%</TD></TR><TR><TD width="25%" height="30px"><DIV><DIV><SPAN>CatBoost</SPAN></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><SPAN>+0.258 </SPAN></DIV></DIV></TD><TD width="25%" height="30px"><DIV><DIV><SPAN>0.945</SPAN></DIV></DIV></TD><TD width="25%" height="30px"> 56.4%</TD></TR></TBODY></TABLE><DIV><DIV><EM>Aggregated across five product categories. RPT-1 matches established gradient boosting models on absolute fit (R²) and edges them out on ranking quality (NDCG, Hit@25) — the metrics that matter most for separating winners from losers.</EM></DIV><DIV> </DIV><DIV><DIV><SPAN>The scatter plot below shows this concretely for ladies' dresses: top performers (Q4, green) cluster where predicted, while bottom performers (Q1, red) separate clearly. 
The model isn't perfect, but the quartile separation confirms RPT-1 successfully learned from the in-context examples.</SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="row_order_dress_ladies_all_seasons_rank_database_order.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381685i73ABB1EC108D5022/image-size/large?v=v2&px=999" role="button" title="row_order_dress_ladies_all_seasons_rank_database_order.png" alt="row_order_dress_ladies_all_seasons_rank_database_order.png" /></span></DIV><DIV><DIV><EM>Actual vs. predicted rank for Ladies All-Season Dresses (ρ = 0.626). Clear quartile separation demonstrates RPT-1 effectively distinguishes high from low performers using only in-context examples.</EM></DIV><DIV> </DIV><DIV><DIV><SPAN><STRONG>The key finding:</STRONG></SPAN><SPAN> RPT-1 worked out-of-the-box for fashion data without domain-specific tuning. For teams exploring tabular foundation models in new domains, this suggests the pretrained capabilities transfer broadly. RPT-1's in-context learning architecture particularly shines in low-data scenarios — some niche categories had only ~23 items in the context, where RPT-1 handled these gracefully.</SPAN></DIV><DIV> </DIV><DIV> </DIV><DIV><DIV> </DIV><DIV> </DIV><DIV><STRONG>Context Quality Matters More Than Context Structure</STRONG></DIV><DIV><DIV><SPAN>We systematically tested what affects RPT-1's prediction quality:</SPAN></DIV><DIV> </DIV><DIV><SPAN><STRONG>Row and column ordering</STRONG> — by design, RPT-1's architecture is invariant to the ordering of rows and columns, so reordering should not affect predictions (up to minor numerical artifacts). Our experiments confirmed this: shuffling context rows or reordering columns produced no measurable difference in results — exactly as expected.<BR /></SPAN></DIV><BR /><DIV><STRONG>Column naming</STRONG> — inherently relevant, though RPT-1 proved surprisingly resilient. Even when we degraded names to synonyms, German translations, or numeric codes, performance variations were minor and inconsistent. Descriptive column names remain a best practice, but the model handles imperfect naming gracefully.<BR /><BR /></DIV><DIV> </DIV><DIV><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="naming_chaos.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381686i7C018781DD3D1BE2/image-size/large?v=v2&px=999" role="button" title="naming_chaos.png" alt="naming_chaos.png" /></span></SPAN></DIV><DIV><DIV><DIV>Spearman correlation across naming conditions (Original, Synonyms, German, Mixed, Legacy, Chaos, Numeric). RPT-1 proved robust across conditions in our dataset, though meaningful column names remain a best practice for optimal predictions.</DIV><DIV> </DIV></DIV><DIV><SPAN><STRONG>Out-of-distribution contamination</STRONG></SPAN><SPAN> — high impact. When wrong products polluted the context (e.g., shirts mixed into a hoodies dataset), prediction quality degraded significantly.</SPAN></DIV><DIV> </DIV><DIV><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ood_contamination.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381687iDC680E28EAC51BD7/image-size/large?v=v2&px=999" role="button" title="ood_contamination.png" alt="ood_contamination.png" /></span></SPAN></DIV><DIV><DIV><DIV><DIV><EM>Spearman correlation drops as OOD contamination increases from 0% to 40%. 
Clean data matters far more than clever formatting.</EM></DIV></DIV><SPAN><BR /><STRONG>The takeaway: RPT-1 handles structural variations gracefully — row/column ordering has no effect, and naming degradation had limited impact in our tests. Your primary effort should go into data cleaning and proper normalization.</STRONG><BR /><BR /></SPAN></DIV><DIV><DIV><SPAN><FONT size="5"><STRONG>Challenges We Faced and How We Solved Them</STRONG></FONT></SPAN></DIV><DIV><STRONG>Real-World Data is Messy</STRONG></DIV><BR /><DIV><SPAN><STRONG>Problem:</STRONG></SPAN><SPAN> Product catalogs contain labeling errors — shirts categorized as hoodies, skirts labeled as pants. In-context learning amplifies these errors: garbage in, garbage out.</SPAN></DIV><BR /><DIV><SPAN><STRONG>Solution:</STRONG></SPAN><SPAN> Our Vision LLM enrichment returns a </SPAN><EM><FONT color="#FF6600">`mismatchConfidence`</FONT></EM><SPAN> score (0-100) alongside extracted attributes. Products scoring above 80 are flagged for human review. The user sees a prefiltered list and can correct or exclude problematic items before prediction. What started as an afterthought became essential.</SPAN></DIV><BR /><DIV><FONT size="5"><STRONG>GenAI Output is Unpredictable</STRONG></FONT></DIV><BR /><DIV><SPAN><STRONG>Problem:</STRONG></SPAN><SPAN> LLM outputs can be wrong or malformed — invalid JSON, attributes outside valid ranges, unexpected structures.</SPAN></DIV><BR /><DIV><SPAN><STRONG>Solution:</STRONG></SPAN><SPAN> Multi-layer guardrails:</SPAN></DIV><BR /><DIV><SPAN>-</SPAN><SPAN> Prompting with positive examples and explicit output structure</SPAN></DIV><DIV><SPAN>-</SPAN><SPAN> Runtime validation with </SPAN><SPAN><STRONG>Zod schemas</STRONG></SPAN><SPAN> to catch structural errors</SPAN></DIV><DIV><SPAN>-</SPAN><SPAN> Attribute values validated against the ontology's allowed options</SPAN></DIV><DIV><SPAN>-</SPAN><SPAN> Retry logic with exponential backoff (up to 3 attempts)</SPAN></DIV><DIV> </DIV><DIV><DIV><SPAN>Users can manually adjust results or send them back to the LLM with feedback.</SPAN></DIV><DIV><FONT size="4"><STRONG> </STRONG></FONT></DIV><DIV><DIV><FONT size="4"><STRONG>Consistency Across Generated Images</STRONG></FONT></DIV><DIV><DIV><SPAN><STRONG>Problem:</STRONG></SPAN><SPAN> Three product views (front, back, model) must show the same design. Different prompts lead to inconsistent products.</SPAN></DIV><BR /><DIV><SPAN><STRONG>Solution:</STRONG></SPAN><SPAN> The LLM generates a single shared </SPAN><EM><FONT color="#FF6600">`productDescription`</FONT></EM><SPAN> used as the consistent core across all views, combined with view-specific prefixes and category-aware templates (wearables use ghost mannequin photography; footwear uses product pair shots).</SPAN></DIV><DIV> </DIV><DIV><DIV><FONT size="5"><STRONG>Closing</STRONG></FONT></DIV><BR /><DIV><SPAN><STRONG>Three takeaways for your next project:</STRONG></SPAN></DIV><BR /><DIV><SPAN>1.</SPAN> <SPAN><STRONG>RPT-1 turns prediction into a data selection & formatting problem.</STRONG></SPAN><SPAN> No training pipeline, no ML expertise required — just clean tabular data with a success metric.</SPAN></DIV><BR /><DIV>2.<STRONG> Context quality is paramount.</STRONG> Row and column ordering has no effect on RPT-1 by design, and the model proved resilient to naming variations in our experiments — though descriptive column names remain recommended. 
What matters most is clean, relevant data: wrong items in the context degrade predictions significantly.</DIV><BR /><DIV><SPAN>3.</SPAN> <SPAN><STRONG>Build guardrails around GenAI.</STRONG> </SPAN><SPAN>Validate LLM outputs with schemas, constrain responses to allowed values, and keep humans in the loop for edge cases.</SPAN></DIV><DIV> </DIV><DIV> </DIV><DIV><DIV><DIV><EM>Developed at BTP Solution Advisory APAC by Danylo Polishchuk and Nils Dittrich.</EM></DIV><DIV> </DIV><DIV><DIV><DIV><SPAN>For questions or collaboration opportunities, connect with us on LinkedIn: <A href="https://www.linkedin.com/in/danylo-polishchuk-b365473b0/" target="_blank" rel="noopener nofollow noreferrer">Danylo Polishchuk</A> | <A href="https://www.linkedin.com/in/nils-dittrich-667991308" target="_self" rel="nofollow noopener noreferrer">Nils Dittrich.</A></SPAN></DIV><DIV> </DIV><DIV><DIV><DIV><STRONG>Want to see the Fashion Trend Alchemist in action?</STRONG><SPAN> Check out the </SPAN><A href="https://sapvideo.cfapps.eu10-004.hana.ondemand.com/?entry_id=1_9tpnzdcg" target="_self" rel="nofollow noopener noreferrer"><SPAN>demo video</SPAN></A><SPAN> for a walkthrough of the complete workflow.</SPAN></DIV><BR /><DIV><STRONG>Want to try RPT-1 yourself?</STRONG><SPAN> Explore the </SPAN><A href="https://rpt.cloud.sap/login?next=%2Fdashboard" target="_self" rel="nofollow noopener noreferrer"><SPAN>RPT-1 Playground </SPAN></A><SPAN>to test tabular predictions with your own data.</SPAN></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV></DIV>2026-03-10T03:31:48.778000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/getting-started-with-sap-btp-kyma-runtime-using-the-cli/ba-p/14347069Getting Started with SAP BTP Kyma Runtime Using the CLI2026-03-12T10:27:43.536000+01:00neilaspinhttps://community.sap.com/t5/user/viewprofilepage/user-id/167493<P>Introduction</P><P>If you're learning SAP BTP Kyma Runtime, most tutorials point you straight to the cockpit UI. But understanding the CLI tools — <CODE>btp</CODE> and <CODE>kubectl</CODE> — gives you a much deeper understanding of what's actually happening under the hood. In this post I'll walk through connecting to a Kyma cluster and exploring its core components purely from the command line.</P><HR /><H2 id="toc-hId-1791516416">Prerequisites</H2><UL><LI>SAP BTP Trial account with Kyma Runtime enabled</LI><LI><CODE>btp</CODE><SPAN> </SPAN>CLI installed (<A href="https://tools.hana.ondemand.com/#cloud" target="_blank" rel="noopener noreferrer nofollow">download here</A>)</LI><LI><CODE>kubectl</CODE><SPAN> </SPAN>installed</LI></UL><HR /><H2 id="toc-hId-1595002911">Step 1: Find Your Subaccount and Kyma Instance</H2><P>Log in to BTP via CLI and find your subaccount:</P><DIV class=""><PRE><CODE>btp login
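# btp login typically prompts for the CLI server URL, your global account subdomain, and your credentials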
btp list accounts/subaccount</CODE></PRE></DIV><P>Then find your Kyma environment instance:</P><DIV class=""><PRE><CODE>btp list accounts/environment-instance --subaccount <subaccount-id></CODE></PRE></DIV><P>You should see something like:</P><DIV class=""><PRE><CODE>environment name environment id environment type state
<subdomain>-7tn04i0d <environment-id> kyma OK</CODE></PRE></DIV><P>Get the details including your kubeconfig URL:</P><DIV class=""><PRE><CODE>btp get accounts/environment-instance <environment-id> --subaccount <subaccount-id></CODE></PRE></DIV><P>This returns a <CODE>labels</CODE> field containing your <CODE>KubeconfigURL</CODE> and <CODE>APIServerURL</CODE>.</P><HR /><H2 id="toc-hId-1398489406">Step 2: Download the Kubeconfig and Connect kubectl</H2><P>Download the kubeconfig from the Kyma Dashboard or via the KubeconfigURL, then point kubectl at it:</P><DIV class=""><PRE><CODE>cp ~/Downloads/kubeconfig.yaml ~/.kube/kyma-config.yaml
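# exporting KUBECONFIG scopes the Kyma kubeconfig to this shell session and leaves ~/.kube/config untouched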
export KUBECONFIG=~/.kube/kyma-config.yaml
kubectl get nodes</CODE></PRE></DIV><P>On a trial cluster you'll see an AWS EC2 worker node:</P><DIV class=""><PRE><CODE>NAME STATUS ROLES AGE VERSION
<node-name>.ec2.internal Ready worker 7d v1.33.7</CODE></PRE></DIV><HR /><H2 id="toc-hId-1201975901">Step 3: Explore the Kyma Cluster</H2><H3 id="toc-hId-1134545115">Namespaces</H3><DIV class=""><PRE><CODE>kubectl get namespaces</CODE></PRE></DIV><P>Key namespaces:</P><P>Namespace Purpose</P><TABLE><TBODY><TR><TD><CODE>kyma-system</CODE></TD><TD>Core Kyma components</TD></TR><TR><TD><CODE>istio-system</CODE></TD><TD>Service mesh</TD></TR><TR><TD><CODE>default</CODE></TD><TD>Your workloads</TD></TR></TBODY></TABLE><H3 id="toc-hId-938031610">Core Components in kyma-system</H3><DIV class=""><PRE><CODE>kubectl get pods -n kyma-system</CODE></PRE></DIV><P>Pod Purpose</P><TABLE><TBODY><TR><TD><CODE>api-gateway-controller-manager</CODE></TD><TD>Manages APIRule resources</TD></TR><TR><TD><CODE>btp-manager-controller-manager</CODE></TD><TD>Manages BTP service bindings</TD></TR><TR><TD><CODE>istio-controller-manager</CODE></TD><TD>Manages Istio service mesh</TD></TR><TR><TD><CODE>sap-btp-operator-controller-manager</CODE></TD><TD>Provisions BTP services as Kubernetes resources</TD></TR><TR><TD><CODE>warden-*</CODE></TD><TD>Image trust validation</TD></TR></TBODY></TABLE><H3 id="toc-hId-741518105">Kyma Custom Resources (CRDs)</H3><DIV class=""><PRE><CODE>kubectl api-resources | grep kyma</CODE></PRE></DIV><P>Key CRDs:</P><P>Resource Purpose</P><TABLE><TBODY><TR><TD><CODE>APIRule</CODE></TD><TD>Expose services externally with auth/routing</TD></TR><TR><TD><CODE>RateLimit</CODE></TD><TD>Throttle API traffic</TD></TR><TR><TD><CODE>BtpOperator</CODE></TD><TD>Configure BTP service operator</TD></TR><TR><TD><CODE>Kyma</CODE></TD><TD>Top-level Kyma installation resource</TD></TR></TBODY></TABLE><HR /><H2 id="toc-hId-415921881">Step 4: Understanding an APIRule</H2><P>The <CODE>APIRule</CODE> is the most important Kyma concept for exposing workloads. Here's what a typical one looks like:</P><DIV class=""><PRE><CODE>apiVersion: gateway.kyma-project.io/v2
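# illustrative snippet: metadata (name, namespace) omitted; the host is a subdomain of your cluster's kyma.ondemand.com domain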
kind: APIRule
spec:
  gateway: kyma-system/kyma-gateway
  hosts:
    - myapp.<cluster-id>.kyma.ondemand.com
  rules:
    - methods: [GET, POST]
      noAuth: true
      path: /*
      service:
        name: myapp
        port: 80</CODE></PRE></DIV><H3 id="toc-hId-348491095">Traffic flow:</H3><DIV class=""><PRE><CODE>Internet
→ https://myapp.<cluster-id>.kyma.ondemand.com
→ kyma-gateway (Istio ingress in kyma-system)
→ Service: myapp:80
→ Pod (with Istio sidecar injected automatically)</CODE></PRE></DIV><P>Note the <CODE>noAuth: true</CODE> setting — fine for development but in production you'd replace this with a JWT handler pointing to XSUAA or another identity provider.</P><HR /><H2 id="toc-hId-22894871">Key Observations</H2><UL><LI><STRONG>Istio sidecar injection</STRONG><SPAN> </SPAN>is automatic — any pod in a labelled namespace gets a second container (<CODE>2/2</CODE><SPAN> </SPAN>in<SPAN> </SPAN><CODE>kubectl get pods</CODE>)</LI><LI><STRONG>APIRules replace Ingress</STRONG><SPAN> </SPAN>in Kyma — don't use standard Kubernetes Ingress resources</LI><LI><STRONG>BTP services bind as Kubernetes Secrets</STRONG><SPAN> </SPAN>— via<SPAN> </SPAN><CODE>ServiceInstance</CODE><SPAN> </SPAN>and<SPAN> </SPAN><CODE>ServiceBinding</CODE><SPAN> </SPAN>resources managed by the SAP BTP Operator</LI></UL><HR /><H2 id="toc-hId-173635723">Next Steps</H2><UL><LI>Bind a BTP service (XSUAA, Destination Service) using<SPAN> </SPAN><CODE>ServiceInstance</CODE><SPAN> </SPAN>and<SPAN> </SPAN><CODE>ServiceBinding</CODE></LI><LI>Secure an APIRule with JWT authentication</LI><LI>Deploy a Kyma Serverless Function</LI></UL>2026-03-12T10:27:43.536000+01:00https://community.sap.com/t5/artificial-intelligence-blogs-posts/how-our-team-started-using-ai-to-supercharge-daily-dev-work-in-sap/ba-p/14347072How Our Team Started Using AI to Supercharge Daily Dev Work in SAP Logistics Management2026-03-12T10:28:51.321000+01:00eric_wang_07https://community.sap.com/t5/user/viewprofilepage/user-id/252626<P><STRONG>Taming the Dependency Update Avalanche </STRONG></P><P>If you've ever managed dependency updates across multiple repositories, you know the pain. Every morning, there's a fresh batch of version bumps waiting for you. Click into GitHub, check the build status, review the diff, approve, merge, repeat. Multiply that by half a dozen repos, and congratulations — your morning is gone. With AI assistance, we condensed that entire ritual into a single conversation. Ask the AI to check all open dependency PRs, get a summary table with build status and update needs, then tell it which ones to approve. What used to be a 20-minute clicking marathon now takes seconds. It's not glamorous, but reclaiming that time every single day adds up fast.</P><P><STRONG>From Task Description to Implementation Plan — Without the Ramp-Up </STRONG></P><P>We've all been there: you pick up a task from the backlog, and you have zero context. The description references some integration pipeline, a data model you've never touched, and a field mapping buried in documentation you didn't know existed. Instead of spending an hour ramping up, we started asking the AI to do the legwork. It reads the task description, pulls up relevant documentation, cross-references technical specs, and even verifies assumptions against design documents. Then it drafts an implementation plan — sometimes spinning up parallel research threads to speed things up. The key insight here isn't that AI writes perfect plans. It's that the research phase — the part where you're just gathering context — gets compressed dramatically. You still make the decisions, but you make them faster because the information is already in front of you.</P><P><STRONG>Rolling Out Fixes Across Multiple Repos </STRONG></P><P>Here's a scenario every team knows: you find a bug that affects multiple repositories. 
The fix is straightforward, but applying it means cloning each repo, making the same change, creating a PR, writing the description — rinse and repeat. We found that AI assistants are surprisingly good at this. Describe the issue once, point it at the affected repos, and let it implement the fix across all of them. It even creates individual pull requests with proper descriptions. Three PRs from a single conversation, no copy-pasting required. This is the kind of task where AI doesn't need to be creative — it just needs to be consistent and thorough. And it nails that.</P><P><STRONG>Build Failure Detective Work </STRONG></P><P>The classic "hey, can you check why the pipeline failed?" message. Sometimes you're in a meeting, sometimes you're just deep in another task and don't want to context-switch. Build logs are long, noisy, and full of red herrings. We started delegating the initial investigation to AI. It digs through CI/CD build logs, identifies the actual failures buried in the noise, and gives you a verdict: is this a safe update that just needs a test fix, or is there a genuine regression? The kind of log archaeology that normally takes 15 minutes of scrolling and squinting — done in about one. You still make the call on what to do next. But at least you're making it with a clear summary instead of raw log output.</P><P><STRONG>What We Actually Learned</STRONG></P><P>After a few weeks of working this way, a few things became clear:</P><UL><LI><STRONG>AI is best at the boring stuff</STRONG>. The more repetitive and well-defined a task is, the more value you get. Don't start with "write me a new microservice." Start with "check these 12 PRs for me."</LI><LI><STRONG>Trust, but verify.</STRONG> AI will confidently give you wrong answers sometimes. Cross-checking matters. The good news is that the AI itself can help you verify — ask it to double-check against documentation or specs.</LI><LI><STRONG>Context is everything.</STRONG> The more your AI assistant understands your codebase, conventions, and tooling, the more useful it becomes. Invest time in setting that up — it pays off quickly.</LI><LI><STRONG>It changes how you plan your day.</STRONG> When you know you can offload the tedious parts, you start structuring your work differently. Kick off an AI task before a meeting, review the results after. It's like having a junior developer who never gets bored.</LI></UL>2026-03-12T10:28:51.321000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/update-btp-trial-update-kyma-runtime-availability/ba-p/14362992[UPDATE] BTP Trial Update: Kyma Runtime Availability2026-03-31T17:18:32.448000+02:00marcoporruhttps://community.sap.com/t5/user/viewprofilepage/user-id/592649<P>Dear SAP BTP trial community,</P><P>We always aimed to give users the most freedom and flexibility in the SAP BTP trial to ensure that everyone can test and explore our software or use it for learning purposes. For the next weeks, the SAP BTP trial will not be able to deliver this experience for new SAP BTP trial accounts as we have to prioritize the security and reliability of our SAP BTP trial system. We are experiencing an unusually high amount of misuse limiting the availability for all of our users. </P><P>Therefore, we decided to temporarily reduce the offering for the SAP BTP Kyma runtime for new BTP Trial Accounts. You will still be able to build applications and preview them, but deployment in the SAP BTP Kyma runtime will not be possible for the next weeks. </P><P>How can you still explore SAP BTP? 
</P><UL><LI>All other services will remain available and can be explored on the <A href="https://discovery-center.cloud.sap/viewServices?regions=all&commercialModel=trial" target="_blank" rel="noopener nofollow noreferrer">SAP Discovery Center</A></LI><LI>As a partner: Refer to our newly released <A href="https://news.sap.com/2025/08/partners-free-sap-build-licenses-tdd-ai-powered-intelligent-applications/" target="_blank" rel="noopener noreferrer">free TDD licenses</A></LI><LI>As a customer: Leverage the free tier offering as part of Pay-As-You-Go for SAP BTP available in the <A href="https://www.sap.com/store.html" target="_blank" rel="noopener noreferrer">SAP Store</A></LI></UL><P>All updates will be posted in the SAP Community. </P><P> </P><P>See you soon, </P><P>SAP BTP trial product team</P>2026-03-31T17:18:32.448000+02:00https://community.sap.com/t5/technology-blog-posts-by-members/calling-the-sap-business-partner-api-from-a-kyma-serverless-function-via/ba-p/14285338Calling the SAP Business Partner API from a Kyma Serverless Function via BTP Destination Service2026-04-02T10:12:48.110000+02:00neilaspinhttps://community.sap.com/t5/user/viewprofilepage/user-id/167493<H2 id="toc-hId-1766524556">Introduction</H2><P>One of the most common integration patterns in SAP BTP is connecting a Kyma serverless function to an external SAP API — securely, without hardcoding credentials. In this post I'll walk through exactly how to do that using the BTP Destination Service, a Kyma ServiceBinding, and the SAP API Business Accelerator Hub sandbox.</P><P>By the end you'll have a working Node.js Kyma function that:</P><OL><LI>Obtains an OAuth token from XSUAA using client credentials</LI><LI>Resolves a named BTP Destination to get the target API URL</LI><LI>Calls the SAP Business Partner API (A_BusinessPartner) and returns JSON results</LI></OL><P>All credentials are stored in Kubernetes secrets — nothing is hardcoded.</P><HR /><H2 id="toc-hId-1570011051">Prerequisites</H2><UL><LI>SAP BTP Trial account with a Kyma cluster enabled</LI><LI>kubectl configured against your Kyma cluster</LI><LI>An account on <A href="https://api.sap.com/" target="_blank" rel="noopener noreferrer">api.sap.com</A> with an API key for the sandbox</LI><LI>VSCode with the Kubernetes extension (optional but useful)</LI></UL><HR /><H2 id="toc-hId-1373497546">Step 1: Enable the Serverless Module in Kyma</H2><P>By default, the Serverless module may not be enabled on your Kyma cluster. Check:</P><PRE><CODE>kubectl get crd | grep serverless</CODE></PRE><P>If nothing is returned, go to your BTP cockpit, open the Kyma cluster overview, click <STRONG>Modules → Add</STRONG>, and enable <STRONG>Serverless</STRONG>. Wait a few minutes and recheck.</P><HR /><H2 id="toc-hId-1176984041">Step 2: Create the BTP Destination Service Instance</H2><P>We need a Destination Service instance to store and resolve our API destination. Rather than creating it through the BTP cockpit UI (which has limitations on Trial for Kyma runtime), we use the BTP Service Operator directly in Kubernetes.</P><P>First verify the BTP Service Operator CRDs are available:</P><PRE><CODE>kubectl get crd | grep sap</CODE></PRE><P>You should see <CODE>serviceinstances.services.cloud.sap.com</CODE> and <CODE>servicebindings.services.cloud.sap.com</CODE>.</P><P>Create <CODE>k8s/destination-instance.yaml</CODE>:</P><PRE><CODE>apiVersion: services.cloud.sap.com/v1
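# provisioned by the SAP BTP Service Operator in the subaccount this Kyma cluster is bound to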
kind: ServiceInstance
metadata:
  name: destination-service
  namespace: default
spec:
  serviceOfferingName: destination
  servicePlanName: lite</CODE></PRE><P>Apply it:</P><PRE><CODE>kubectl apply -f k8s/destination-instance.yaml
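# re-run the get command until the instance reports Ready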
kubectl get serviceinstances -n default</CODE></PRE><P>Wait until the status shows <CODE>Ready</CODE>.</P><HR /><H2 id="toc-hId-980470536">Step 3: Create the Service Binding</H2><P>The binding injects the Destination Service credentials (XSUAA client ID, client secret, token URL, and service URI) into a Kubernetes secret.</P><P>Create <CODE>k8s/destination-binding.yaml</CODE>:</P><PRE><CODE>apiVersion: services.cloud.sap.com/v1
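# the binding produces a Secret of the same name (destination-service-binding) containing clientid, clientsecret, url and uri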
kind: ServiceBinding
metadata:
  name: destination-service-binding
  namespace: default
spec:
  serviceInstanceName: destination-service</CODE></PRE><P>Apply it:</P><PRE><CODE>kubectl apply -f k8s/destination-binding.yaml
kubectl get servicebindings -n default</CODE></PRE><P>Verify the secret was created and check the keys:</P><PRE><CODE>kubectl get secret destination-service-binding -n default -o jsonpath='{.data}' | python3 -m json.tool</CODE></PRE><P>You'll see keys including <CODE>clientid</CODE>, <CODE>clientsecret</CODE>, <CODE>url</CODE> (XSUAA token endpoint), and <CODE>uri</CODE> (Destination Service endpoint).</P><HR /><H2 id="toc-hId-783957031">Step 4: Create the BTP Destination</H2><P>In BTP cockpit go to <STRONG>Connectivity → Destinations → New Destination → From Scratch</STRONG> and fill in:</P><P>Field Value</P><TABLE><TBODY><TR><TD>Name</TD><TD>S4H_SANDBOX_BP</TD></TR><TR><TD>Type</TD><TD>HTTP</TD></TR><TR><TD>URL</TD><TD><A href="https://sandbox.api.sap.com/s4hanacloud/sap/opu/odata/sap/API_BUSINESS_PARTNER" target="_blank" rel="noopener noreferrer">https://sandbox.api.sap.com/s4hanacloud/sap/opu/odata/sap/API_BUSINESS_PARTNER</A></TD></TR><TR><TD>Authentication</TD><TD>NoAuthentication</TD></TR><TR><TD>Additional Property: APIKey</TD><TD><EM>your api.sap.com API key</EM></TD></TR></TBODY></TABLE><P>Save it. The API key is stored as a destination property — the function will pass it as a request header.</P><HR /><H2 id="toc-hId-587443526">Step 5: Store the API Key as a Kubernetes Secret</H2><P>Never put API keys in your function code or YAML. Create a secret:</P><PRE><CODE>kubectl create secret generic bp-function-apikey \
--from-literal=apikey='<your-api-key>' \
-n default</CODE></PRE><HR /><H2 id="toc-hId-390930021">Step 6: Enable Istio Sidecar Injection</H2><P>The Kyma APIRule requires Istio to be injected into the function pod. Label the namespace:</P><PRE><CODE>kubectl label namespace default istio-injection=enabled --overwrite</CODE></PRE><HR /><H2 id="toc-hId-194416516">Step 7: Deploy the Function</H2><P>Here is the complete <CODE>k8s/function.yaml</CODE>. The inline source contains the full Node.js function — no external dependencies required beyond Node built-ins.</P><PRE><CODE>apiVersion: serverless.kyma-project.io/v1alpha2
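# the spec.env section near the bottom maps the binding secret and the API-key secret into the environment variables read by the inline source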
kind: Function
metadata:
  name: bp-function
  namespace: default
  labels:
    app: bp-function
spec:
  runtime: nodejs20
  source:
    inline:
      dependencies: |
        {
          "name": "kyma-bp-function",
          "version": "1.0.0",
          "description": "Kyma serverless function to fetch SAP Business Partners via BTP Destination Service",
          "main": "handler.js",
          "engines": {
            "node": ">=18"
          }
        }
      source: |
        const https = require("https");
        const http = require("http");
        const zlib = require("zlib");
        async function getDestinationToken() {
          const credentials = Buffer.from(
            `${process.env.CLIENTID}:${process.env.CLIENTSECRET}`
          ).toString("base64");
          const tokenUrl = new URL(`${process.env.XSUAA_URL}/oauth/token`);
          const body = "grant_type=client_credentials";
          return new Promise((resolve, reject) => {
            const options = {
              hostname: tokenUrl.hostname,
              path: `${tokenUrl.pathname}?${body}`,
              method: "POST",
              headers: {
                Authorization: `Basic ${credentials}`,
                "Content-Type": "application/x-www-form-urlencoded",
              },
            };
            const req = https.request(options, (res) => {
              let data = "";
              res.on("data", (chunk) => (data += chunk));
              res.on("end", () => {
                try {
                  resolve(JSON.parse(data).access_token);
                } catch (e) {
                  reject(new Error(`Failed to parse token response: ${data}`));
                }
              });
            });
            req.on("error", reject);
            req.end();
          });
        }
        async function getDestination(destinationName, token) {
          const destUrl = new URL(
            `${process.env.DESTINATION_URI}/destination-configuration/v1/destinations/${destinationName}`
          );
          return new Promise((resolve, reject) => {
            const options = {
              hostname: destUrl.hostname,
              path: destUrl.pathname,
              method: "GET",
              headers: {
                Authorization: `Bearer ${token}`,
              },
            };
            const req = https.request(options, (res) => {
              let data = "";
              res.on("data", (chunk) => (data += chunk));
              res.on("end", () => {
                try {
                  resolve(JSON.parse(data));
                } catch (e) {
                  reject(new Error(`Failed to parse destination response: ${data}`));
                }
              });
            });
            req.on("error", reject);
            req.end();
          });
        }
        async function fetchBusinessPartners(destination) {
          const destUrl = destination.destinationConfiguration;
          const targetUrl = new URL(`${destUrl.URL}/A_BusinessPartner?$format=json&$top=20`);
          const protocol = targetUrl.protocol === "https:" ? https : http;
          return new Promise((resolve, reject) => {
            const options = {
              hostname: targetUrl.hostname,
              path: `${targetUrl.pathname}${targetUrl.search}`,
              method: "GET",
              headers: {
                APIKey: process.env.API_KEY,
                Accept: "application/json",
              },
            };
            const req = protocol.request(options, (res) => {
              const chunks = [];
              const encoding = res.headers["content-encoding"];
              const stream = encoding === "gzip" ? res.pipe(zlib.createGunzip())
                : encoding === "deflate" ? res.pipe(zlib.createInflate())
                : res;
              stream.on("data", (chunk) => chunks.push(chunk));
              stream.on("end", () => {
                const raw = Buffer.concat(chunks).toString("utf8");
                try {
                  resolve({ statusCode: res.statusCode, body: JSON.parse(raw) });
                } catch (e) {
                  resolve({ statusCode: res.statusCode, body: raw });
                }
              });
              stream.on("error", reject);
            });
            req.on("error", reject);
            req.end();
          });
        }
        module.exports = {
          main: async function (event, context) {
            const DESTINATION_NAME = "S4H_SANDBOX_BP";
            try {
              const token = await getDestinationToken();
              const destination = await getDestination(DESTINATION_NAME, token);
              const result = await fetchBusinessPartners(destination);
              return {
                statusCode: result.statusCode,
                headers: { "Content-Type": "application/json" },
                body: result.body,
              };
            } catch (err) {
              console.error("Error:", err.message);
              return {
                statusCode: 500,
                headers: { "Content-Type": "application/json" },
                body: { error: err.message },
              };
            }
          },
        };
  resourceConfiguration:
    function:
      resources:
        requests:
          cpu: 50m
          memory: 64Mi
        limits:
          cpu: 100m
          memory: 128Mi
  env:
    - name: CLIENTID
      valueFrom:
        secretKeyRef:
          name: destination-service-binding
          key: clientid
    - name: CLIENTSECRET
      valueFrom:
        secretKeyRef:
          name: destination-service-binding
          key: clientsecret
    - name: XSUAA_URL
      valueFrom:
        secretKeyRef:
          name: destination-service-binding
          key: url
    - name: DESTINATION_URI
      valueFrom:
        secretKeyRef:
          name: destination-service-binding
          key: uri
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: bp-function-apikey
          key: apikey</CODE></PRE><P>Apply it:</P><PRE><CODE>kubectl apply -f k8s/function.yaml
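# the first deployment can take a few minutes while the function image is built; wait for RUNNING: True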
kubectl get functions -n default</CODE></PRE><P>Wait for <CODE>RUNNING: True</CODE>.</P><HR /><H2 id="toc-hId--2096989">Step 8: Expose the Function with an APIRule</H2><P>Create <CODE>k8s/apirule.yaml</CODE> using the v2 API (v1beta1 is deprecated):</P><PRE><CODE>apiVersion: gateway.kyma-project.io/v2
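# v2 replaces the deprecated v1beta1 APIRule API; noAuth below leaves the endpoint public, acceptable here for testing only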
kind: APIRule
metadata:
  name: bp-function
  namespace: default
spec:
  gateway: kyma-system/kyma-gateway
  hosts:
    - bp-function.<your-cluster-domain>.kyma.ondemand.com
  rules:
    - methods:
        - GET
      noAuth: true
      path: /*
      service:
        name: bp-function
        port: 80</CODE></PRE><P>Get your cluster domain:</P><PRE><CODE>kubectl get gateway -n kyma-system kyma-gateway -o jsonpath='{.spec.servers[0].hosts[0]}'</CODE></PRE><P>Apply and check status:</P><PRE><CODE>kubectl apply -f k8s/apirule.yaml
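# re-run the status check until the APIRule reports Ready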
kubectl get apirule bp-function -n default -o jsonpath='{.status.state}'</CODE></PRE><HR /><H2 id="toc-hId-148643863">Step 9: Test It</H2><PRE><CODE>curl https://bp-function.<your-cluster-domain>.kyma.ondemand.com</CODE></PRE><P>You should get a JSON response containing Business Partner records from the SAP sandbox:</P><PRE><CODE>{
"statusCode": 200,
"body": {
"d": {
"results": [
{
"BusinessPartner": "1000037",
"BusinessPartnerFullName": "...",
...
}
]
}
}
}</CODE></PRE><HR /><H2 id="toc-hId--47869642">How It Works</H2><P>The flow is:</P><OL><LI>The function starts and reads XSUAA credentials from the Kubernetes secret injected by the BTP Service Operator</LI><LI>It calls the XSUAA token endpoint with client credentials to get a bearer token</LI><LI>It uses that token to call the BTP Destination Service and resolve the <CODE>S4H_SANDBOX_BP</CODE> destination — getting back the target URL and any stored properties</LI><LI>It calls the SAP API sandbox using the resolved URL, passing the API key as a header</LI><LI>The response is gzip-decoded (the sandbox always returns gzip) and returned as JSON</LI></OL><H3 id="toc-hId--537786154">A Note on Gzip</H3><P>The SAP API Hub sandbox always returns gzip-compressed responses. Node's <CODE>https</CODE> module does not automatically decompress these. The function handles this explicitly using Node's built-in <CODE>zlib</CODE> module, detecting the <CODE>content-encoding</CODE> response header and piping through <CODE>zlib.createGunzip()</CODE> or <CODE>zlib.createInflate()</CODE> as appropriate.</P><HR /><H2 id="toc-hId--440896652">Key Points</H2><P><STRONG>No external npm packages</STRONG> — the function uses only Node.js built-ins (<CODE>https</CODE>, <CODE>http</CODE>, <CODE>zlib</CODE>), keeping the deployment simple and fast.</P><P><STRONG>Credentials never touch code or git</STRONG> — XSUAA credentials come from the BTP Service Operator binding secret, the API key from a manually created Kubernetes secret. Neither appears in any YAML committed to source control.</P><P><STRONG>Destination Service as the integration layer</STRONG> — by resolving the destination at runtime rather than hardcoding the URL, you can swap the backend (sandbox vs. production, different regions) just by changing the BTP destination configuration, with no code changes.</P><P><STRONG>Istio sidecar is required</STRONG> — the Kyma APIRule validation requires the function pod to have the Istio sidecar injected. 
Make sure you label the namespace with <CODE>istio-injection=enabled</CODE> before deploying the function.</P><HR /><H2 id="toc-hId--637410157">What's Next</H2><P>From here you could extend this pattern to:</P><UL><LI>Query specific Business Partners by key</LI><LI>POST data back to the API</LI><LI>Chain multiple API calls within a single function</LI><LI>Trigger the function from a Kyma event rather than an HTTP call</LI><LI>Deploy to a production BTP account pointing at a real S/4HANA system rather than the sandbox</LI></UL><P>The same pattern — BTP Destination Service + Kyma ServiceBinding + serverless function — works for any SAP API on the Business Accelerator Hub, and for non-SAP APIs too.</P>2026-04-02T10:12:48.110000+02:00https://community.sap.com/t5/sap-for-utilities-blog-posts/the-utility-clean-core-blueprint-how-sap-btp-enables-modernization-without/ba-p/14366797The Utility Clean Core Blueprint: How SAP BTP Enables Modernization Without Disrupting Regulated Operations2026-04-07T13:31:11.299000+02:00Atul_Joshi85https://community.sap.com/t5/user/viewprofilepage/user-id/2274193<P> </P><H1 id="toc-hId-1664257766"><SPAN>The Utility Clean Core Blueprint: How SAP BTP Enables Modernization Without Disrupting</SPAN> Regulated Operations</H1><P><EM> </EM></P><P><EM>A White Paper by Atul Joshi </EM></P><P><EM> </EM></P><H1 id="toc-hId-1467744261">Executive Summary</H1><P>Utilities today face a paradox: they must modernize rapidly to support digital customer expectations, DER integration, and grid intelligence — yet they operate in one of the most regulated, risk‑averse environments in the world. Traditional SAP landscapes, especially SAP IS‑U, are burdened with decades of custom code, point‑to‑point integrations, and rigid processes that make transformation slow, expensive, and operationally risky.</P><P>This white paper introduces <STRONG>The Utility Clean Core Blueprint</STRONG>, a modernization model that enables utilities to evolve their SAP landscape without destabilizing mission-critical billing, metering, and customer operations. The blueprint leverages <STRONG>SAP Business Technology Platform (BTP)</STRONG> as the innovation and integration layer, allowing utilities to decouple custom logic, orchestrate processes, and adopt cloud capabilities — all while keeping the SAP core stable, compliant, and upgrade‑ready.</P><P> </P><H1>1. The Modernization Challenge in Regulated Utilities</H1><P> </P><P>Utilities operate under unique constraints:</P><UL><LI><STRONG>Regulatory oversight</STRONG> limits downtime and process changes.</LI><LI><STRONG>High‑volume billing cycles</STRONG> require predictable system performance.</LI><LI><STRONG>Legacy customizations</STRONG> make upgrades risky and expensive.</LI><LI><STRONG>Aging IS‑U systems</STRONG> cannot support modern digital experiences.</LI><LI><STRONG>Integration sprawl</STRONG> slows innovation and increases operational risk.</LI></UL><P>Most utilities want to move toward S/4HANA Utilities, but the path is blocked by:</P><UL><LI>10–20 years of Z‑code</LI><LI>tightly coupled interfaces</LI><LI>custom workflows embedded in IS‑U</LI><LI>inflexible monolithic architecture</LI></UL><P>This is where <STRONG>Clean Core</STRONG> becomes not just a best practice — but a survival strategy.</P><H1 id="toc-hId-1271230756">2. 
What Clean Core Really Means for Utilities</H1><P> </P><P>Clean Core is often misunderstood as “removing custom code.” For utilities, it means something deeper:</P><P><STRONG>Clean Core = A stable SAP transactional engine + an agile innovation layer on SAP BTP</STRONG></P><P>This approach allows:</P><UL><LI>predictable billing and metering operations</LI><LI>faster innovation cycles</LI><LI>safer upgrades</LI><LI>cloud‑ready architecture</LI><LI>regulatory compliance</LI></UL><P>In other words: <STRONG>Keep the core clean. Move the complexity out. Modernize without breaking operations.</STRONG></P><H1 id="toc-hId-1074717251">3. SAP BTP: The Enabler of Utility Modernization</H1><P> </P><P>SAP BTP provides the capabilities utilities need to modernize safely:</P><P><STRONG> </STRONG></P><P><STRONG>Key BTP Services for Utilities</STRONG></P><TABLE><TBODY><TR><TD><P><STRONG>Capability</STRONG></P></TD><TD><P><STRONG>BTP Service</STRONG></P></TD><TD><P><STRONG>Utility Value</STRONG></P></TD></TR><TR><TD><P><STRONG>Event-driven decoupling</STRONG></P></TD><TD><P><STRONG>Event Mesh</STRONG></P></TD><TD><P>Real-time meter events, outage events, billing triggers</P></TD></TR><TR><TD><P><STRONG>Integration modernization</STRONG></P></TD><TD><P><STRONG>Integration Suite</STRONG></P></TD><TD><P>Replace PI/PO, eliminate point-to-point interfaces</P></TD></TR><TR><TD><P><STRONG>Extension development</STRONG></P></TD><TD><P><STRONG>SAP CAP / Kyma</STRONG></P></TD><TD><P>Move Z‑logic out of IS‑U</P></TD></TR><TR><TD><P><STRONG>Workflow automation</STRONG></P></TD><TD><P><STRONG>BTP Workflow</STRONG></P></TD><TD><P>Field service approvals, credit workflows</P></TD></TR><TR><TD><P><STRONG>Data intelligence</STRONG></P></TD><TD><P><STRONG>HANA Cloud</STRONG></P></TD><TD><P>Meter analytics, billing insights</P></TD></TR><TR><TD><P><STRONG>Security & governance</STRONG></P></TD><TD><P><STRONG>IAS/IPS</STRONG></P></TD><TD><P>Unified identity for customer and employee apps</P></TD></TR></TBODY></TABLE><P>BTP becomes the <STRONG>innovation shell</STRONG> around the SAP core.</P><P><STRONG> </STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="atul_joshi85_0-1775508453976.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393846i4449917AB93FD3AF/image-size/medium?v=v2&px=400" role="button" title="atul_joshi85_0-1775508453976.png" alt="atul_joshi85_0-1775508453976.png" /></span></P><P> </P><H1 id="toc-hId-878203746">4. 
The Utility Clean Core Blueprint</H1><P> </P><P>The Utility Clean Core Blueprint consists of five practical steps:</P><H2 id="toc-hId-810772960">Step 1 — Stabilize the Core</H2><UL><LI>Freeze custom development in IS‑U</LI><LI>Identify Z‑objects that block upgrades</LI><LI>Classify custom code into: retire, refactor, or externalize</LI><LI>Clean up unused interfaces</LI></UL><H2 id="toc-hId-614259455">Step 2 — Externalize Custom Logic to BTP</H2><P> </P><P>Move the following out of IS‑U:</P><UL><LI>complex billing rules</LI><LI>validation logic</LI><LI>workflow approvals</LI><LI>customer-facing processes</LI><LI>meter data transformations</LI></UL><P>Use CAP, Node.js, or Java on BTP.</P><H2 id="toc-hId-417745950">Step 3 — Modernize Integrations</H2><P>Replace:</P><UL><LI>PI/PO mappings</LI><LI>RFC-based interfaces</LI><LI>batch jobs</LI></UL><P>With:</P><UL><LI>Event Mesh</LI><LI>Integration Suite</LI><LI>APIs</LI></UL><H2 id="toc-hId-221232445">Step 4 — Introduce Event-Driven Architecture</H2><P>Trigger events for:</P><UL><LI>meter reads</LI><LI>billing completion</LI><LI>move-in/move-out</LI><LI>credit actions</LI><LI>outage notifications</LI></UL><P>This decouples processes and reduces core load.</P><H2 id="toc-hId-24718940">Step 5 — Enable Coexistence with S/4HANA</H2><P>Run IS‑U + S/4HANA + BTP in hybrid mode:</P><UL><LI>IS‑U remains the billing engine</LI><LI>S/4 handles finance, procurement, asset management</LI><LI>BTP orchestrates innovation</LI></UL><P>This avoids a risky “big bang” migration.</P><H1 id="toc-hId-468862799">5. Practical Utility Case Studies (Realistic, Practitioner-Level)</H1><H3 id="toc-hId--314456720">Case Study 1: Reducing Billing Run Failures with BTP Event Mesh</H3><P>A North American utility struggled with billing run failures caused by custom validations embedded in IS‑U.</P><P><STRONG><U>Problem:</U></STRONG> Every enhancement point slowed down billing and caused unpredictable dumps. 
<STRONG><U>Solution:</U></STRONG></P><UL><LI>Move validation logic to a CAP service on BTP</LI><LI>Trigger validation via Event Mesh before billing</LI><LI>Return results via API</LI></UL><P><STRONG><U>Outcome:</U></STRONG></P><UL><LI>Billing runtime reduced by 27%</LI><LI>Zero dumps in 3 months</LI><LI>Clean Core achieved without touching core billing logic</LI></UL><H3 id="toc-hId--510970225"><STRONG>Case Study 2: </STRONG>Modernizing<STRONG> Move-In/Move-Out Without Touching IS‑U</STRONG></H3><P> </P><P>A regulated utility needed a digital customer onboarding experience.</P><P><STRONG>Problem:</STRONG> IS‑U move-in/out processes were too rigid and heavily customized.</P><P><STRONG>Solution:</STRONG></P><UL><LI>Build a customer onboarding app on BTP</LI><LI>Orchestrate workflow using BTP Workflow</LI><LI>Trigger IS‑U move-in via API only at the final step</LI></UL><P><STRONG>Outcome:</STRONG></P><UL><LI>Customer onboarding time reduced from 3 days to 30 minutes</LI><LI>No changes required in IS‑U core</LI><LI>Regulatory compliance maintained</LI></UL><P><STRONG> </STRONG></P><P><STRONG>Case Study 3: Eliminating 40+ Point-to-Point Integrations</STRONG></P><P>Problem: European utility had PI/PO interfaces that were expensive to maintain.</P><P><STRONG>Solution:</STRONG></P><UL><LI>Replace PI/PO with Integration Suite</LI><LI>Introduce canonical data models</LI><LI>Use Event Mesh for asynchronous processes</LI></UL><P><STRONG>Outcome:</STRONG></P><UL><LI>Integration failures reduced by 60%</LI><LI>Upgrade downtime reduced by 40%</LI><LI>IT operations cost reduced significantly</LI></UL><P><STRONG> </STRONG></P><H1 id="toc-hId--120677716">6. The Coexistence Architecture: IS‑U + S/4 + BTP</H1><P>Utilities cannot afford a big-bang migration. The coexistence model allows:</P><P><STRONG>IS‑U</STRONG></P><P>→ Billing, metering, device management</P><P><STRONG>S/4HANA</STRONG></P><P>→ Finance, procurement, asset management</P><P><STRONG>SAP BTP</STRONG></P><P>→ Innovation, integration, automation, analytics</P><P>This architecture supports:</P><UL><LI>gradual modernization</LI><LI>regulatory stability</LI><LI>continuous innovation</LI></UL><H1 id="toc-hId--317191221">7. Governance: The Missing Piece in Most Utility Programs</H1><P>Clean Core is not a technical exercise — it is a governance discipline.</P><P><STRONG>Key governance principles</STRONG></P><UL><LI>No custom code in the core</LI><LI>All new extensions must be BTP-first</LI><LI>Event-driven architecture preferred over batch</LI><LI>API-led integration</LI><LI>Quarterly review of Z‑objects</LI><LI>Architecture board approval for all enhancements</LI></UL><P>This is how utilities stay modern <EM>after</EM> the transformation.</P><H1 id="toc-hId--513704726">8. Conclusion: Modernize Without Disruption</H1><P>Utilities cannot pause operations to modernize. 
They need a blueprint that:</P><UL><LI>protects regulated processes</LI><LI>reduces technical debt</LI><LI>accelerates innovation</LI><LI>prepares for S/4HANA</LI><LI>supports the energy transition</LI></UL><P><STRONG> </STRONG></P><P><STRONG>The Utility Clean Core Blueprint</STRONG> provides exactly that.</P><P> </P><P>By using SAP BTP as the innovation layer, utilities can modernize safely, incrementally, and intelligently — without destabilizing the mission-critical systems that keep the lights on.</P><P> </P>2026-04-07T13:31:11.299000+02:00https://community.sap.com/t5/technology-blog-posts-by-sap/configure-kyma-to-work-with-akamai-cdn/ba-p/14372412Configure Kyma to work with Akamai CDN2026-04-21T08:38:31.650000+02:00mlukhttps://community.sap.com/t5/user/viewprofilepage/user-id/1601022<P><FONT face="arial,helvetica,sans-serif">The article describes how to configure SAP BTP, Kyma runtime to work with Content Delivery Network (CDN) such as Akamai CDN.</FONT></P><P><FONT face="arial,helvetica,sans-serif">A Content Delivery Network consists of multiple servers placed around the world. These servers, called edge servers, are responsible for caching website content and serving it to end users from the nearest location. The original content lives in the 'origin server', which is Kyma in our case.</FONT></P><P data-unlink="true"><FONT face="arial,helvetica,sans-serif">The configuration that defines how Akamai edge servers handle traffic for a given domain is called a 'Property'. It maps hostnames (for example, <A href="http://www.example.com" target="_blank" rel="noopener nofollow noreferrer">www.example.com</A>) to specific edge behaviors, such as caching rules, security settings, and origin server communication.</FONT></P><P><FONT face="arial,helvetica,sans-serif">Akamai edge servers also support TLS termination, so they act as endpoints for HTTPs connections and serve TLS certificates for external domains.</FONT></P><P><FONT face="arial,helvetica,sans-serif">A single Kyma instance can serve content for multiple domains (multiple Akamai Properties), which may then be redirected to any workloads. It is visualized in the following diagram:</FONT></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="cdn.drawio.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/398640iF75AA472A505A818/image-size/large?v=v2&px=999" role="button" title="cdn.drawio.png" alt="cdn.drawio.png" /></span> </P><P><FONT face="arial,helvetica,sans-serif">As we can see, there are two Akamai Properties, each with two host names. We want to redirect requests for all external host names to the same Kyma instance, ab1234.kyma.ondemand.com, with the IP address 203.0.113.50. At this point, we need a host name on the Kyma side that would accept the traffic. 
<SPAN>To keep things simple,</SPAN> we may use a subdomain of the Kyma cluster domain, like cdn.ab1234.kyma.ondemand.com, for which we can easily register a DNS entry and generate a TLS certificate.</FONT></P><P>In the Akamai Property, you need to configure a rule in the following way:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="akamai-property-config.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/397100i2838DDE4A73D4147/image-size/large?v=v2&px=999" role="button" title="akamai-property-config.png" alt="akamai-property-config.png" /></span></P><P>Let's now imagine that an end user makes a request to the domain <A href="http://www.foo.example.com" target="_blank" rel="noopener nofollow noreferrer">www.foo.example.com</A>, configured as in the above screenshot. In such a case, the Akamai edge server acts as a reverse proxy and contacts the Kyma origin server in the following way:</P><UL><LI>Connection is technically made to the origin server defined in the 'Origin Server Hostname', so the edge server resolves the DNS name <FONT face="arial,helvetica,sans-serif">cdn.ab1234.kyma.ondemand.com</FONT> and connects to the IP address 203.0.113.50.</LI><LI>In Layer 4, the SNI host name points to the external domain <A href="http://www.foo.example.com" target="_blank" rel="noopener nofollow noreferrer">www.foo.example.com</A>.</LI><LI>In Layer 7, the Host header also points to the external domain <A href="http://www.foo.example.com" target="_blank" rel="noopener nofollow noreferrer">www.foo.example.com</A>.</LI><LI>It is expected that the origin server provides a valid TLS certificate with a Common Name or Subject Alternative Name equal to the origin hostname, which is <FONT face="arial,helvetica,sans-serif">cdn.ab1234.kyma.ondemand.com</FONT>.</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="akamai2.drawio.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/398603i11011793FF256ABF/image-size/large?v=v2&px=999" role="button" title="akamai2.drawio.png" alt="akamai2.drawio.png" /></span></P><P>To accept such traffic on the Kyma side, we need a custom Istio Gateway:</P><UL><LI>With internal domain:<UL><LI>cdn.ab1234.kyma.ondemand.com</LI></UL></LI><LI>With external domains:<UL><LI>Either <A href="http://www.foo.example.com" target="_blank" rel="noopener nofollow noreferrer">www.foo.example.com</A>, api.foo.example.com, <A href="http://www.bar.example.com" target="_blank" rel="noopener nofollow noreferrer">www.bar.example.com</A>, <A href="http://www.bar.example.com" target="_blank" rel="noopener nofollow noreferrer">www.bar.example.com</A>, m.bar.example.com</LI><LI>Or *.foo.example.com, *.bar.example.com</LI><LI>Or *.example.com</LI></UL></LI><LI>Pointing to secret with a key and a certificate for internal domain cdn.ab1234.kyma.ondemand.com.</LI></UL><P>Note that the Gateway above doesn't need a TLS certificate for external domains, which is hosted by the CDN. The Akamai edge server expects the certificate to be valid for the origin hostname.</P><P>To create such a configuration, follow this example:</P><P>1. Create three resources: DNSEntry, Certificate, and Istio Gateway.</P><pre class="lia-code-sample language-bash"><code>kubectl create ns gateway-ns</code></pre><pre class="lia-code-sample language-yaml"><code>apiVersion: dns.gardener.cloud/v1alpha1
kind: DNSEntry
metadata:
  name: cdn-internal-domain-dnsentry
  namespace: gateway-ns
  annotations:
    dns.gardener.cloud/class: garden
spec:
  dnsName: "cdn.ab1234.kyma.ondemand.com"
  ttl: 600
  targets:
    - 203.0.113.50 # Load Balancer IP address</code></pre><pre class="lia-code-sample language-yaml"><code>apiVersion: cert.gardener.cloud/v1alpha1
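# Gardener-managed certificate for the internal origin host name; the resulting
# secret in istio-system is referenced below by the Gateway's credentialName.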
kind: Certificate
metadata:
  name: cdn-internal-domain-certificate
  namespace: istio-system
spec:
  secretName: cdn-internal-domain-certificate
  commonName: "cdn.ab1234.kyma.ondemand.com"
  issuerRef:
    name: garden</code></pre><pre class="lia-code-sample language-yaml"><code>apiVersion: networking.istio.io/v1alpha3
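# Custom Gateway: TLS is terminated with the internal-domain certificate, while
# the hosts list also admits the external domains forwarded by the Akamai edge.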
kind: Gateway
metadata:
  name: cdn-gateway
  namespace: gateway-ns
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: cdn-internal-domain-certificate
      hosts:
        - "cdn.ab1234.kyma.ondemand.com"
        - "*.example.com"</code></pre><P>2. Deploy an application.</P><pre class="lia-code-sample language-bash"><code>kubectl create ns application
kubectl label namespace application istio-injection=enabled --overwrite</code></pre><pre class="lia-code-sample language-yaml"><code>apiVersion: v1
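# Sample httpbin workload: ServiceAccount, a Service listening on port 8000,
# and a single-replica Deployment used as the target for the APIRules below.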
kind: ServiceAccount
metadata:
  name: httpbin
  namespace: application
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: application
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
    - name: http
      port: 8000
      targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
        - image: docker.io/kennethreitz/httpbin
          imagePullPolicy: IfNotPresent
          name: httpbin
          ports:
            - containerPort: 80</code></pre><P>3. Create an APIRule resource, which exposes the above application via the previously created Gateway and on a particular external domain:</P><pre class="lia-code-sample language-yaml"><code>apiVersion: gateway.kyma-project.io/v2
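# Exposes httpbin on the external domain www.foo.example.com through the custom
# cdn-gateway created above (instead of the default kyma-gateway).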
kind: APIRule
metadata:
  name: application-foo-external-domain
  namespace: application
spec:
  gateway: gateway-ns/cdn-gateway
  hosts:
    - "www.foo.example.com"
  rules:
    - methods:
        - GET
      noAuth: true
      path: /*
      service:
        name: httpbin
        port: 8000</code></pre><P> 4. For each external domain, create a separate APIRule resource: </P><pre class="lia-code-sample language-yaml"><code>apiVersion: gateway.kyma-project.io/v2
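# Same exposure for the second external domain handled by the CDN.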
kind: APIRule
metadata:
  name: application-bar-external-domain
  namespace: application
spec:
  gateway: gateway-ns/cdn-gateway
  hosts:
    - "www.bar.example.com"
  rules:
    - methods:
        - GET
      noAuth: true
      path: /*
      service:
        name: httpbin
        port: 8000</code></pre><P>5. To test this configuration, we need to simulate what the Akamai edge server does:</P><UL><LI>Make a TLS connection to the internal domain cdn.ab1234.kyma.ondemand.com</LI><LI>Use SNI host name pointing to external domain <A href="http://www.foo.example.com" target="_blank" rel="noopener nofollow noreferrer">www.foo.example.com</A></LI><LI>Provide an HTTP Host header pointing to the external domain <A href="http://www.foo.example.com" target="_blank" rel="noopener nofollow noreferrer">www.foo.example.com</A></LI></UL><P>You can use the following bash command:</P><pre class="lia-code-sample language-bash"><code>(
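# Raw HTTP/1.1 request with the Host header set to the external domain; openssl
# connects to the internal origin host but sends the external domain as SNI,
# which is exactly what the Akamai edge server does.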
echo -ne "GET /headers HTTP/1.1\r\n"
echo -ne "Host: www.foo.example.com\r\n"
echo -ne "Connection: close\r\n\r\n"
) | openssl s_client -connect "cdn.ab1234.kyma.ondemand.com:443" -servername "www.foo.example.com" -quiet</code></pre><P>6. If this works, your Kyma instance is prepared. Once the Akamai Property configuration is applied, try to make a request via the external domain:</P><pre class="lia-code-sample language-bash"><code>curl "https://www.foo.example.com/headers"</code></pre><P> </P>2026-04-21T08:38:31.650000+02:00