https://raw.githubusercontent.com/ajmaradiaga/feeds/main/scmt/topics/SAP-HANA-blog-posts.xml SAP Community - SAP HANA 2026-04-10T14:01:09.336002+00:00 python-feedgen SAP HANA blog posts in SAP Community https://community.sap.com/t5/technology-blog-posts-by-members/automated-test-suite-rhel-ha-solution-for-sap-s-4hana-ensa-2-using-ansible/ba-p/14336484 Automated Test Suite: RHEL HA Solution for SAP S/4HANA ENSA 2 using Ansible 2026-02-25T13:33:15.204000+01:00 Prerna_Mohta29 https://community.sap.com/t5/user/viewprofilepage/user-id/43696 <H2 id="toc-hId-1790567005"><SPAN>Introduction</SPAN></H2><P><SPAN>Testing of High Availability Clusters used for managing SAP S/4HANA application servers is critical for ensuring their reliability and resilience. Many issues can be avoided in the first place during the pre-go-live stage by carrying out certain tests.</SPAN></P><P><SPAN>While manual verification of an HA cluster setup is possible, these tests can also be automated using Ansible. Developing custom roles and playbooks allows for a modular, reusable, and scalable design to automate each test case. This approach is particularly effective for tests like </SPAN><STRONG>simulating a node failure to verify failover and resource migration</STRONG><SPAN>.</SPAN></P><P><SPAN>Manual testing becomes tedious, time-consuming, and error-prone, especially in HA cluster environments where multiple nodes must be constantly observed during every test.</SPAN></P><H1 id="toc-hId-1464970781"><SPAN>What is the Automated Test suite?</SPAN></H1><P><SPAN>This Automated Test suite boosts quality assurance efficiency, test coverage, and accelerates bug reproduction/verification and post-maintenance, leading to faster production-ready RHEL HA cluster environments for ENSA 2.</SPAN></P><P><SPAN>The Automated Test suite requires a dedicated, out-of-cluster RHEL Ansible Control node to execute and monitor tests—including failovers and node crashes—on the SAP HA cluster. Using Ansible makes the tests transparent and easy to understand, and its built-in plugins allow saving logs for future review and audits.</SPAN></P><H1 id="toc-hId-1268457276"><SPAN>Which tests are performed?</SPAN></H1><P><SPAN>This automation test suite also covers the tests for the SAP HA Interface for SAP application servers.&nbsp;</SPAN><SPAN>Each test carries out specific tasks to verify the cluster configuration and current state, conducts the actual test, verifies the output, and compares it with the expected results.&nbsp;</SPAN></P><P><SPAN>The following table outlines what the test suite actually verifies on the systems being tested:</SPAN></P><TABLE><TBODY><TR><TD><H3 id="toc-hId-1330109209"><STRONG>Test Name</STRONG></H3></TD><TD><H3 id="toc-hId-1133595704"><STRONG>Test Title</STRONG></H3></TD><TD><H3 id="toc-hId-937082199"><STRONG>Minimum no. 
of HA Nodes</STRONG></H3></TD></TR><TR><TD><P><SPAN>Test01</SPAN></P></TD><TD><P><SPAN>Name and version of HA software</SPAN></P></TD><TD><P><SPAN>2</SPAN></P></TD></TR><TR><TD><P><SPAN>Test02</SPAN></P></TD><TD><P><SPAN>HA configuration showing no errors</SPAN></P></TD><TD><P><SPAN>2</SPAN></P></TD></TR><TR><TD><P><SPAN>Test03</SPAN></P></TD><TD><P><SPAN>Shared Library (HA-Interface) loads without any errors</SPAN></P></TD><TD><P><SPAN>2</SPAN></P></TD></TR><TR><TD><P><SPAN>Test04</SPAN></P></TD><TD><P><SPAN>Manual move of ASCS works correctly with lock data</SPAN></P></TD><TD><P><SPAN>2</SPAN></P></TD></TR><TR><TD><P><SPAN>Test05</SPAN></P></TD><TD><P><SPAN>Irrecoverable outage of the Enqueue Server (ES) 2 is handled correctly</SPAN></P></TD><TD><P><SPAN>2</SPAN></P></TD></TR><TR><TD><P><SPAN>Test06</SPAN></P></TD><TD><P><SPAN>Outage of the Enqueue Replicator (ER) 2 is handled correctly</SPAN></P></TD><TD><P><SPAN>&gt;2</SPAN></P></TD></TR><TR><TD><P><SPAN>Test07</SPAN></P></TD><TD><P><SPAN>ASCS moves correctly under load without specifying the destination node (</SPAN><STRONG>Skipped in the current version</STRONG><SPAN>)</SPAN></P></TD><TD><P><SPAN>&gt;2</SPAN></P></TD></TR><TR><TD><P><SPAN>Test08</SPAN></P></TD><TD><P><SPAN>ASCS moves correctly in case of hardware or OS failure (</SPAN><STRONG>Note: Node crash execution</STRONG><SPAN>)</SPAN></P></TD><TD><P><SPAN>&gt;2</SPAN></P></TD></TR><TR><TD><P><SPAN>Test09</SPAN></P></TD><TD><P><SPAN>Recoverable outage of the Message Server is handled correctly (if the SAP Profile Parameter “Restart_Program” is used for the Message Server) (</SPAN><STRONG>Note: Backup and Auto-config)</STRONG></P></TD><TD><P><SPAN>&gt;2</SPAN></P></TD></TR><TR><TD><P><SPAN>Test10</SPAN></P></TD><TD><P><SPAN>Irrecoverable outage of the Message Server is handled correctly (if the SAP Profile Parameter “Start_Program” is used for the Message Server) (</SPAN><STRONG>Note: Backup and Auto-config)</STRONG></P></TD><TD><P><SPAN>&gt;2</SPAN></P></TD></TR></TBODY></TABLE><P><SPAN>Please note that </SPAN><FONT color="#339966"><SPAN>test07</SPAN></FONT><SPAN> is skipped in the current version and may be implemented and updated in future releases. Additional test cases are planned for inclusion in future releases to enhance coverage.</SPAN></P><H2 id="toc-hId-611485975"><SPAN>Prerequisites</SPAN></H2><P><SPAN>1. A minimum of 2 node cluster configuration with ASCS and ERS resources to start with the initial tests, but 3 nodes are recommended. </SPAN><SPAN><BR /></SPAN><SPAN>Refer to the following document for the guidelines to configure the cluster that can be tested with the Ansible playbooks described in this blog: </SPAN><A href="https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_ha_clusters_to_manage_sap_netweaver_or_sap_s4hana_application_server_instances_using_the_rhel_ha_add-on/index" target="_blank" rel="noopener nofollow noreferrer"><SPAN>Configuring HA clusters to manage SAP NetWeaver or SAP S/4HANA Application server instances using the RHEL HA Add-On | Red Hat Enterprise Linux for SAP Solutions</SPAN></A><SPAN> (Only ENSA2).&nbsp;</SPAN><SPAN>Y</SPAN>our pacemaker cluster should look like the following:</P><pre class="lia-code-sample language-python"><code>[root@s4node01: ~]# pcs cluster status Cluster Status: ..... 
Node List: * Online: [ s4node01 s4node02 s4node03 ] PCSD Status: s4node01: Online s4node03: Online s4node02: Online [root@s4node01: ~]# pcs resource status …… * Resource Group: s4h_ASCS20_group: * s4h_lvm_ascs20 (ocf:heartbeat:LVM-activate): Started s4node03 * s4h_fs_ascs20 (ocf:heartbeat:Filesystem): Started s4node03 * s4h_vip_ascs20 (ocf:heartbeat:IPaddr2): Started s4node03 * s4h_ascs20 (ocf:heartbeat:SAPInstance): Started s4node03 * Resource Group: s4h_ERS29_group: * s4h_lvm_ers29 (ocf:heartbeat:LVM-activate): Started s4node01 * s4h_fs_ers29 (ocf:heartbeat:Filesystem): Started s4node01 * s4h_vip_ers29 (ocf:heartbeat:IPaddr2): Started s4node01 * s4h_ers29 (ocf:heartbeat:SAPInstance): Started s4node01</code></pre><P><SPAN>This is also a precondition before running any test.</SPAN></P><P><SPAN>2. Ensure that the SAP HA Interface for SAP ABAP application server instances is configured as described here: </SPAN><A href="https://access.redhat.com/solutions/3606101" target="_blank" rel="noopener nofollow noreferrer"><SPAN>How to enable the SAP HA Interface for SAP ABAP application server instances managed by the RHEL HA Add-On? - Red Hat Customer Portal.</SPAN></A></P><P><SPAN>3. Ensure that the </SPAN><A href="https://docs.galaxy.saponrhel.org/collections/sap/sap_operations/index.html" target="_blank" rel="noopener nofollow noreferrer"><SPAN>sap.sap_operations</SPAN></A><SPAN> collection, which provides the </SPAN><A href="https://docs.galaxy.saponrhel.org/collections/sap/sap_operations/host_info_module.html#ansible-collections-sap-sap-operations-host-info-module" target="_blank" rel="noopener nofollow noreferrer"><SPAN>host_info</SPAN></A><SPAN> and </SPAN><A href="https://docs.galaxy.saponrhel.org/collections/sap/sap_operations/pcs_status_info_module.html#ansible-collections-sap-sap-operations-pcs-status-info-module" target="_blank" rel="noopener nofollow noreferrer"><SPAN>pcs_status_info</SPAN></A><SPAN> modules, is installed on the Ansible control node.</SPAN></P><H2 id="toc-hId-414972470"><SPAN>Getting started</SPAN></H2><P><SPAN>1. Clone the </SPAN><A href="https://github.com/sap-linuxlab/community.sap_ha_cluster_qa" target="_blank" rel="noopener nofollow noreferrer"><SPAN>community.sap_ha_cluster_qa</SPAN></A><SPAN> repository and change into that directory:</SPAN></P><pre class="lia-code-sample language-python"><code># git clone https://github.com/sap-linuxlab/community.sap_ha_cluster_qa.git # cd community.sap_ha_cluster_qa</code></pre><P><SPAN>2. Verify that the ansible.cfg, inventory, and playbook files match your specific Ansible environment.</SPAN></P><P><SPAN>3. Make sure that the inventory file contains at least the hostnames of all the reachable cluster nodes that you want to test:</SPAN></P><P><SPAN>For example:</SPAN></P><pre class="lia-code-sample language-python"><code># cat tests/inventory/x86_64.yml --- all: children: s4hana-3n: hosts: s4node01: s4node02: s4node03:</code></pre><P><SPAN>4.
Ensure that the nodes are reachable via ansible ping, for example:</SPAN></P><pre class="lia-code-sample language-python"><code># ansible all -m ping -i tests/inventory/x86_64.yml s4node01 | SUCCESS =&gt; { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } s4node02 | SUCCESS =&gt; { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } s4node03 | SUCCESS =&gt; { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" }</code></pre><P>&nbsp;</P><H2 id="toc-hId-218458965"><SPAN>How to run the tests</SPAN></H2><P><SPAN>1. While in the </SPAN><A href="https://github.com/sap-linuxlab/community.sap_ha_cluster_qa" target="_blank" rel="noopener nofollow noreferrer"><SPAN>community.sap_ha_cluster_qa</SPAN></A><SPAN> directory run the playbook as follows for test01 to verify name and version of HA software.</SPAN></P><pre class="lia-code-sample language-python"><code># ansible-playbook -i tests/inventory/x86_64.yml ansible_collections/sap/cluster_qa/playbooks/test01.yml -v PLAY [Playbook to run test01 test case on ASCS and ERS instances] *************************************************************************** TASK [Collect necessary gather_facts] *************************************************************************** ok: [s4node01] ok: [s4node03] ok: [s4node02] ...... TASK [sap.cluster_qa.test01 : Print test results completing test01 test case for current instance] *************************************************************************** ok: [s4node02] =&gt; { "msg": { "changed": false, "failed": false, "ha_get_failoverconfig_info": { "HAActive": true, "HAActiveNode": "s4node02", "HADocumentation": "https://github.com/ClusterLabs/sap_cluster_connector", "HANodes": "", "HAProductVersion": "Pacemaker", "HASAPInterfaceVersion": "sap_cluster_connector" } } } ...... TASK [sap.cluster_qa.test01 : Print test results completing test01 test case for current instance] *************************************************************************** ok: [s4node03] =&gt; { "msg": { "changed": false, "failed": false, "ha_get_failoverconfig_info": { "HAActive": true, "HAActiveNode": "s4node03", "HADocumentation": "https://github.com/ClusterLabs/sap_cluster_connector", "HANodes": "", "HAProductVersion": "Pacemaker", "HASAPInterfaceVersion": "sap_cluster_connector" } } }</code></pre><P><BR /><SPAN>2. Similarly, you can run playbooks for the next test case by replacing the playbook. 
In this case test02.yml to verify HA configuration shows no errors</SPAN></P><pre class="lia-code-sample language-python"><code># ansible-playbook -i tests/inventory/x86_64.yml ./ansible_collections/sap/cluster_qa/playbooks/test02.yml -v ……… TASK [sap.cluster_qa.test02 : Print the results completing TEST02 test case for ERS node] *************************************************************************** skipping: [s4node02] =&gt; {} skipping: [s4node03] =&gt; {} ok: [s4node01] =&gt; { "msg": { "changed": false, "failed": false, "ha_check_config_info": [ { "category": "SAPControl-SAP-CONFIGURATION", "comment": "0 ABAP instances detected", "description": "Redundant ABAP instance configuration", "state": "SAPControl-HA-SUCCESS" }, { "category": "SAPControl-SAP-CONFIGURATION", "comment": "All Enqueue server separated from application server", "description": "Enqueue separation", "state": "SAPControl-HA-SUCCESS" }, { "category": "SAPControl-SAP-CONFIGURATION", "comment": "All MessageServer separated from application server", "description": "MessageServer separation", "state": "SAPControl-HA-SUCCESS" }, { "category": "SAPControl-SAP-STATE", "comment": "SCS instance status ok", "description": "SCS instance running", "state": "SAPControl-HA-SUCCESS" }, { "category": "SAPControl-SAP-CONFIGURATION", "comment": "SAPInstance includes is-ers patch", "description": "SAPInstance RA sufficient version (s4ascs_S4H_20)", "state": "SAPControl-HA-SUCCESS" }, { "category": "SAPControl-SAP-CONFIGURATION", "comment": "Enqueue replication enabled", "description": "Enqueue replication (s4ascs_S4H_20)", "state": "SAPControl-HA-SUCCESS" }, { "category": "SAPControl-SAP-STATE", "comment": "Enqueue replication active", "description": "Enqueue replication state (s4ascs_S4H_20)", "state": "SAPControl-HA-SUCCESS" }, { "category": "SAPControl-SAP-CONFIGURATION", "comment": "SAPInstance includes is-ers patch", "description": "SAPInstance RA sufficient version (s4ers_S4H_29)", "state": "SAPControl-HA-SUCCESS" } ] } } PLAY RECAP *************************************************************************** s4node01 : ok=32 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 s4node02 : ok=30 changed=0 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 s4node03 : ok=32 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 </code></pre><P><SPAN>3. Similarly, you can run the tests for each test case by replacing the command with the corresponding test03 playbook as shown below. Test03 verifies the shared Library (HA-Interface) is being loaded without any errors.&nbsp;</SPAN></P><pre class="lia-code-sample language-python"><code># ansible-playbook -i tests/inventory/x86_64.yml ./ansible_collections/sap/cluster_qa/playbooks/test03.yml -v ....... 
TASK [sap.cluster_qa.test03 : Printing SAP HA trace logs for ERS instance] *************************************************************************** ok: [s4node01] =&gt; { "msg": [ "SAP HA Trace: Thu Jan 22 15:59:09 2026", "SAP HA Trace: --- SAP_HA_FindSAPInstance Exit-Code: SAP_HA_OK ---", "SAP HA Trace: Thu Jan 22 15:59:09 2026", "SAP HA Trace: === SAP_HA_StartCluster ===", "SAP HA Trace: Fire system command /usr/bin/sap_cluster_connector cpa ...", "SAP HA Trace: SAP_HA_StartCluster: FOUND PENDING ACTION -&gt; SAP_HA_START_IN_PROGRESS", "SAP HA Trace: Thu Jan 22 15:59:09 2026", "SAP HA Trace: --- SAP_HA_StartCluster Exit-Code: SAP_HA_START_IN_PROGRESS ---", "SAP HA Trace: Mon Jan 26 13:35:13 2026", "SAP HA Trace: === SAP_HA_FindSAPInstance ===", "SAP HA Trace: Fire system command /usr/bin/sap_cluster_connector lsr ...", "SAP HA Trace: searchClusterFile: S4H:29 found", "SAP HA Trace: Mon Jan 26 13:35:13 2026", "SAP HA Trace: --- SAP_HA_FindSAPInstance Exit-Code: SAP_HA_OK ---", "SAP HA Trace: Mon Jan 26 13:35:13 2026", "SAP HA Trace: === SAP_HA_StopCluster ===", "SAP HA Trace: Fire system command /usr/bin/sap_cluster_connector cpa ...", "SAP HA Trace: SAP_HA_StopCluster: FOUND PENDING ACTION -&gt; SAP_HA_STOP_IN_PROGRESS", "SAP HA Trace: Mon Jan 26 13:35:13 2026", "SAP HA Trace: --- SAP_HA_StopCluster Exit-Code: SAP_HA_STOP_IN_PROGRESS ---", "SAP HA Trace: Mon Jan 26 13:35:17 2026", "SAP HA Trace: === SAP_HA_FindSAPInstance ===", "SAP HA Trace: Fire system command /usr/bin/sap_cluster_connector lsr ...", "SAP HA Trace: searchClusterFile: S4H:29 found", "SAP HA Trace: Mon Jan 26 13:35:17 2026", "SAP HA Trace: --- SAP_HA_FindSAPInstance Exit-Code: SAP_HA_OK ---", "SAP HA Trace: Mon Jan 26 13:35:17 2026", "SAP HA Trace: === SAP_HA_StartCluster ===", "SAP HA Trace: Fire system command /usr/bin/sap_cluster_connector cpa ...", "SAP HA Trace: SAP_HA_StartCluster: FOUND PENDING ACTION -&gt; SAP_HA_START_IN_PROGRESS", "SAP HA Trace: Mon Jan 26 13:35:17 2026", "SAP HA Trace: --- SAP_HA_StartCluster Exit-Code: SAP_HA_START_IN_PROGRESS ---", "SAP HA Trace: Mon Jan 26 13:36:20 2026", "SAP HA Trace: === SAP_HA_CheckConfig ===", "SAP HA Trace: Fire system command /usr/bin/sap_cluster_connector hcc ...", "SAP HA Trace: Mon Jan 26 13:36:20 2026", "SAP HA Trace: --- SAP_HA_CheckConfig Exit-Code: SAP_HA_OK ---", "SAP HA Trace: Mon Jan 26 13:36:20 2026", "SAP HA Trace: === SAP_HA_FreeConfigCheck ===", "SAP HA Trace: Mon Jan 26 13:36:20 2026", "SAP HA Trace: --- SAP_HA_FreeConfigCheck Exit-Code: SAP_HA_OK ---", "SAP HA Trace: Mon Jan 26 13:36:21 2026", "SAP HA Trace: === SAP_HA_CheckConfig ===", "SAP HA Trace: Fire system command /usr/bin/sap_cluster_connector hcc ...", "SAP HA Trace: Mon Jan 26 13:36:21 2026", "SAP HA Trace: --- SAP_HA_CheckConfig Exit-Code: SAP_HA_OK ---", "SAP HA Trace: Mon Jan 26 13:36:21 2026", "SAP HA Trace: === SAP_HA_FreeConfigCheck ===", "SAP HA Trace: Mon Jan 26 13:36:21 2026", "SAP HA Trace: --- SAP_HA_FreeConfigCheck Exit-Code: SAP_HA_OK ---" ] } skipping: [s4node02] =&gt; {} skipping: [s4node03] =&gt; {} PLAY RECAP *************************************************************************** s4node01 : ok=32 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 s4node02 : ok=30 changed=0 unreachable=0 failed=0 skipped=4 rescued=0 ignored=0 s4node03 : ok=32 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0</code></pre><P><SPAN>4. 
Test04 for verifying manual ASCS move:</SPAN></P><pre class="lia-code-sample language-abap"><code># ansible-playbook -i tests/inventory/x86_64.yml ./ansible_collections/sap/cluster_qa/playbooks/test04.yml -v ...... TASK [sap.cluster_qa.test04 : Asserting the locks of ASCS and ERS after move completing the TEST04 Test Case] *************************************************************************** ok: [s4node01] =&gt; { "changed": false, "msg": "All assertions passed" } ok: [s4node02] =&gt; { "changed": false, "msg": "All assertions passed" } ok: [s4node03] =&gt; { "changed": false, "msg": "All assertions passed" } PLAY RECAP *************************************************************************** s4node01 : ok=69 changed=1 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 s4node02 : ok=66 changed=0 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 s4node03 : ok=70 changed=2 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0 </code></pre><P><SPAN>Lock table data is compared and asserted 3 times in this test run: ASCS and ERS before move, ASCS before and after move, ASCS and ERS after move.&nbsp;</SPAN></P><P><BR /><SPAN>5. Test05 for verifying irrecoverable outage of the Enqueue Server (ES) 2 is handled correctly.</SPAN></P><pre class="lia-code-sample language-python"><code># ansible-playbook -i tests/inventory/x86_64.yml ./ansible_collections/sap/cluster_qa/playbooks/test05.yml -v ...... TASK [sap.cluster_qa.test05 : Verifying ASCS not on the same node] *************************************************************************** ok: [s4node01] =&gt; { "changed": false, "msg": "All assertions passed" } ok: [s4node02] =&gt; { "changed": false, "msg": "All assertions passed" } ok: [s4node03] =&gt; { "changed": false, "msg": "All assertions passed" } PLAY RECAP *************************************************************************** s4node01 : ok=38 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 s4node02 : ok=38 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 s4node03 : ok=36 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0</code></pre><P><BR /><SPAN>6. Test06 for verifying outage of the Enqueue Replicator 2 is handled correctly</SPAN></P><pre class="lia-code-sample language-python"><code># ansible-playbook -i tests/inventory/x86_64.yml ./ansible_collections/sap/cluster_qa/playbooks/test06.yml -v ...... TASK [sap.cluster_qa.test06 : Verifying ERS not on the same node as ASCS] *************************************************************************** ok: [s4node01] =&gt; { "changed": false, "msg": "ERS successfully moved to s4node01, different from ASCS node s4node02" } ok: [s4node02] =&gt; { "changed": false, "msg": "ERS successfully moved to s4node01, different from ASCS node s4node02" } ok: [s4node03] =&gt; { "changed": false, "msg": "ERS successfully moved to s4node01, different from ASCS node s4node02" } PLAY RECAP *************************************************************************** s4node01 : ok=53 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 s4node02 : ok=49 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 s4node03 : ok=49 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0</code></pre><P><SPAN>7. Test07 (skipped in the current version)</SPAN></P><P><SPAN>8. Test08 to verify ASCS moves correctly in case of hardware or OS failure. 
Please note that in this test the primary ASCS node will be crashed using the “</SPAN><FONT color="#339966"><SPAN>echo c &gt; /proc/sysrq-trigger</SPAN></FONT><SPAN>” command.</SPAN></P><pre class="lia-code-sample language-python"><code># ansible-playbook -i tests/inventory/x86_64.yml ./ansible_collections/sap/cluster_qa/playbooks/test08.yml -v ...... TASK [sap.cluster_qa.test08 : Verifying ASCS not on the same node] *************************************************************************** ok: [s4node01] =&gt; { "changed": false, "msg": "ASCS successfully moved from s4node03 to s4node02" } PLAY RECAP *************************************************************************** s4node01 : ok=40 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 s4node02 : ok=33 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 s4node03 : ok=18 changed=1 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0</code></pre><P><SPAN>&nbsp;9. Test09: Recoverable outage of the Message Server is handled correctly (if the SAP Profile Parameter “Restart_Program” is used for the Message Server). Note that this test will check for the “Restart_Program” parameter in the instance profile. If not found, it will be inserted, and the instances will restart before performing the actual test.</SPAN></P><pre class="lia-code-sample language-python"><code># ansible-playbook -i tests/inventory/x86_64.yml ./ansible_collections/sap/cluster_qa/playbooks/test09.yml -v ...... TASK [sap.cluster_qa.test09 : Display test summary] *************************************************************************** ok: [s4node01] =&gt; { "msg": [ "===============================================", " TEST09 SUMMARY", "===============================================", "Message Server killed: 6 times", "Initial ASCS location: s4node02", "Final ASCS location: s4node02", "Initial ERS location: s4node01", "Final ERS location: s4node01", "HA Action taken: ASCS Restart on same node", "ASCS/ERS separation maintained: YES", "===============================================" ] } PLAY RECAP ********************************************************************* s4node01 : ok=105 changed=1 unreachable=0 failed=0 skipped=77 rescued=0 ignored=0 s4node02 : ok=123 changed=6 unreachable=0 failed=0 skipped=22 rescued=0 ignored=0 s4node03 : ok=81 changed=0 unreachable=0 failed=0 skipped=64 rescued=0 ignored=0</code></pre><P><SPAN>Please note that this test may take approximately 7 to 10 minutes, since the Message Server is killed repeatedly up to 6 times or until the HA software intervenes to perform a failover.</SPAN></P><P><SPAN>10. Test10: Irrecoverable outage of the Message Server is handled correctly (if the SAP Profile Parameter “Start_Program” is used for the Message Server). Note that this test will check for the “Start_Program” parameter in the instance profile. If not found, it will be inserted, and the instances will restart before performing the actual test.</SPAN></P><pre class="lia-code-sample language-python"><code># ansible-playbook -i tests/inventory/x86_64.yml ./ansible_collections/sap/cluster_qa/playbooks/test10.yml -v ......
TASK [sap.cluster_qa.test10 : Display test summary] *************************************************************************** ok: [s4hana17] =&gt; { "msg": [ "===============================================", " TEST10 SUMMARY", "===============================================", "Test: Irrecoverable Message Server outage", "Profile Parameter: Start_Program (not Restart_Program)", "Message Server killed: YES", "Initial ASCS location: s4hana19", "Final ASCS location: s4hana18", "Initial ERS location: s4hana17", "Final ERS location: s4hana17", "HA Action taken: ASCS Failover", "ASCS/ERS separation maintained: YES", "===============================================" ] } PLAY RECAP *************************************************************************** s4hana17 : ok=136 changed=4 unreachable=0 failed=0 skipped=10 rescued=0 ignored=0 s4hana18 : ok=119 changed=0 unreachable=0 failed=0 skipped=6 rescued=0 ignored=0 s4hana19 : ok=124 changed=1 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0 rescued=0 ignored=0</code></pre><P><SPAN>Refer to the </SPAN><SPAN>README.md</SPAN><SPAN> file of each test role for more details about the tests. The </SPAN><FONT color="#339966"><SPAN>README.md</SPAN></FONT><SPAN> for each test role can be found in the following location of the same directory:&nbsp; </SPAN><FONT color="#339966"><SPAN>ansible_collections/sap/cluster_qa/roles/&lt;test-name&gt;/README.md<FONT color="#000000">.</FONT></SPAN></FONT></P><P><FONT color="#000000"><SPAN><STRONG>Authors: Amir Memon</STRONG></SPAN></FONT></P> 2026-02-25T13:33:15.204000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/sap-hana-system-replication-takeover-and-client-connectivity/ba-p/14326275 SAP HANA System Replication Takeover and Client Connectivity 2026-02-25T18:00:00.017000+01:00 HakanHaslaman https://community.sap.com/t5/user/viewprofilepage/user-id/185386 <DIV class=""><DIV class=""><DIV class=""><DIV class=""><DIV class=""><DIV class=""><P><U>Documented Scope and Architectural Responsibilities</U><BR /><BR /><BR /><STRONG>Introduction</STRONG><BR />This article summarizes publicly available SAP documentation and reflects the documented behavior of SAP HANA System Replication at the time of writing.</P><P>SAP HANA System Replication is widely used in productive landscapes to improve availability and to enable takeover to a secondary system.<BR /><BR />In operational environments it may be observed that after a takeover the database role changes while application connectivity depends on the surrounding client access path.</P><P>This article does not provide configuration or implementation guidance.<BR />Instead, it consolidates SAP documentation to clarify the documented scope of System Replication and the responsibilities described in related high-availability integration documentation.</P><P>The purpose is to align operational expectations with documented behavior.<BR /><BR /><BR /><STRONG>Documented Purpose of System Replication<BR /></STRONG>According to SAP documentation, SAP HANA System Replication continuously transfers database changes from a primary system to a secondary system and supports a role change (takeover).<BR /><BR /><SPAN class="">SAP </SPAN><SPAN class="">documentation </SPAN><SPAN class="">describes </SPAN><SPAN class="">different </SPAN><SPAN class="">consistency </SPAN><SPAN class="">and </SPAN><SPAN class="">data-</SPAN><SPAN class="">loss </SPAN><SPAN class="">characteristics </SPAN><SPAN class="">depending </SPAN><SPAN class="">on </SPAN><SPAN class="">the 
</SPAN><SPAN class="">selected </SPAN><SPAN class="">replication </SPAN><SPAN class="">mode.</SPAN></P><P><STRONG>Sources<BR /></STRONG>SAP HANA System Replication Guide<BR /><A class="" href="https://help.sap.com/doc/c81e9406d08046c0a118c8bef71f6bdc/2.0.07/en-US/SAP_HANA_System_Replication_Guide_en.pdf" target="_new" rel="noopener noreferrer">https://help.sap.com/doc/c81e9406d08046c0a118c8bef71f6bdc/2.0.07/en-US/SAP_HANA_System_Replication_Guide_en.pdf</A><BR />SAP Help Portal – System Replication Overview<BR /><A class="" href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/676844172c2442f0bf6c8b080db05ae7.html" target="_new" rel="noopener noreferrer">https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/676844172c2442f0bf6c8b080db05ae7.html</A></P><P>After takeover, the secondary system becomes the new primary database system from the database perspective.<BR />The documentation therefore describes System Replication primarily in terms of database availability and data consistency.<BR /><BR /><BR /><STRONG>Documented Behavior During Takeover<BR /></STRONG>During a takeover, SAP HANA changes the database role of the participating systems.</P><P>According to SAP documentation, once the takeover command completes, the former secondary system becomes the new active primary system. The documentation further describes that the system replays the last transaction logs and starts to accept queries.<BR /><BR />The System Replication Guide documents the database role change and service startup behavior.<BR />The documentation describes database state and service availability, while network routing and client access handling are described separately in high-availability integration documentation.<BR /><BR /><BR /><STRONG>Actions After Takeover in HA Integration Documentation<BR /></STRONG>SAP documentation describes integration mechanisms that allow external components to react to a role change.</P><P>SAP HANA provides a HA/DR provider hook that enables external high-availability components to execute actions after a takeover.</P><P>Documented examples include service endpoint switching or cluster-related actions.</P><P><STRONG>Sources<BR /></STRONG>Implementing SAP HANA HA/DR Providers<BR /><A class="" href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/1200ab8ef0c54c54be2d0e7f5327f7ed.html" target="_new" rel="noopener noreferrer">https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/1200ab8ef0c54c54be2d0e7f5327f7ed.html</A><BR />Installing and Configuring HA/DR Provider Hook<BR /><A class="" href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/2962efcfdd6740689d5705bdabe9a2d5.html" target="_new" rel="noopener noreferrer">https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/2962efcfdd6740689d5705bdabe9a2d5.html</A></P><P>The HA/DR integration documentation describes coordination with external high-availability mechanisms following a database role change.<BR /><BR /><BR /><STRONG>Database Role and Client Connectivity in Documented Architecture<BR /></STRONG>System Replication changes the database role.<BR />Client connectivity depends on the access path used by the application.</P><P>Typical environments may use infrastructure components such as:</P><UL><LI><SPAN class="">virtual </SPAN><SPAN class="">IP </SPAN><SPAN class="">redirection</SPAN></LI><LI><SPAN class="">DNS-</SPAN><SPAN class="">based </SPAN><SPAN 
class="">redirection</SPAN></LI><LI><SPAN class="">cluster </SPAN><SPAN class="">management </SPAN><SPAN class="">software</SPAN></LI><LI><SPAN class="">application-</SPAN><SPAN class="">server-</SPAN><SPAN class="">side </SPAN><SPAN class="">failover </SPAN><SPAN class="">configuration</SPAN></LI></UL><P>SAP System Replication documentation describes the database role and persistence synchronization, while HA integration documentation describes coordination with external availability mechanisms.<BR />These documents therefore address different parts of the overall availability architecture.<BR /><BR /><BR /><STRONG>Interpreting Connection Failures After Takeover<BR /></STRONG>In operational situations, a takeover may be completed from the database perspective while application connections still fail if the client access path still points to the former primary host or has not yet been updated by surrounding infrastructure.</P><P>From a documentation perspective, this reflects different layers of the availability architecture rather than a single mechanism.</P><P>SAP documentation separates:</P><UL><LI>database role change (System Replication)</LI><LI>external availability coordination (HA integration)</LI></UL><P><STRONG>Reference<BR /></STRONG><A href="https://me.sap.com/notes/1999880" target="_blank" rel="noopener noreferrer">SAP Note 1999880 - FAQ: SAP HANA System Replication</A><BR /><BR /><BR /><STRONG>Relationship to High Availability Architecture<BR /></STRONG>SAP HANA high availability consists of multiple documented components, including:</P><UL><LI>System Replication</LI><LI>cluster or HA integration</LI><LI>infrastructure routing mechanisms</LI></UL><P>System Replication addresses database availability.<BR />Other components coordinate access paths and service endpoints in the surrounding infrastructure.<BR /><BR /><BR /><STRONG>Conclusion<BR /></STRONG>SAP HANA System Replication provides database availability and role takeover capabilities, while consistency and potential data loss characteristics depend on the configured replication mode.</P><P>Client connectivity depends on surrounding high-availability and infrastructure mechanisms that are documented separately from System Replication.</P><P>Understanding the documented scope of these components helps distinguish database state from client access routing and supports clearer troubleshooting during incidents.<BR /><BR />For implementation or configuration guidance, always follow the official SAP documentation and SAP recommendations.<BR /><BR /><BR /><STRONG>Disclaimer<BR /></STRONG>This article summarizes publicly available SAP documentation and does not replace official SAP configuration, sizing, or implementation guidance.&nbsp;This article describes documented architectural behavior and is not an SAP support statement.</P></DIV></DIV></DIV></DIV></DIV></DIV> 2026-02-25T18:00:00.017000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/avoiding-hana-connection-exhaustion-a-practical-guide-to-hikari-amp-multi/ba-p/14336199 Avoiding HANA Connection Exhaustion: A Practical Guide to Hikari & Multi-Tenant Scaling 2026-02-26T09:53:05.868000+01:00 rishabhdhakarwal https://community.sap.com/t5/user/viewprofilepage/user-id/698059 <H2 id="toc-hId-1790564158">Introduction</H2><P>Modern SAP applications running on <STRONG>SAP BTP with CAP Java and SAP HANA</STRONG> rely heavily on efficient database connection management.</P><P>Opening a new database connection for every request is expensive:</P><UL><LI>It increases 
latency</LI><LI>It consumes database resources</LI><LI>It reduces scalability</LI></UL><P>To solve this, we use <STRONG>connection pools</STRONG>.</P><P>In this blog, we’ll walk through how connection pooling works in CAP Java, what can go wrong in multi-tenant systems, and how to configure it correctly to avoid production outages.</P><HR /><H2 id="toc-hId-1594050653">What is a Connection Pool?</H2><P>A connection pool is a cache of reusable database connections maintained by your application.</P><PRE>open connection → execute query → close connection</PRE><P>becomes:</P><PRE>borrow connection → execute query → return to pool</PRE><H3 id="toc-hId-1526619867">Benefits</H3><UL><LI>Faster response times</LI><LI>Reduced database load</LI><LI>Reuse of expensive DB sessions</LI><LI>Better scalability</LI></UL><HR /><H2 id="toc-hId-1201023643">How CAP Java Uses Connection Pools</H2><P>In CAP Java:</P><UL><LI>Database access uses <STRONG>JDBC connection pooling via Spring Boot</STRONG></LI><LI>The default pool implementation is <STRONG>HikariCP</STRONG></LI><LI>Each <STRONG>application instance maintains its own connection pool</STRONG></LI></UL><P>This means that if your application is scaled horizontally, the number of connections increases with the number of instances.</P><HR /><H2 id="toc-hId-1004510138">Why This Becomes Critical in Production</H2><P>In the basic mode, the HANA database is provisioned with <STRONG>2 vCPUs</STRONG>, which allows roughly <STRONG>1000 concurrent connections</STRONG>.</P><P>Now consider a typical CAP multi-tenant setup:</P><UL><LI>Multiple services</LI><LI>Multiple tenants</LI><LI>Multiple app instances (autoscaling)</LI><LI>Each instance maintaining its own pool</LI></UL><P>The total DB connections can grow very quickly if not controlled.</P><HR /><H2 id="toc-hId-807996633">Multi-Tenant Scaling Problem</H2><P>Without optimization, the connection count grows like this:</P><PRE>connections = instances × tenants × pool size</PRE><P>Example:</P><UL><LI>4 tenants</LI><LI>3 application instances</LI><LI>default pool size = 10</LI></UL><PRE>4 × 3 × 10 = 120 connections</PRE><P>Now multiply this across multiple services and sidecars, you can quickly reach the HANA limit.</P><HR /><H2 id="toc-hId-611483128">The Solution: Shared Pools with combinePools</H2><P>CAP Java provides a very important optimization:</P><PRE>cds: multi-tenancy: datasource: combinePools: enabled: true</PRE><P>This ensures that instead of creating pools per tenant, the application uses:</P><P><STRONG>One shared connection pool per HANA database per application instance</STRONG></P><P>Now the formula becomes:</P><PRE>connections = instances × pool size</PRE><P>This makes connection usage predictable and scalable.</P><HR /><H2 id="toc-hId-414969623">HikariCP Configuration (Production)</H2><P>A production-ready CAP Java configuration looks like this:</P><PRE>cds: multi-tenancy: datasource: combinePools: enabled: true hikari: maximum-pool-size: 40 minimum-idle: 10 idle-timeout: 600000 max-lifetime: 3300000</PRE><P>Parameter Description</P><TABLE border="1" cellpadding="6"><TBODY><TR><TD>maximumPoolSize</TD><TD>Maximum connections per instance</TD></TR><TR><TD>minimumIdle</TD><TD>Warm connections kept ready</TD></TR><TR><TD>idleTimeout</TD><TD>Idle connection eviction time</TD></TR><TR><TD>maxLifetime</TD><TD>Maximum lifetime before connection refresh</TD></TR></TBODY></TABLE><HR /><H2 id="toc-hId-218456118">How to Size Your Pool Correctly</H2><P>Use this simple guideline:</P><PRE>maximumPoolSize ≈ 
concurrent DB requests per instance</PRE><P>And always validate:</P><PRE>instances × maximumPoolSize &lt; HANA DB connection limit</PRE><P>For example:</P><UL><LI>3 instances</LI><LI>pool size 40</LI></UL><PRE>3 × 40 = 120 connections</PRE><P>Well within a 1000 connection limit.</P><HR /><H2 id="toc-hId-21942613">Monitoring &amp; Observability</H2><P>You should continuously monitor:</P><UL><LI>Active DB connections</LI><LI>Idle connections</LI><LI>Pool utilization</LI><LI>Total HANA connection usage</LI></UL><P><STRONG>Set alerts at 70–80% of DB capacity.</STRONG></P><HR /><H2 id="toc-hId-172683465">Common Pitfalls</H2><UL><LI>Not enabling <CODE>combinePools</CODE> in multi-tenant apps</LI><LI>Oversizing connection pools</LI><LI>Multiple services sharing same HANA DB without coordination</LI><LI>No monitoring of DB connection usage</LI><LI>Assuming default values are safe for production</LI></UL><HR /><H2 id="toc-hId--23830040">Summary</H2><P>Area Recommendation</P><TABLE border="1" cellpadding="6"><TBODY><TR><TD>Connection Pool</TD><TD>Use HikariCP</TD></TR><TR><TD>Multi-Tenancy</TD><TD>Enable combinePools</TD></TR><TR><TD>Pool Sizing</TD><TD>Based on concurrency</TD></TR><TR><TD>Monitoring</TD><TD>Mandatory</TD></TR><TR><TD>Scaling</TD><TD>Validate against DB limits</TD></TR></TBODY></TABLE><HR /><H2 id="toc-hId--220343545">Final Thoughts</H2><P>Connection pooling is often invisible, until it fails in production.</P><P>With the right configuration in CAP Java and SAP HANA, you can:</P><UL><LI>avoid outages</LI><LI>improve performance</LI><LI>scale safely in multi-tenant environments</LI></UL><P>If you are running CAP Java in production, take a few minutes to review your pool configuration, it can prevent your next incident.</P><HR /><H2 id="toc-hId--416857050">References</H2><UL><LI><A href="https://github.com/brettwooldridge/HikariCP" target="_blank" rel="noopener nofollow noreferrer">HikariCP</A></LI><LI><A href="https://cap.cloud.sap/docs/java/multitenancy#db-connection-pooling" target="_blank" rel="noopener nofollow noreferrer">DB Connection Pooling</A></LI><LI><A href="https://cap.cloud.sap/docs/java/cqn-services/persistence-services#datasource-configuration" target="_blank" rel="noopener nofollow noreferrer">CAP Datasource Configuration</A></LI></UL> 2026-02-26T09:53:05.868000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/understanding-sap-hana-disaster-recovery-technical-scope-distance-and-data/ba-p/14292769 Understanding SAP HANA Disaster Recovery - Technical Scope, Distance, and Data Protection Boundaries 2026-03-01T18:00:00.018000+01:00 HakanHaslaman https://community.sap.com/t5/user/viewprofilepage/user-id/185386 <P class="lia-align-justify" style="text-align : justify;"><STRONG>Introduction</STRONG><BR />SAP HANA provides documented mechanisms to support disaster recovery (DR) scenarios in which an entire site, data center, or availability zone becomes unavailable.</P><P class="lia-align-justify" style="text-align : justify;">From a technical perspective, disaster recovery addresses failure domains that exceed the scope of local high-availability mechanisms. 
Its primary objective is to restore database operations after site-level outages, while preserving data consistency according to architecturally defined recovery objectives.</P><P class="lia-align-justify" style="text-align : justify;">Disaster recovery in SAP HANA is implemented through a combination of documented mechanisms such as system replication, backup-based recovery, distance-aware configurations, and controlled takeover procedures.</P><P class="lia-align-justify" style="text-align : justify;">This article consolidates SAP-documented SAP HANA disaster-recovery mechanisms into a single technical view.</P><P class="lia-align-justify" style="text-align : justify;">Its purpose is to make scope, distance considerations, and data-protection boundaries explicit, supporting sound DR architecture design and precise technical discussions.</P><P class="lia-align-justify" style="text-align : justify;">All statements are derived from official SAP Help Portal documentation.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>1. Disaster Recovery in SAP HANA - Technical Definition</STRONG><BR />In SAP HANA documentation, disaster recovery refers to mechanisms that enable database availability after site-level failures, such as:</P><UL><LI>complete data-center outages</LI><LI>regional infrastructure failures</LI><LI>loss of connectivity between sites</LI></UL><P class="lia-align-justify" style="text-align : justify;">Although some SAP HANA mechanisms such as system replication can be used for both high availability and disaster recovery scenarios, disaster recovery addresses broader failure domains such as site-level outages.</P><P class="lia-align-justify" style="text-align : justify;">This separation of objectives is consistently reflected in SAP HANA administration and architecture documentation.</P><P class="lia-align-justify" style="text-align : justify;"><U>Key boundary</U><BR />In this context, disaster recovery focuses on broader failure domains such as site-level outages rather than isolated single-component failures.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>2. Failure Domains and Distance Considerations</STRONG><BR />A defining characteristic of disaster recovery is physical and logical separation between systems.</P><P class="lia-align-justify" style="text-align : justify;">From a technical standpoint, DR architectures introduce:</P><UL><LI>network latency</LI><LI>increased failure isolation</LI><LI>replication latency considerations</LI></UL><P class="lia-align-justify" style="text-align : justify;">Distance directly influences:</P><UL><LI>feasible replication modes</LI><LI>achievable Recovery Point Objective (RPO)</LI><LI>achievable Recovery Time Objective (RTO)</LI></UL><P class="lia-align-justify" style="text-align : justify;">Distance and latency influence the choice of replication mode and DR architecture.</P><P class="lia-align-justify" style="text-align : justify;"><U>Key boundary</U><BR />Distance and latency are key architectural design considerations that influence the feasible replication mode and the achievable recovery characteristics.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>3. 
SAP HANA System Replication as a DR Mechanism</STRONG><BR /><STRONG>3.1 Technical Role in Disaster Recovery</STRONG><BR />SAP HANA System Replication (SR) is a primary building block for replication-based disaster recovery architectures.</P><P class="lia-align-justify" style="text-align : justify;">In DR scenarios:</P><UL><LI>primary and secondary systems are deployed in separate sites</LI><LI>replication protects against site-level failures</LI><LI>takeover is initiated when the primary site is unavailable</LI></UL><P class="lia-align-justify" style="text-align : justify;">System replication continuously transfers data changes from the primary system to the secondary system, maintaining a replica of the current database state.</P><P>SAP Help Portal reference<BR /><A href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/b74e16a9e09541749a745f41246a065e.html?utm_source=chatgpt.com&amp;locale=en-US" target="_blank" rel="noopener noreferrer">SAP HANA System Replication - Overview</A></P><P class="lia-align-justify" style="text-align : justify;"><STRONG>3.2 Replication Modes and Distance</STRONG><BR />SAP HANA system replication supports synchronous and asynchronous replication modes.<BR />From a disaster-recovery perspective:</P><UL><LI>Synchronous replication<BR />is typically constrained by distance and latency and is therefore limited to shorter distances</LI><LI>Asynchronous replication<BR />is commonly used for long-distance DR scenarios and accepts some replication lag</LI></UL><P class="lia-align-justify" style="text-align : justify;">The selected replication mode directly influences:</P><UL><LI>potential data loss during failover</LI><LI>achievable RPO values</LI></UL><P class="lia-align-justify" style="text-align : justify;">This behavior is an inherent consequence of documented SR semantics.</P><P>SAP Help Portal reference<BR /><A href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c039a1a5b8824ecfa754b55e0caffc01.html" target="_blank" rel="noopener noreferrer">SAP HANA System Replication - Replication Modes</A></P><P class="lia-align-justify" style="text-align : justify;"><U>Key boundary</U><BR />Replication mode determines data-loss characteristics, not historical recovery capability.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>4. 
Multitier and Multitarget System Replication</STRONG><BR />SAP HANA supports multitier system replication in chained setups as well as multitarget system replication for replication to more than one secondary system.</P><P class="lia-align-justify" style="text-align : justify;">In disaster-recovery architectures, multitier replication can be used to:</P><UL><LI>combine high availability and disaster recovery objectives</LI><LI>separate near-site and far-site replicas</LI><LI>increase isolation between failure domains</LI></UL><P class="lia-align-justify" style="text-align : justify;">Each replication tier operates within documented technical constraints, including supported replication paths and takeover rules.</P><P>SAP Help Portal reference<BR /><A href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ca6f4c62c45b4c85a109c7faf62881fc.html" target="_blank" rel="noopener noreferrer">SAP HANA Multi-Tier System Replication</A><BR /><A href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c3fe0a3c263c49dc9404143306455e16.html" target="_blank" rel="noopener noreferrer">Supported Replication Modes Between Systems</A></P><P class="lia-align-justify" style="text-align : justify;"><U>Key boundary</U><BR />Multitier replication can support multi-site architectures and extended failure-domain separation; however, it does not replace backup-based recovery mechanisms.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>5. Backup-Based Disaster Recovery</STRONG><BR /><STRONG>5.1 Technical Model</STRONG><BR />In addition to replication-based approaches, disaster recovery can be implemented using backup-based recovery.</P><P class="lia-align-justify" style="text-align : justify;">In this model:</P><UL><LI>data and log backups are stored in a remote or isolated location</LI><LI>recovery is performed by restoring backups at the target site</LI><LI>point-in-time recovery is possible, depending on backup and log availability</LI></UL><P class="lia-align-justify" style="text-align : justify;">Backup-based DR provides:</P><UL><LI>strong isolation between sites</LI><LI>access to historical restore points</LI></UL><P class="lia-align-justify" style="text-align : justify;">It typically involves longer recovery times compared to replication-based DR.</P><P>SAP Help Portal reference<BR /><A href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/ef085cd5949c40b788bba8fd3c65743e.html" target="_blank" rel="noopener noreferrer">Planning Your Backup and Recovery Strategy</A></P><P class="lia-align-justify" style="text-align : justify;"><STRONG>5.2 Replication-Based vs. Backup-Based DR</STRONG><BR />From a technical standpoint:</P><UL><LI>Replication-based DR<BR />prioritizes continuity with limited data loss</LI><LI>Backup-based DR<BR />prioritizes isolation and historical recovery</LI></UL><P class="lia-align-justify" style="text-align : justify;">Replication-based and backup-based approaches can be combined to address different availability and recovery requirements.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>6. Disaster Recovery vs. 
High Availability</STRONG><BR />Although both HA and DR may use system replication, their objectives and assumptions differ fundamentally.</P><UL><LI>High availability<BR />handles component-level failures within a site</LI><LI>Disaster recovery<BR />handles complete loss of a site or failure domain</LI></UL><P class="lia-align-justify" style="text-align : justify;">Disaster recovery designs emphasize:</P><UL><LI>physical separation</LI><LI>controlled takeover procedures</LI><LI>explicitly defined data-loss boundaries</LI></UL><P class="lia-align-justify" style="text-align : justify;"><U>Key boundary</U><BR />DR is validated against site-level failure scenarios, not host failures.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>7. RPO and RTO in Disaster Recovery Context</STRONG><BR />In SAP HANA DR architectures:</P><UL><LI>RPO<BR />is influenced by replication mode or backup frequency</LI><LI>RTO<BR />is influenced by takeover procedures, recovery execution, and operational readiness</LI></UL><P class="lia-align-justify" style="text-align : justify;">SAP HANA provides the technical mechanisms required to support different RPO/RTO targets.<BR />Actual values are derived from architecture and configuration, not configured as product parameters.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>8. Design Boundaries and Architectural Implications</STRONG><BR />From a disaster-recovery architecture perspective:</P><UL><LI>no single mechanism covers all site-level failure scenarios</LI><LI>replication-based DR trades isolation for continuity</LI><LI>backup-based DR trades speed for historical recovery</LI><LI>distance and latency impose hard technical constraints</LI></UL><P class="lia-align-justify" style="text-align : justify;">Understanding these documented boundaries is essential for realistic DR expectations and technically sound architecture design.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>Summary</STRONG><BR />SAP HANA provides documented technical mechanisms to support disaster-recovery scenarios involving site-level failures.</P><P class="lia-align-justify" style="text-align : justify;">Disaster recovery is not defined by a single feature, but by the combination of system replication, backup mechanisms, distance-aware configuration, and controlled recovery procedures.</P><P class="lia-align-justify" style="text-align : justify;">By understanding the documented scope, distance considerations, and data-protection boundaries of SAP HANA disaster-recovery mechanisms, architectures can be designed with clear, realistic, and technically defensible recovery expectations.</P> 2026-03-01T18:00:00.018000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/bi-remediation-checks-during-ecc-to-s-4hana-migration/ba-p/14338910 BI Remediation Checks during ECC to S/4HANA Migration 2026-03-02T07:22:03.809000+01:00 NitinK https://community.sap.com/t5/user/viewprofilepage/user-id/124663 <P><FONT size="3">As SAP prepares to end support for ECC by <STRONG>2027</STRONG>, many organizations are making the crucial move to SAP S/4HANA. The shift can unlock improved capabilities and performance, but it can also introduce changes that ripple through the <STRONG>BI landscape</STRONG> if we don’t plan for them early.</FONT></P><P><FONT size="3">From a BI perspective, there are multiple aspects to consider during the transition. 
This blog focuses on the BI remediation checks that help protect <STRONG>data trust, reporting</STRONG>, and <STRONG>decision-making continuity</STRONG> during and after the migration.</FONT></P><P><FONT size="3"><STRONG>Why do we need BI remediation?&nbsp;<BR /></STRONG>The migration from <STRONG>SAP ECC to SAP S/4HANA</STRONG> introduces significant changes to the underlying data structures. These changes impact how data is <STRONG>extracted, modelled,</STRONG> and <STRONG>consumed</STRONG> in the BI system. Because of that, reporting remediation becomes critical to ensure <STRONG>data consistency, accuracy, and continuity</STRONG> in analytics after the migration.</FONT></P><P><FONT size="3"><STRONG>With remediation, we ensure:</STRONG></FONT></P><UL><LI><FONT size="3"><STRONG>BI models and reports remain functional</STRONG> and aligned with the new S/4HANA structures</FONT></LI><LI><FONT size="3"><STRONG>Disruption is minimized</STRONG> for business operations and reporting processes</FONT></LI></UL><P><FONT size="3"><STRONG>Data model simplifications in S/4HANA (what changes underneath)</STRONG></FONT><BR /><FONT size="3">When SAP S/4HANA was first introduced, the SAP ERP 6.0 data model underwent major restructuring. Redundancies were removed, and many smaller tables were consolidated into fewer, larger ones.</FONT><BR /><FONT size="3"><STRONG>Figure 1</STRONG> can help visualize the concept and why “what worked in ECC” may not behave the same way in S/4HANA.</FONT></P><P><FONT size="3"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="NitinK_0-1772380819019.png" style="width: 791px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/378192iBC1E95319544C1F3/image-dimensions/791x439?v=v2" width="791" height="439" role="button" title="NitinK_0-1772380819019.png" alt="NitinK_0-1772380819019.png" /></span></FONT></P><P><FONT size="3">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;<STRONG><EM><U>FIGURE 1</U></EM></STRONG></FONT></P><P><FONT size="3"><STRONG>Impact of S/4HANA migration on the BI system (what to check)</STRONG></FONT></P><P><FONT size="3"><STRONG>1) Impact on BW Datasources / Extractors</STRONG></FONT><BR /><FONT size="3">If you run SAP BW, one of the first questions is:</FONT><BR /><FONT size="3">“After ECC becomes S/4HANA, will my extractors and DataSources still be supported and behave the same way?”</FONT><BR /><FONT size="3"><A href="https://me.sap.com/notes/2500202/E" target="_blank" rel="noopener noreferrer">SAP note </A><A href="https://me.sap.com/notes/2500202/E" target="_blank" rel="noopener noreferrer">2500202</A>&nbsp; provides an<SPAN>&nbsp;Excel sheet produced by SAP that details the impact of S/4HANA on BW Datasources. This sheet is periodically updated. The same note also contains a PPT that guides us on how to interpret and use the Excel list.</SPAN></FONT></P><P><FONT size="3">A proven approach is:</FONT></P><UL><LI><FONT size="3">Inventory your active extractors (e.g., via <STRONG>ROOSOURCE</STRONG>). 
&nbsp;</FONT></LI><LI><FONT size="3">Identify which are truly used (process chains / BI statistics where available)</FONT></LI><LI><FONT size="3">Cross-check them against the <A href="https://me.sap.com/notes/2500202/E" target="_blank" rel="noopener noreferrer">SAP note </A><A href="https://me.sap.com/notes/2500202/E" target="_blank" rel="noopener noreferrer">2500202</A> list and categorize outcomes (no change / restricted / obsolete / alternative required)</FONT></LI></UL><P><FONT size="3"><STRONG>2) Tables with Replacement Objects</STRONG> (don’t rely on “classic tables” blindly)</FONT><BR /><FONT size="3">If, as part of your BI remediation strategy, you plan to continue using certain classic ECC tables, for example, relying on <STRONG>MARD</STRONG> instead of adopting the newer <STRONG>MATDOC</STRONG> structure, you must depend on the corresponding <STRONG>Replacement Object</STRONG> rather than accessing the table directly.</FONT><BR /><FONT size="3">With the data model simplification <SPAN>in </SPAN><STRONG>SAP S/4HANA</STRONG><SPAN>, many traditional ECC tables are no longer the authoritative “single source of truth.” In many cases, data is aggregated or derived via CDS views that join and enrich information from multiple sources. Because of this shift, it becomes essential to transition from reading classic tables (such as </SPAN><STRONG>MARC</STRONG><SPAN>, </SPAN><STRONG>MARD</STRONG><SPAN>, and others) to using their designated </SPAN><STRONG>Replacement Objects</STRONG><SPAN>, ensuring that analytics and integrations receive accurate, up‑to‑date data.</SPAN></FONT></P><UL><LI><FONT size="3">In Native HANA, it is advisable to verify whether a table has a replacement object or whether it can still be used directly.</FONT></LI><LI><FONT size="3">In SAP BW, the datasource typically absorbs most of the underlying data model changes. However, in hybrid scenarios, particularly when calculation views are built on ECC base tables or when custom datasources built on ECC base tables, it bec<FONT face="arial,helvetica,sans-serif">omes essential to thoroughly assess the impact on the affected tables.</FONT></FONT></LI></UL><P><FONT size="3"><STRONG>Figure 2</STRONG> shows an example for table MARD. NSDM_E_MARD serves as the replacement view for the classic MARD table. Although data continues to reside in MARD, those entries generally represent older documents migrated to S/4HANA or new S/4HANA records that are stored without measure fields. 
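</FONT></P><P><FONT size="3">As a minimal illustration of what this redirection means in practice, the sketch below switches a storage-location stock read from the classic table to the replacement object. The schema, plant filter, and field list are placeholders; the field names simply follow the classic MARD layout.</FONT></P><pre class="lia-code-sample language-sql"><code>-- Hedged sketch: read storage-location stock via the replacement object
-- instead of the classic table (field names follow the classic MARD layout)

-- ECC-style access: may miss measure values for documents created in S/4HANA
SELECT matnr, werks, lgort, labst
  FROM mard
 WHERE werks = '1000';

-- Remediated access: read the replacement object instead
SELECT matnr, werks, lgort, labst
  FROM nsdm_e_mard
 WHERE werks = '1000';</code></pre><P><FONT size="3">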
To obtain the full set of information (both characteristics and measures) for transactions created in S/4HANA, you must use the replacement object, in this case NSDM_E_MARD, rather than relying on MARD directly.</FONT></P><P><FONT size="3"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="NitinK_1-1772380819033.png" style="width: 735px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/378191i752AC00778E19AEA/image-dimensions/735x364?v=v2" width="735" height="364" role="button" title="NitinK_1-1772380819033.png" alt="NitinK_1-1772380819033.png" /></span></FONT></P><P><FONT size="3"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="NitinK_2-1772380819035.png" style="width: 625px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/378190iC1BB5E6D153AB81E/image-dimensions/625x161?v=v2" width="625" height="161" role="button" title="NitinK_2-1772380819035.png" alt="NitinK_2-1772380819035.png" /></span></FONT></P><P><FONT size="3"><STRONG>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <EM><U>FIGURE 2</U></EM></STRONG></FONT></P><P><FONT size="3"><STRONG>3) Replacement of obsolete tables<BR /></STRONG>Some ECC tables are<SPAN>&nbsp;replaced or made obsolete, and direct reporting on such tables is no longer recommended or supported. Examples tables include:<BR /></SPAN></FONT></P><UL><LI><FONT size="3">KONV, VBUP, VBUK, EIKP, EIPO</FONT></LI></UL><P><FONT size="3"><SPAN>Obsolete tables may still exist, but</SPAN><SPAN>&nbsp;they can behave as compatibility views. For accuracy and performance, they should be switched to transparent tables where applicable.</SPAN></FONT></P><UL><LI><FONT size="3"><A href="https://me.sap.com/notes/3250204/E" target="_blank" rel="noopener noreferrer">SAP Note 3250204</A>: describes about the replacement table PRDC_ELEMENTS for KNOV table.</FONT></LI><LI><FONT size="3"><A href="https://me.sap.com/notes/2198647" target="_blank" rel="noopener noreferrer">SAP Note 2198647</A>: talks about tables VBUP, VBUK and SD data model changes.</FONT></LI><LI><FONT size="3"><A href="https://me.sap.com/notes/3482527/E" target="_blank" rel="noopener noreferrer">SAP Note 3482527</A>: <SPAN>covers about </SPAN>new S/4HANA foreign trade.&nbsp;</FONT></LI></UL><P><FONT size="3"><STRONG>4) Key fields changed (primary key shifts can break pipelines)<BR /></STRONG>In S/4HANA<SPAN>, the </SPAN><STRONG>primary keys</STRONG><SPAN> of some tables have changed compared to ECC. Some examples are:</SPAN></FONT></P><UL><LI><FONT size="3">VBFA</FONT></LI><LI><FONT size="3">FAAV_ANLC</FONT></LI></UL><P><FONT size="3">These changes directly affect how data is stored, identified, and retrieved. 
That can break:</FONT></P><UL><LI><FONT size="3">Joins in HANA models</FONT></LI><LI><FONT size="3">Deduplication logic</FONT></LI><LI><FONT size="3">Delta logic or key-based transformations in BW</FONT></LI></UL><P><FONT size="3"><STRONG>Action</STRONG></FONT></P><UL><LI><FONT size="3">Review where these tables are used in extraction/transformation layers</FONT></LI><LI><FONT size="3">Re-check join conditions, uniqueness assumptions, and surrogate key logic</FONT></LI><LI><FONT size="3">Re-test deltas and aggregations end-to-end</FONT></LI></UL><P><FONT size="3"><STRONG>5) Field length extensions (small change, big downstream effect)<BR /></STRONG>Some fields<SPAN>&nbsp;were extended in S/4HANA to support broader functional requirements, enhanced integration needs, and future extensibility.<BR />One such example is:</SPAN></FONT></P><UL><LI><FONT size="3">VBAP-VBTYP (Char1) Document Category replaced by VBAP-VBTYPL (Char4)</FONT></LI><UL><LI><FONT size="3"><A href="https://me.sap.com/notes/0002495386" target="_blank" rel="noopener noreferrer">SAP Note 2495386</A> details the BW datasource that uses this object.</FONT></LI></UL></UL><P><FONT size="3">Another major example:</FONT></P><UL><LI><FONT size="3">Material Number (<A href="https://me.sap.com/notes/2215424" target="_blank" rel="noopener noreferrer">SAP Note 2215424</A>) changed from char(18) to char(40)</FONT></LI></UL><P><FONT size="3">A key design decision regarding material numbers is whether your organization will:</FONT></P><UL><LI><FONT size="3">Adopt a true 40‑character material number, introducing a new business meaning, or</FONT></LI><LI><FONT size="3">Retain the existing 18‑character logic, while simply expanding the technical field length to 40 characters.</FONT></LI></UL><P><FONT size="3">The chosen approach directly influences how material numbers flow through your BI data models and determines the adjustments required in downstream reporting and integrations.</FONT></P><P><FONT size="3"><STRONG>6) Centralized Master Data via Business Partner (Customer/Vendor integration)<BR /></STRONG>SAP fundamentally<SPAN>&nbsp;changed how master data is managed by introducing a unified, centralized model for customers, vendors, and business partners. This replaces the traditional ECC siloed structures with a harmonized object: the </SPAN><STRONG>Business Partner (BP).<BR /></STRONG>It’s <SPAN>not a “technical-only” change, it’s a semantic change in how master data is governed<BR />Ple</SPAN><SPAN>ase check the </SPAN><A href="https://me.sap.com/notes/2265093/E" target="_blank" rel="noopener noreferrer">SAP Note 2265093</A><SPAN> note for more information.</SPAN></FONT></P><P><FONT size="3"><STRONG>7) Redirecting ECC Virtual Tables in HANA Calculation Views<BR /></STRONG>When<SPAN>&nbsp;your BI models use calculation views that reference ECC base tables, these views must be updated after the migration to ensure they point to the corresponding objects in the S/4HANA system. Redirecting them is essential to prevent calculation view failures and to avoid inconsistent or incorrect analytical results after the migration. 
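</SPAN></FONT></P><P><FONT size="3">A minimal sketch of such a redirection is shown below, assuming the classic ECC tables were exposed through Smart Data Access virtual tables. The remote source, schema, and object names are placeholders; keeping the virtual table name unchanged lets the dependent calculation views continue to resolve.</FONT></P><pre class="lia-code-sample language-sql"><code>-- Hedged sketch: re-point a virtual table consumed by calculation views
-- from the old ECC remote source to the corresponding S/4HANA object.
-- Remote source, schema, and object names are placeholders.
DROP TABLE "BI_SCHEMA"."VT_MARD";

CREATE VIRTUAL TABLE "BI_SCHEMA"."VT_MARD"
  AT "S4HANA_SOURCE"."&lt;NULL&gt;"."SAPS4"."NSDM_E_MARD";
-- "&lt;NULL&gt;" stands in for the database part when the remote source
-- has no database level; adjust to your source system's catalog.</code></pre><P><FONT size="3"><SPAN>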
Please refer the </SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-members/flip-virtual-tables-in-calculation-views-from-ecc-to-s-4hana/ba-p/14070361" target="_blank">blog</A><SPAN> for mitigation of this impact.</SPAN></FONT></P><DIV>&nbsp;</DIV><DIV><FONT size="3"><STRONG>Final thoughts:&nbsp;</STRONG>ECC to S/4HANA migration is not only an ERP transformation, but also a <STRONG>BI continuity challenge</STRONG>. The safest approach is to treat BI remediation as a structured workstream, validating extractors, replacing obsolete dependencies, and proactively addressing table/key/field changes before they surface as production reporting issues.</FONT></DIV><P><FONT size="3">If this post helps you structure your BI remediation checks, please consider sharing your own experiences or additions in the comments.</FONT></P><P>&nbsp;</P> 2026-03-02T07:22:03.809000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/migrating-to-sap-hana-cloud-what-actually-gets-better-part-1-of-2/ba-p/14339700 Migrating to SAP HANA Cloud: What Actually Gets Better (Part 1 of 2) 2026-03-02T16:18:10.052000+01:00 DEEPA_DORAIRAJ https://community.sap.com/t5/user/viewprofilepage/user-id/1752099 <P><STRONG>SAP COMMUNITY |<SPAN>&nbsp; </SPAN>TECHNICAL GUIDE<SPAN>&nbsp; </SPAN>|<SPAN>&nbsp; </SPAN>SAP HANA CLOUD |<SPAN>&nbsp; </SPAN>PART 1 OF 2</STRONG></P><P><STRONG>Migrating to SAP HANA Cloud: What Actually Gets Better</STRONG></P><P><EM>Part 1 of 2: The genuine capabilities that open up when you move from on-premise HANA to HANA Cloud — HDI containers, SAP AI Core, and the BTP application ecosystem.</EM></P><P><STRONG>By Deepa Dorairaj<SPAN>&nbsp; </SPAN></STRONG><SPAN>&nbsp;</SPAN>| SAP AI Solution Architect<SPAN>&nbsp; </SPAN>|<SPAN>&nbsp; </SPAN>HANA Data Engineering<SPAN>&nbsp; </SPAN>|<SPAN>&nbsp; </SPAN>Published 2026</P><P><EM>This is Part 1 of a two-part series on SAP HANA Cloud migration. Part 1 covers what genuinely improves when you move to the cloud. Part 2 covers the pain points and what to watch out for. Both perspectives are necessary for an honest picture of the migration journey.</EM></P><P><EM>TL;DR: Moving from on-premise HANA to HANA Cloud is not just an infrastructure change. It unlocks three capabilities that are difficult or impossible to replicate on-premise: a modern database development workflow through HDI containers and Git, access to SAP AI Core for production-grade AI deployments on your SAP data, and a rich application ecosystem through BTP that connects your HANA data to S/4HANA, Fiori, Integration Suite, and beyond.</EM></P><H2 id="toc-hId-1790659009">Why This Article Exists</H2><P>Most migration content falls into one of two traps: vendor-produced material that glosses over the real challenges, or war stories that focus exclusively on what broke. The honest picture is more nuanced than either.</P><P>Part 2 of this series covers the friction — the tooling gaps, scripting compatibility issues, and security model changes that will slow your team down. You should read that too, especially if you are in the planning phase.</P><P>But this article is about the other side of that equation: what genuinely gets better. Not what SAP's marketing says gets better — what actually changes for the teams doing real work in real HANA Cloud environments. The three areas covered here represent capabilities that were either unavailable, severely limited, or operationally painful in on-premise. 
In HANA Cloud they become first-class, production-ready patterns.</P><P><STRONG>IMPROVEMENT 1 OF 3<SPAN>&nbsp; </SPAN></STRONG></P><H2 id="toc-hId-1594145504">HDI Containers: A Modern Database Development Workflow</H2><P><STRONG>What HDI Actually Is — and Why It Changes Everything</STRONG></P><P>The HANA Deployment Infrastructure (HDI) container model is the single biggest workflow improvement that HANA Cloud brings for development teams. On-premise HANA, most teams deploy database objects directly against schemas — DDL scripts run manually or through transport management, objects exist in a shared schema that everyone touches, and version control is an afterthought bolted on top of an inherently stateful system.</P><P>HDI flips this model entirely. Database objects are defined as source artifacts — .hdbtable, .hdbview, .hdbprocedure files — that live in a Git repository alongside application code. The HDI container is deployed from those artifacts, and the deployment engine handles dependency resolution, delta deployment, and environment isolation automatically.</P><P>The result is that your database development workflow starts to look like your application development workflow: version controlled, reviewable, deployable, and reproducible.</P><P><STRONG>Git Integration: Version Control as a First-Class Citizen</STRONG></P><P>On-premise HANA development rarely had true version control for database objects. Teams used SAP Transport Management for moving objects between landscapes, but transport management is not version control — it doesn't give you branching, pull requests, history, or the ability to roll back to any point in time with confidence.</P><P>HANA Cloud's HDI model integrates directly with Git repositories through SAP Business Application Studio and the HANA Cloud toolchain. Every database object is a file in a repository. Changes go through the same review process as application code. Deployments are triggered from specific commits or branches.</P><UL><LI>Feature branches for database changes — developers work on isolated branches without affecting the shared schema, exactly as they would for application code.</LI><LI>Pull request reviews for schema changes — DDL changes are reviewed by peers before deployment, catching errors before they reach development or production environments.</LI><LI>Full commit history for every database object — the ability to see exactly what changed, when, and who made the change, with the ability to revert to any previous state.</LI><LI>Branch-based environment promotion — development, QA, and production environments are deployed from specific branches, making the promotion process auditable and repeatable.</LI></UL><P><EM>This is the change that development teams feel most immediately after migration. The frustration of untraceable schema changes, overwritten objects, and transport conflicts disappears when database artifacts live in Git like everything else.</EM></P><P><STRONG>Environment Isolation and Team Collaboration</STRONG></P><P>On-premise HANA landscapes typically have a development system, a QA system, and a production system — three full HANA instances that are expensive to maintain and still don't provide true isolation between individual developers working simultaneously.</P><P>HDI containers provide lightweight, isolated development environments within a single HANA Cloud instance. Each developer or team can have their own container — a complete, isolated copy of the database schema — without requiring a separate HANA system. 
Changes in one container have zero impact on another.</P><UL><LI>Developer containers — each developer works in their own isolated container, testing changes without risk of interfering with colleagues.</LI><LI>Feature containers — a complete isolated environment for a specific feature or project, spun up and torn down as needed.</LI><LI>Automated dependency management — the HDI deployment engine resolves object dependencies automatically. If a view depends on a table that doesn't exist yet, the deployment fails with a clear error rather than creating an inconsistent state silently.</LI><LI>Simplified landscape transport — promoting changes from development to QA to production means deploying the same Git artifacts to different containers, not managing SAP transport requests.</LI></UL><P><STRONG>Cleaner Deployment and Versioning</STRONG></P><P>One of the most underappreciated HDI benefits is idempotent deployment. On-premise, re-running a DDL script against an existing object requires DROP and recreate logic, ALTER statements, or manual intervention. HDI's deployment engine compares the desired state (your artifact files) against the current state (what's in the container) and applies only the necessary changes — automatically, safely, and repeatably.</P><pre class="lia-code-sample language-sql"><code>-- On-premise: manual drop and recreate required DROP TABLE my_schema.my_table; CREATE TABLE my_schema.my_table (id INT, name VARCHAR(100)); -- HDI: define once in .hdbtable artifact, deploy repeatedly -- HDI engine handles delta — no manual DROP/CREATE needed COLUMN TABLE my_table (id INT, name NVARCHAR(100));</code></pre><P>&nbsp;</P><TABLE width="624"><TBODY><TR><TD width="208"><P><STRONG>Capability</STRONG></P></TD><TD width="416"><P><STRONG>What It Unlocks</STRONG></P></TD></TR><TR><TD width="208"><P><STRONG>Git version control</STRONG></P></TD><TD width="416"><P>Full history, branching, pull request reviews, and rollback for every database object</P></TD></TR><TR><TD width="208"><P><STRONG>Developer isolation</STRONG></P></TD><TD width="416"><P>Individual HDI containers eliminate shared schema conflicts between developers</P></TD></TR><TR><TD width="208"><P><STRONG>Automated dependency resolution</STRONG></P></TD><TD width="416"><P>HDI deployment engine manages object dependencies — no manual sequencing of DDL scripts</P></TD></TR><TR><TD width="208"><P><STRONG>Idempotent deployment</STRONG></P></TD><TD width="416"><P>Deploy the same artifacts repeatedly without manual DROP/CREATE logic</P></TD></TR><TR><TD width="208"><P><STRONG>Landscape transport</STRONG></P></TD><TD width="416"><P>Git-based promotion replaces SAP transport requests for database objects</P></TD></TR><TR><TD width="208"><P><STRONG>Feature branching</STRONG></P></TD><TD width="416"><P>Database changes developed and tested in isolation before merging to main branch</P></TD></TR></TBODY></TABLE><P><STRONG>IMPROVEMENT 2 OF 3<SPAN>&nbsp; </SPAN></STRONG></P><H2 id="toc-hId-1397631999">SAP AI Core: Production-Grade AI on Your SAP Data</H2><P><STRONG>Why AI Core Changes the Enterprise AI Equation</STRONG></P><P>On-premise HANA has machine learning capabilities — PAL (Predictive Analysis Library) and APL (Automated Predictive Library) provide a solid foundation for classical ML workloads that run close to the data. 
But these capabilities have a fundamental limitation: they are designed for traditional ML patterns and do not provide a path to deploying modern foundation models, RAG systems, or generative AI against your SAP data in a production-ready, governed way.</P><P>SAP AI Core, accessible through BTP and natively integrated with HANA Cloud, is a different category of capability entirely. It is a managed AI runtime that handles the infrastructure, scaling, and lifecycle management of AI workloads — allowing you to focus on the AI logic rather than the infrastructure that runs it.</P><P>For SAP architects and data engineers, the critical insight is this: AI Core is where your HANA Cloud data becomes the foundation for genuinely intelligent enterprise applications, not just analytical ones.</P><P><STRONG>Vector Embeddings and RAG for SAP Data</STRONG></P><P>HANA Cloud includes native vector storage capabilities — the ability to store and query vector embeddings directly in the database alongside your structured SAP data. This is the foundational capability for Retrieval-Augmented Generation (RAG) systems that can answer questions about your SAP landscape using live data rather than static training.</P><P>The architecture this enables is significant: instead of sending sensitive SAP business data to an external AI service for every query, you store embeddings of your SAP content in HANA Cloud's vector store and retrieve only the relevant context for each query. The foundation model gets the context it needs without your raw SAP data leaving your controlled environment.</P><pre class="lia-code-sample language-sql"><code>-- Create a vector store table in HANA Cloud CREATE TABLE sap_knowledge_embeddings ( id BIGINT PRIMARY KEY, content NCLOB, source_type NVARCHAR(100), embedding REAL_VECTOR(1536) ); -- Semantic similarity search against SAP knowledge base SELECT TOP 5 id, content, source_type, COSINE_SIMILARITY(embedding, TO_REAL_VECTOR(?)) AS score FROM sap_knowledge_embeddings ORDER BY score DESC;</code></pre><P>This pattern — storing SAP process documentation, BAPI descriptions, master data definitions, and business rules as searchable embeddings — is the foundation of a genuine enterprise AI assistant that knows your specific SAP landscape, not just generic SAP knowledge.</P><P><STRONG>Connecting Foundation Models via AI Core APIs</STRONG></P><P>AI Core provides governed, auditable access to foundation models — including SAP's own generative AI models and third-party models through the Generative AI Hub — with the enterprise controls that on-premise environments cannot provide: usage tracking, cost allocation, rate limiting, and compliance logging built in.</P><P>The connection between HANA Cloud data and AI Core models enables patterns that were genuinely not achievable on-premise:</P><UL><LI>Intelligent document processing — extracting structured data from unstructured SAP documents (purchase orders, invoices, contracts) using foundation models and storing results directly in HANA Cloud tables.</LI><LI>Natural language interfaces to SAP data — allowing business users to query HANA Cloud analytics in plain language, with AI Core translating natural language to SQL and grounding responses in live data.</LI><LI>Automated exception handling — AI agents that monitor SAP business processes, identify anomalies in HANA Cloud data, and take corrective actions through SAP APIs without human intervention.</LI><LI>Fine-tuned models on SAP data — training or fine-tuning foundation models on your 
organization's specific SAP data patterns through AI Core's managed training infrastructure.</LI></UL><P><STRONG>Grounding AI in Live SAP Business Data</STRONG></P><P>The most powerful aspect of the HANA Cloud and AI Core combination is the ability to ground AI outputs in live, governed SAP business data. Generic AI models hallucinate when asked about your specific business — your customer master, your material catalog, your financial postings. Grounded models, retrieval systems, and tool-calling agents that query HANA Cloud directly produce outputs that are accurate, current, and auditable.</P><P><EM>The architecture pattern that is emerging as the standard for enterprise SAP AI: HANA Cloud as the data foundation and vector store → AI Core as the model runtime and orchestration layer → SAP Business AI applications as the user-facing interface. Each layer does what it is best at, and the combination is genuinely more capable than any of its parts.</EM></P><TABLE width="624"><TBODY><TR><TD width="208"><P><STRONG>Capability</STRONG></P></TD><TD width="416"><P><STRONG>What It Unlocks</STRONG></P></TD></TR><TR><TD width="208"><P><STRONG>Native vector storage</STRONG></P></TD><TD width="416"><P>Store and query embeddings directly in HANA Cloud alongside structured SAP data</P></TD></TR><TR><TD width="208"><P><STRONG>RAG on SAP data</STRONG></P></TD><TD width="416"><P>Build retrieval systems that answer questions using live SAP content without external data exposure</P></TD></TR><TR><TD width="208"><P><STRONG>Foundation model access</STRONG></P></TD><TD width="416"><P>Governed, auditable access to SAP and third-party foundation models through Generative AI Hub</P></TD></TR><TR><TD width="208"><P><STRONG>Fine-tuning capability</STRONG></P></TD><TD width="416"><P>Train models on your specific SAP data patterns through AI Core's managed infrastructure</P></TD></TR><TR><TD width="208"><P><STRONG>Grounded AI outputs</STRONG></P></TD><TD width="416"><P>AI responses anchored to live HANA Cloud data — accurate, current, and auditable</P></TD></TR><TR><TD width="208"><P><STRONG>Intelligent automation</STRONG></P></TD><TD width="416"><P>AI agents that monitor SAP processes and act on HANA data without human intervention</P></TD></TR></TBODY></TABLE><P><STRONG>IMPROVEMENT 3 OF 3<SPAN>&nbsp; </SPAN></STRONG></P><H2 id="toc-hId-1201118494">The BTP Application Ecosystem: Connecting HANA Cloud to Everything</H2><P><STRONG>From Database to Platform</STRONG></P><P>On-premise HANA is primarily a database — an exceptional one, but a database. Applications connect to it, consume its data, and leave. The integration patterns are well established but fundamentally passive: HANA stores and processes data, applications request it.</P><P>HANA Cloud on BTP is part of a platform ecosystem that changes this dynamic. The connections between HANA Cloud and the broader SAP and non-SAP landscape are richer, more standardized, and more capable than what on-premise environments support. Your HANA Cloud instance is not just a database — it is a data foundation for a platform that can build, connect, and deploy enterprise applications at a speed that on-premise infrastructure cannot match.</P><P><STRONG>Connecting HANA Cloud to S/4HANA and ECC</STRONG></P><P>The integration between HANA Cloud and SAP's core ERP systems — S/4HANA and ECC — is one of the most practically valuable improvements for organizations running mixed landscapes. 
On-premise, cross-system data access required custom RFC connections, ALE configurations, or ETL pipelines that were expensive to build and fragile to maintain.</P><P>Through BTP's Integration Suite and HANA Cloud's federation capabilities, S/4HANA and ECC data becomes accessible in HANA Cloud with significantly less custom integration work. This enables analytical and AI use cases that span transactional ERP data and the broader data landscape without requiring full data replication.</P><UL><LI>Real-time operational reporting on S/4HANA data through HANA Cloud without impacting ERP system performance.</LI><LI>Cross-system analytics combining ERP data with data from non-SAP sources in a single HANA Cloud analytical layer.</LI><LI>AI and ML workloads that consume live ERP data through governed, auditable connections rather than stale data extracts.</LI></UL><P><STRONG>Building Fiori and BTP Applications on HANA Cloud Data</STRONG></P><P>SAP Business Application Studio — the cloud-based IDE that replaces the Eclipse-based tooling of on-premise development — provides a first-class development environment for building Fiori applications, CAP (Cloud Application Programming model) services, and full-stack BTP applications directly on top of HANA Cloud data.</P><P>The CAP model in particular represents a significant improvement over traditional on-premise application development patterns. CAP services expose HANA Cloud data through OData and REST APIs with built-in authorization, query handling, and SAP Fiori integration — dramatically reducing the boilerplate code required to build enterprise-grade applications on SAP data.</P><pre class="lia-code-sample language-sql"><code>-- CAP service definition exposing HANA Cloud entity service AnalyticsService { <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/1746372">@readonly</a> entity SalesOrders as SELECT from db.SalesOrders { order_id, customer, amount, status, created_at, region } where status = 'ACTIVE'; } -- Automatically generates OData V4 endpoint -- with filtering, sorting, and pagination built in</code></pre><P><STRONG>OData and REST API Exposure</STRONG></P><P>Exposing HANA Cloud data through standardized OData and REST APIs — consumable by Fiori applications, third-party tools, mobile applications, and external systems — is significantly more streamlined on BTP than on-premise. The CAP framework handles the API layer, and BTP's API Management provides governance, rate limiting, and monitoring for those APIs across all consumers.</P><P>For organizations building an enterprise data mesh or API-first data strategy, this is one of the most compelling HANA Cloud advantages — your HANA data becomes a governed, discoverable, consumable API product rather than a database that specific applications happen to know how to query.</P><P><STRONG>SAP Integration Suite and Side-by-Side Extensibility</STRONG></P><P>SAP Integration Suite on BTP provides pre-built integration flows between HANA Cloud and the broader SAP and non-SAP ecosystem — Salesforce, Microsoft, ServiceNow, and hundreds of other systems — without custom middleware development. On-premise integration required custom development, dedicated middleware servers, and ongoing maintenance of bespoke integration code.</P><P>Side-by-side extensibility — the BTP pattern for extending S/4HANA and other SAP systems without modifying core — relies on HANA Cloud as its data persistence layer. 
Extension applications built on BTP consume S/4HANA data through governed APIs, process and store results in HANA Cloud, and surface insights back to SAP users through Fiori — all without touching the core ERP system.</P><P><EM>Side-by-side extensibility is the pattern that makes HANA Cloud most valuable for organizations that need to innovate on top of S/4HANA without the risk and cost of core modifications. HANA Cloud is the data foundation that makes this pattern production-ready.</EM></P><TABLE width="624"><TBODY><TR><TD width="208"><P><STRONG>Capability</STRONG></P></TD><TD width="416"><P><STRONG>What It Unlocks</STRONG></P></TD></TR><TR><TD width="208"><P><STRONG>S/4HANA and ECC integration</STRONG></P></TD><TD width="416"><P>Cross-system analytics and AI workloads on live ERP data without custom RFC development</P></TD></TR><TR><TD width="208"><P><STRONG>CAP application development</STRONG></P></TD><TD width="416"><P>Build Fiori and full-stack BTP applications on HANA Cloud data with dramatically less boilerplate</P></TD></TR><TR><TD width="208"><P><STRONG>OData and REST APIs</STRONG></P></TD><TD width="416"><P>Expose HANA Cloud data as governed, discoverable API products consumable by any client</P></TD></TR><TR><TD width="208"><P><STRONG>SAP Integration Suite</STRONG></P></TD><TD width="416"><P>Pre-built integration flows to SAP and non-SAP systems without custom middleware</P></TD></TR><TR><TD width="208"><P><STRONG>Side-by-side extensibility</STRONG></P></TD><TD width="416"><P>Extend S/4HANA through BTP applications backed by HANA Cloud without core modifications</P></TD></TR><TR><TD width="208"><P><STRONG>BTP ecosystem</STRONG></P></TD><TD width="416"><P>Access to the full BTP service catalog — AI, analytics, integration, automation — from a single platform</P></TD></TR></TBODY></TABLE><H2 id="toc-hId-1004604989">The Honest Summary</H2><P>The three improvements covered in this article are not incremental upgrades to what on-premise HANA already does. They are qualitatively different capabilities that change what is possible for development teams, data engineers, and enterprise architects working with SAP data.</P><P>HDI containers with Git integration bring database development into the modern software development workflow — version controlled, isolated, reviewable, and reproducible. SAP AI Core unlocks production-grade AI on your SAP data — RAG systems, foundation model access, intelligent automation — that simply cannot be built with the same reliability and governance on-premise. And the BTP application ecosystem transforms HANA Cloud from a database into a platform foundation for the full range of SAP and cross-system enterprise applications.</P><P>None of this makes the migration easy. Part 2 of this series is honest about the friction — and there is real friction. But the capabilities waiting on the other side of that friction are substantial enough to make the journey worth taking.</P><P><EM>Part 2 of this series covers the pain points in detail: HANA Studio to Cloud Central tooling gaps, stored procedure compatibility issues, and security model changes that will catch your team off guard. 
Read both before finalizing your migration plan.</EM></P> 2026-03-02T16:18:10.052000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-hash-partitioning-quot-vs-quot-range-partitioning-quot-in/ba-p/14338147 Good to know: "HASH Partitioning" vs "RANGE Partitioning" in context of SAP HANA database 2026-03-04T08:09:16.242000+01:00 Laszlo_Thoma https://community.sap.com/t5/user/viewprofilepage/user-id/170406 <P><ul =""><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-hash-partitioning-quot-vs-quot-range-partitioning-quot-in/ba-p/14338147#toc-hId-1661540864">Why was this blog post created?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-hash-partitioning-quot-vs-quot-range-partitioning-quot-in/ba-p/14338147#toc-hId-1465027359">What are the differences?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-hash-partitioning-quot-vs-quot-range-partitioning-quot-in/ba-p/14338147#toc-hId-1268513854">Where can I find the most important information about the topic of SAP HANA partitioning?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-hash-partitioning-quot-vs-quot-range-partitioning-quot-in/ba-p/14338147#toc-hId-1072000349">Complexity</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-hash-partitioning-quot-vs-quot-range-partitioning-quot-in/ba-p/14338147#toc-hId-875486844">What is the conclusion?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-hash-partitioning-quot-vs-quot-range-partitioning-quot-in/ba-p/14338147#toc-hId-678973339">Other articles</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-hash-partitioning-quot-vs-quot-range-partitioning-quot-in/ba-p/14338147#toc-hId-482459834">Do you have further questions?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-hash-partitioning-quot-vs-quot-range-partitioning-quot-in/ba-p/14338147#toc-hId-285946329">Contribution</a></li></ul></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SAP_Community_Blog_Banner_2026.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/377740i34B860E50D95FF1D/image-size/large?v=v2&amp;px=999" role="button" title="SAP_Community_Blog_Banner_2026.png" alt="SAP_Community_Blog_Banner_2026.png" /></span></P><P>&nbsp;</P><P class="lia-align-right" style="text-align : right;"><FONT color="#FF0000">last updated: 2026-03-04</FONT></P><H1 id="toc-hId-1661540864"><FONT color="#000000">Why was this blog post created?<BR /></FONT></H1><P class="lia-align-left" style="text-align : left;"><FONT color="#000000">The area of partitioning is a very complex topic when it comes to the SAP HANA database (in case of all kind of database). 
The blog post aim is to summarize the facts and explain the different types on high level (pro, cons). Deeper understanding is available in the referenced documents. A high-level understanding is a critical step in order to choose the correct and most effective partitioning in your actual need/situation.</FONT></P><H1 id="toc-hId-1465027359"><FONT color="#000000">What are the differences?</FONT></H1><P class="lia-align-left" style="text-align : left;"><FONT color="#000000">The following tables contains the differences.&nbsp;Please note that this is a general categorization, both HASH and RANGE type of partitioning method are more complex and intricate than this table mentions. The table uses a simplified categorization.<BR /></FONT></P><TABLE border="1" width="100%"><TBODY><TR><TD width="33.333333333333336%">&nbsp;</TD><TD width="33.333333333333336%"><STRONG>HASH</STRONG></TD><TD width="33.333333333333336%"><STRONG>RANGE</STRONG></TD></TR><TR><TD width="33.333333333333336%"><STRONG>Amount of data</STRONG></TD><TD width="33.333333333333336%">Typically less data.</TD><TD width="33.333333333333336%">Typically huge amount of data.</TD></TR><TR><TD width="33.333333333333336%"><STRONG>When to use?</STRONG></TD><TD width="33.333333333333336%">Solves the 2 billion limit.</TD><TD width="33.333333333333336%">Next to partitioning (2 billion limit) there are further requirements (e.g. performance needs).</TD></TR><TR><TD width="33.333333333333336%"><STRONG>Focus</STRONG></TD><TD width="33.333333333333336%">Focus is on the partitioning itself.</TD><TD width="33.333333333333336%">Focus is not only the partitioning but further requirements also.</TD></TR><TR><TD width="33.333333333333336%"><STRONG>Implementation</STRONG></TD><TD width="33.333333333333336%">Easier setup.</TD><TD width="33.333333333333336%">Preparation and planning necessary, more complex task.</TD></TR><TR><TD><STRONG>Implementation Type</STRONG></TD><TD>Mainly technical task.&nbsp;Planning is also necessary and depending on selectivity.</TD><TD>More deeper and detailed planning &amp; technical tasks.</TD></TR><TR><TD><STRONG>Example</STRONG></TD><TD><A href="https://me.sap.com/notes/3281773" target="_blank" rel="noopener noreferrer">3281773</A> - What cause the non uniform data distribution in HASH partitioned table in SAP HANA?</TD><TD><A href="https://me.sap.com/notes/2289491" target="_blank" rel="noopener noreferrer">2289491</A> - Best Practices for Partitioning of Finance Tables</TD></TR><TR><TD width="33.333333333333336%"><STRONG>Involvement</STRONG></TD><TD width="33.333333333333336%">Mainly a DBA task but data owner participation can be required.</TD><TD width="33.333333333333336%">Need further teams (e.g. 
application knowledge).</TD></TR></TBODY></TABLE><P class="lia-align-left" style="text-align : left;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SAP_Community_Blog_Image_HASH_vs_RANGE.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/377752i8CEBF5F7037B7EF1/image-size/large?v=v2&amp;px=999" role="button" title="SAP_Community_Blog_Image_HASH_vs_RANGE.png" alt="SAP_Community_Blog_Image_HASH_vs_RANGE.png" /></span></P><P class="lia-align-left" style="text-align : left;">&nbsp;</P><H1 id="toc-hId-1268513854"><FONT color="#000000">Where can I find the most important information about the topic of SAP HANA partitioning?</FONT></H1><P><FONT color="#000000">SAP Knowledge Base Article(s):</FONT></P><UL><LI><SPAN class=""><A href="https://launchpad.support.sap.com/#/notes/2044468" target="_blank" rel="noopener noreferrer">2044468</A><SPAN>&nbsp;</SPAN>- FAQ: SAP HANA Partitioning</SPAN></LI><LI><SPAN class=""><A href="https://me.sap.com/notes/3146645" target="_blank" rel="noopener noreferrer">3146645</A> - What is the best approach in partitioning tables on SAP HANA?</SPAN></LI><LI><A href="https://launchpad.support.sap.com/#/notes/3307500" target="_blank" rel="noopener noreferrer">3307500</A><SPAN>&nbsp;</SPAN>- How to decide which partitioning type and column(s) should been used to partition a table in SAP HANA?</LI></UL><P class="lia-align-left" style="text-align : left;"><FONT color="#000000">SAP Community Article:</FONT></P><UL><LI><FONT color="#000000"><A class="" href="https://blogs.sap.com/2023/01/06/collected-information-regarding-partitioning-in-sap-hana-with-examples/" target="_blank" rel="noopener noreferrer">Collected information regarding partitioning in SAP HANA (with examples)</A></FONT></LI></UL><H1 id="toc-hId-1072000349"><SPAN class="">Complexity</SPAN></H1><P><SPAN class="">The complexity is also indicated by the fact that all relevant information can be found in the central documentation (<A href="https://launchpad.support.sap.com/#/notes/2044468" target="_blank" rel="noopener noreferrer">2044468</A>), however, based on experience, further explanations are needed for the interpretation and use of this documentation, which is why additional supporting documents were created (e.g.&nbsp;<A href="https://me.sap.com/notes/3146645" target="_blank" rel="noopener noreferrer">3146645</A>&nbsp;or&nbsp;</SPAN><A href="https://launchpad.support.sap.com/#/notes/3307500" target="_blank" rel="noopener noreferrer">3307500</A><SPAN class="">). 
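</SPAN></P><P><SPAN class="">To make the distinction concrete, the basic DDL for the two types is sketched below. This is only an illustration: the table, columns, partition count, and range boundaries are placeholders, and the referenced notes should guide the real column selection and sizing.</SPAN></P><pre class="lia-code-sample language-sql"><code>-- Illustrative sketch only: names, partition count and boundaries are placeholders.

-- HASH: rows are distributed evenly over n partitions; the typical goal is
-- simply to stay below the 2-billion-row limit per partition.
CREATE COLUMN TABLE "ZSALES_HASH" (
  "DOC_ID" NVARCHAR(10), "ITEM" INTEGER, "POSTING_DATE" DATE, "AMOUNT" DECIMAL(15,2))
  PARTITION BY HASH ("DOC_ID") PARTITIONS 8;

-- RANGE: rows are placed by a business criterion (here: time), which requires
-- planning but also supports further requirements such as pruning or archiving.
CREATE COLUMN TABLE "ZSALES_RANGE" (
  "DOC_ID" NVARCHAR(10), "ITEM" INTEGER, "POSTING_DATE" DATE, "AMOUNT" DECIMAL(15,2))
  PARTITION BY RANGE ("POSTING_DATE") (
    PARTITION '2024-01-01' &lt;= VALUES &lt; '2025-01-01',
    PARTITION '2025-01-01' &lt;= VALUES &lt; '2026-01-01',
    PARTITION OTHERS);</code></pre><P><SPAN class="">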
Furthermore, the Blog listed above was created to present specific examples and to compile the supporting documents.</SPAN></P><H1 id="toc-hId-875486844"><SPAN class="">What is the conclusion?</SPAN></H1><P><SPAN class="">Understanding is the key element to do the proper partitioning of the table based on different requirements.&nbsp;This document explains a top down approach and gives an overview about the topic and the necessary documentations.</SPAN></P><H1 id="toc-hId-675964508" id="toc-hId-678973339"><SPAN>Other articles</SPAN></H1><P><span class="lia-unicode-emoji" title=":writing_hand:">✍️</span>&nbsp;<A href="https://blogs.sap.com/2023/03/29/where-can-i-find-knowledge-and-information-belongs-to-sap-hana/" target="_blank" rel="noopener noreferrer">Where can I find knowledge and information belongs to SAP HANA?</A><BR /><span class="lia-unicode-emoji" title=":writing_hand:">✍️</span>&nbsp;<A href="https://blogs.sap.com/2023/06/02/where-can-i-find-information-about-the-available-tools-for-sap-hana-all-types-of-use/" target="_blank" rel="noopener noreferrer">Where can I find information about the available tools for SAP HANA (all types of use)?</A></P><H1 id="toc-hId-479451003" id="toc-hId-482459834">Do you have further questions?</H1><P>Please do not hesitate to contact me if you have question or observation regarding the article.<BR />Q&amp;A link for SAP HANA:<SPAN>&nbsp;</SPAN><A href="https://answers.sap.com/tags/73554900100700000996" target="_blank" rel="noopener noreferrer">https://answers.sap.com/tags/73554900100700000996</A>&nbsp;</P><H1 id="toc-hId-282937498" id="toc-hId-285946329">Contribution</H1><P>If you find any missing information belongs to the topic, please let me know. I am happy to add the new content. My intention is to maintain the content continuously to keep the info up-to-date.</P><P><FONT color="#999999"><STRONG>Release Information</STRONG></FONT></P><TABLE width="100%" cellspacing="1"><TBODY><TR><TD height="58px">Release Date</TD><TD height="58px">Description</TD></TR><TR><TD height="30px">2026.03.04</TD><TD height="30px">First/initial Release of the SAP Blog Post documentation (Technical Article).</TD></TR></TBODY></TABLE> 2026-03-04T08:09:16.242000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/enterprise-ai-in-action-sap-joule-delivers-where-others-don-t/ba-p/14341438 Enterprise AI in Action: SAP Joule Delivers Where Others Don’t 2026-03-04T12:33:21.937000+01:00 ManishaTadse https://community.sap.com/t5/user/viewprofilepage/user-id/1788086 <P><FONT face="arial,helvetica,sans-serif">AI is becoming part of everyday enterprise work, from drafting emails to designing complex processes. While ChatGPT and Gemini can generate functional specs and ABAP code, the real question is: “<U><STRONG>Why use SAP Joule?</STRONG>”</U></FONT><BR /><FONT face="arial,helvetica,sans-serif">The difference is context – Joule is embedded in S/4HANA, understanding system configuration, business rules, workflows, and live data to deliver outputs that are accurate, compliant, and ready for real-world use. Unlike general-purpose AI, it doesn’t just produce well-written text; it grasps the technical and business landscape to ensure every recommendation aligns with your enterprise environment.</FONT><BR /><FONT face="arial,helvetica,sans-serif">To understand this difference in practical terms, let’s look at two common S/4HANA scenarios: <STRONG>a Purchase Order compliance check </STRONG>and<STRONG> a Sales Order credit validation</STRONG>. 
On the surface, both appear straightforward – define the requirement, identify the enhancement spot, document the logic, and generate the necessary ABAP code.</FONT><BR /><FONT face="arial,helvetica,sans-serif">A general-purpose AI tool can certainly help structure the functional specification and outline the technical approach. However, the real challenge in enterprise projects is not drafting the document – it is ensuring that the design aligns with the system’s actual configuration, enhancement framework, workflow setup, authorization model, and business data. This is where the distinction between generic AI and embedded, SAP-native intelligence becomes clear.</FONT></P><P><STRONG><FONT face="arial,helvetica,sans-serif">1. Purchase Order Compliance Check</FONT></STRONG><BR /><FONT face="arial,helvetica,sans-serif"><U><STRONG>Business Requirement:</STRONG></U> Block Purchase Order creation if:</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Vendor is not approved in a custom Z-table</FONT><BR /><FONT face="arial,helvetica,sans-serif">• PO value exceeds 1,000,000</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Purchasing group = “A01”</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Mandatory attachment is missing</FONT><BR /><U><STRONG><FONT face="arial,helvetica,sans-serif">Using Chatgpt or Gemini:</FONT></STRONG></U><BR /><FONT face="arial,helvetica,sans-serif">• Enhancement via BAdI ME_PROCESS_PO_CUST</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Logic to check vendor approval, PO value, purchasing group, and attachment</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Error messages and test cases</FONT><BR /><FONT face="arial,helvetica,sans-serif"><U><STRONG>Pros:</STRONG></U> Clear FS structure, general logic explained</FONT><BR /><FONT face="arial,helvetica,sans-serif"><U><STRONG>Cons:</STRONG></U> Generic — it won’t know your actual Z-table structure, workflow configuration, or document type setup. You must feed these details manually.</FONT><BR /><U><STRONG><FONT face="arial,helvetica,sans-serif">Using SAP Joule:</FONT></STRONG></U><BR /><FONT face="arial,helvetica,sans-serif">• Identifies the correct BAdI or workflow in your S/4 system</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Pulls actual vendor master and custom table fields</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Checks your approval strategy and document type configuration</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Generates FS logic aligned with your live system setup</FONT><BR /><FONT face="arial,helvetica,sans-serif"><U><STRONG>Pros:</STRONG></U> System-grounded, accurate, reduces rework, supports compliance</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>2. 
Sales Order Credit Validation</STRONG></FONT><BR /><FONT face="arial,helvetica,sans-serif"><U><STRONG>Business Requirement</STRONG>:</U> Prevent Sales Order creation if a customer’s credit limit is exceeded.</FONT><BR /><U><FONT face="arial,helvetica,sans-serif"><STRONG>Using Chatgpt or Gemini:</STRONG></FONT></U><BR /><FONT face="arial,helvetica,sans-serif">• Identify a relevant BAdI or user exit</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Check credit exposure against the limit</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Trigger an error message if the limit is exceeded</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Suggest test cases for validation</FONT><BR /><FONT face="arial,helvetica,sans-serif"><U><STRONG>Pros:</STRONG></U> Clean structure, well-written, and easy to understand</FONT><BR /><FONT face="arial,helvetica,sans-serif"><U><STRONG>Cons:</STRONG></U> Generic — it doesn’t know your actual credit control area, company code setup, or the exact fields and BAdIs in your S/4HANA system. All system-specific details must be manually added.</FONT><BR /><U><FONT face="arial,helvetica,sans-serif"><STRONG>Using SAP Joule:</STRONG></FONT></U><BR /><FONT face="arial,helvetica,sans-serif">• Detect the active BAdI or enhancement spot in your system</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Pull the correct credit control configuration and field names</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Align logic with company code settings and authorization roles</FONT><BR /><FONT face="arial,helvetica,sans-serif">• Suggest tests based on your live system data</FONT><BR /><FONT face="arial,helvetica,sans-serif"><U><STRONG>Pros:</STRONG></U> Context-aware, system-aligned, lower risk of errors</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Key Differentiators</STRONG></FONT></P><OL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Verified SAP Knowledge Base:</STRONG> Responses are grounded in SAP's official documentation and verified sources.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>SAP-Specific Methodology: </STRONG>Understands SAP implementation approaches, configuration requirements, and technical constraints.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Industry-Specific Compliance: </STRONG>Aligned with SAP's consulting standards and best practices.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>SAP Ecosystem Integration: </STRONG>Deep understanding of how SAP modules, technologies, and processes interconnect.</FONT></LI></OL><P><STRONG>Comparison Table</STRONG></P><TABLE><TBODY><TR><TD width="127.865px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>Capability</STRONG></FONT></P></TD><TD width="149.625px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>SAP Joule for Consultants</STRONG></FONT></P></TD><TD width="132.76px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>ChatGPT</STRONG></FONT></P></TD><TD width="110.729px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>Gemini</STRONG></FONT></P></TD><TD width="163.688px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>Notes</STRONG></FONT></P></TD></TR><TR><TD width="127.865px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>SAP Domain Expertise</STRONG></FONT></P></TD><TD width="149.625px"><P><FONT face="arial,helvetica,sans-serif">Deep, verified SAP knowledge from official sources</FONT></P></TD><TD width="132.76px"><P><FONT face="arial,helvetica,sans-serif">General SAP knowledge from public training 
data</FONT></P></TD><TD width="110.729px"><P><FONT face="arial,helvetica,sans-serif">General SAP knowledge with real-time search</FONT></P></TD><TD width="163.688px"><P><FONT face="arial,helvetica,sans-serif">Joule is trained on SAP-verified documentation</FONT></P></TD></TR><TR><TD width="127.865px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>Template Generation</STRONG></FONT></P></TD><TD width="149.625px"><P><FONT face="arial,helvetica,sans-serif">SAP-specific templates and methodologies</FONT></P></TD><TD width="132.76px"><P><FONT face="arial,helvetica,sans-serif">Generic business templates</FONT></P></TD><TD width="110.729px"><P><FONT face="arial,helvetica,sans-serif">Generic business templates with Google integration</FONT></P></TD><TD width="163.688px"><P><FONT face="arial,helvetica,sans-serif">Templates follow SAP best practices and standards</FONT></P></TD></TR><TR><TD width="127.865px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>Functional Specification Writing</STRONG></FONT></P></TD><TD width="149.625px"><P><FONT face="arial,helvetica,sans-serif">SAP implementation-focused with proper technical structure</FONT></P></TD><TD width="132.76px"><P><FONT face="arial,helvetica,sans-serif">General functional spec approaches</FONT></P></TD><TD width="110.729px"><P><FONT face="arial,helvetica,sans-serif">Structured but generic approaches</FONT></P></TD><TD width="163.688px"><P><FONT face="arial,helvetica,sans-serif">Joule understands SAP-specific requirements and constraints</FONT></P></TD></TR><TR><TD width="127.865px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>Industry Standards Compliance</STRONG></FONT></P></TD><TD width="149.625px"><P><FONT face="arial,helvetica,sans-serif">SAP best practices and implementation standards</FONT></P></TD><TD width="132.76px"><P><FONT face="arial,helvetica,sans-serif">General industry standards</FONT></P></TD><TD width="110.729px"><P><FONT face="arial,helvetica,sans-serif">General standards with research capabilities</FONT></P></TD><TD width="163.688px"><P><FONT face="arial,helvetica,sans-serif">Aligned with SAP's consulting methodologies</FONT></P></TD></TR><TR><TD width="127.865px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>Technical Documentation Structure</STRONG></FONT></P></TD><TD width="149.625px"><P><FONT face="arial,helvetica,sans-serif">SAP-aligned formats (ABAP, configuration paths, transaction codes)</FONT></P></TD><TD width="132.76px"><P><FONT face="arial,helvetica,sans-serif">Generic technical documentation</FONT></P></TD><TD width="110.729px"><P><FONT face="arial,helvetica,sans-serif">Structured technical docs</FONT></P></TD><TD width="163.688px"><P><FONT face="arial,helvetica,sans-serif">Joule provide SAP-specific technical details and paths</FONT></P></TD></TR><TR><TD width="127.865px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>Integration Knowledge</STRONG></FONT></P></TD><TD width="149.625px"><P><FONT face="arial,helvetica,sans-serif">Deep SAP ecosystem and module integration expertise</FONT></P></TD><TD width="132.76px"><P><FONT face="arial,helvetica,sans-serif">Broad but surface-level SAP integration knowledge</FONT></P></TD><TD width="110.729px"><P><FONT face="arial,helvetica,sans-serif">General integration knowledge</FONT></P></TD><TD width="163.688px"><P><FONT face="arial,helvetica,sans-serif">Joule understand SAP module interdependencies and best practices</FONT></P></TD></TR><TR><TD width="127.865px"><P><FONT face="arial,helvetica,sans-serif"><STRONG>Business Process 
Understanding</STRONG></FONT></P></TD><TD width="149.625px"><P><FONT face="arial,helvetica,sans-serif">SAP module-specific processes and workflows</FONT></P></TD><TD width="132.76px"><P><FONT face="arial,helvetica,sans-serif">General business process knowledge</FONT></P></TD><TD width="110.729px"><P><FONT face="arial,helvetica,sans-serif">General business knowledge with research</FONT></P></TD><TD width="163.688px"><P><FONT face="arial,helvetica,sans-serif">Joule trained on SAP business process documentation</FONT></P></TD></TR></TBODY></TABLE><P><FONT face="arial,helvetica,sans-serif">For functional specification writing in SAP contexts, Joule is the superior choice because:</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Accuracy:</STRONG> knowledge comes from verified SAP sources rather than general internet training data.&nbsp;</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Relevance:</STRONG> understands SAP-specific technical requirements, configuration paths, and implementation constraints.&nbsp;</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Structure:</STRONG> can provide proper SAP documentation formats, including transaction codes, customization paths, and technical specifications.&nbsp;</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Best Practices:</STRONG> aligned with SAP's consulting methodologies and implementation standards.</FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif"><STRONG>We’ve seen how generic AI can draft specs and code, but context matters. In your S/4HANA projects, where has AI helped — and where has it fallen short? Share your experiences, challenges, or tips below!</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Source:</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG><A href="https://help.sap.com/docs" target="_blank" rel="noopener noreferrer">SAP Help Portal | SAP Online Help</A></STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG><A href="https://sapit-crossfunctions-prod-ragdoll.eu10.sapdas.cloud.sap/joule" target="_blank" rel="noopener nofollow noreferrer">Joule</A></STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Other Blogs:</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-conversational-ai-to-business-ai-why-sap-joule-changes-enterprise/ba-p/14332217" target="_blank">From Conversational AI to Business AI: Why SAP Jou... 
- SAP Community</A></STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/system-conversion-to-sap-s-4hana/ba-p/14322484#M188143" target="_blank">System Conversion to SAP S/4HANA - SAP Community</A></STRONG></FONT></P><P>&nbsp;</P> 2026-03-04T12:33:21.937000+01:00 https://community.sap.com/t5/artificial-intelligence-blogs-posts/knowledge-graphs-on-sap-hana-from-zero-to-enterprise-rag-with-triple-store/ba-p/14342320 Knowledge Graphs on SAP HANA: From Zero to Enterprise RAG with Triple Store 2026-03-05T11:37:21.007000+01:00 Kunal__Kumar https://community.sap.com/t5/user/viewprofilepage/user-id/1692145 <H2 id="introduction" id="toc-hId-1791370211">Introduction</H2><P>If you’ve ever worked with Retrieval-Augmented Generation (RAG) systems, you know the drill, you embed your documents into vectors, store them in a vector database, and at query time, you retrieve the top-K most similar chunks to feed as context to an LLM. It works. But there’s a problem.</P><P><STRONG>Vector search alone is semantically “flat.”</STRONG> It tells you <EM>which chunks look similar</EM> to a query, but it doesn’t understand the <EM>relationships</EM> between entities in your data. It cannot reason about connections, follow chains of facts, or understand how entity A relates to entity B through entity C.</P><P>This is where <STRONG>Knowledge Graphs</STRONG> come in.</P><P>In this blog, I’ll walk you through — from absolute scratch — how i built a Knowledge Graph layer on <STRONG>SAP HANA</STRONG> using a custom <STRONG>Triple Store</STRONG>, and how i combined it with vector embedding-based retrieval to create a <STRONG>Hybrid RAG system</STRONG> that gives our AI agents both <EM>relevance</EM> (from vectors) and <EM>reasoning</EM> (from graphs).</P><P>Everything you see is from POC codebase. Let’s begin.</P><HR /><H2 id="table-of-contents" id="toc-hId-1594856706">Table of Contents</H2><OL><LI><A href="#1-what-is-a-knowledge-graph" target="_blank" rel="noopener nofollow noreferrer">What is a Knowledge Graph?</A></LI><LI><A href="#2-what-are-rdf-triplets" target="_blank" rel="noopener nofollow noreferrer">What are RDF Triplets?</A></LI><LI><A href="#3-why-do-we-need-knowledge-graphs-in-rag" target="_blank" rel="noopener nofollow noreferrer">Why Do We Need Knowledge Graphs in RAG?</A></LI><LI><A href="#4-extracting-triplets-from-text" target="_blank" rel="noopener nofollow noreferrer">Extracting Triplets from Text</A><UL><LI>4a. Rule-Based Extraction with spaCy</LI><LI>4b. 
LLM-Based Extraction</LI></UL></LI><LI><A href="#5-the-multi-agent-triplet-processing-pipeline" target="_blank" rel="noopener nofollow noreferrer">The Multi-Agent Triplet Processing Pipeline</A></LI><LI><A href="#6-storing-triplets-in-sap-hana-triple-store" target="_blank" rel="noopener nofollow noreferrer">Storing Triplets in SAP HANA Triple Store</A></LI><LI><A href="#7-the-complete-ingestion-pipeline" target="_blank" rel="noopener nofollow noreferrer">The Complete Ingestion Pipeline</A></LI><LI><A href="#8-querying-the-knowledge-graph" target="_blank" rel="noopener nofollow noreferrer">Querying the Knowledge Graph</A></LI><LI><A href="#9-hybrid-retrieval-vector--graph" target="_blank" rel="noopener nofollow noreferrer">Hybrid Retrieval: Vector + Graph</A></LI><LI><A href="#10-the-full-rag-pipeline" target="_blank" rel="noopener nofollow noreferrer">The Full RAG Pipeline</A></LI><LI><A href="#11-exposing-via-mcp-for-ai-agents" target="_blank" rel="noopener nofollow noreferrer">Exposing via MCP for AI Agents</A></LI><LI><A href="#12-architecture-summary" target="_blank" rel="noopener nofollow noreferrer">Architecture Summary</A></LI><LI><A href="#13-conclusion" target="_blank" rel="noopener nofollow noreferrer">Conclusion</A></LI></OL><HR /><H2 id="what-is-a-knowledge-graph" id="toc-hId-1398343201">1. What is a Knowledge Graph?</H2><P>A <STRONG>Knowledge Graph</STRONG> is a structured representation of real-world entities and the relationships between them. Think of it as a web of connected facts.</P><P>For example, consider this sentence:</P><BLOCKQUOTE><P>“SAP SE is headquartered in Walldorf, Germany and develops enterprise software.”</P></BLOCKQUOTE><P>A human reading this immediately understands three facts: - <STRONG>SAP SE</STRONG> → <EM>is_headquartered_in</EM> → <STRONG>Walldorf, Germany</STRONG> - <STRONG>SAP SE</STRONG> → <EM>develops</EM> → <STRONG>enterprise software</STRONG> - <STRONG>Walldorf</STRONG> → <EM>is_located_in</EM> → <STRONG>Germany</STRONG></P><P>A Knowledge Graph captures exactly these facts as structured, queryable data. Each fact is stored as a <STRONG>triplet</STRONG> (more on this next).</P><H3 id="why-is-this-different-from-a-regular-database" id="toc-hId-1330912415">Why is this different from a regular database?</H3><P>Feature Relational Database Knowledge Graph</P><TABLE><COLGROUP><COL /><COL /><COL /></COLGROUP><TBODY><TR><TD>Structure</TD><TD>Fixed schema (tables, columns)</TD><TD>Flexible (entities + relationships)</TD></TR><TR><TD>Relationships</TD><TD>JOINs across tables</TD><TD>First-class citizens (edges)</TD></TR><TR><TD>Schema changes</TD><TD>ALTER TABLE needed</TD><TD>Just add new triplets</TD></TR><TR><TD>Query style</TD><TD>SQL with JOINs</TD><TD>Graph traversal / pattern matching</TD></TR><TR><TD>Best for</TD><TD>Structured, predictable data</TD><TD>Connected, evolving knowledge</TD></TR></TBODY></TABLE><P>A Knowledge Graph is <STRONG>schema-flexible</STRONG> — you don’t need to define all possible relationships upfront. You can keep adding new types of facts without changing any table structure.</P><HR /><H2 id="what-are-rdf-triplets" id="toc-hId-1005316191">2. What are RDF Triplets?</H2><P>RDF stands for <STRONG>Resource Description Framework</STRONG>. 
It’s a standard model for representing knowledge as <STRONG>triplets</STRONG> — the atomic unit of a Knowledge Graph.</P><P>Every triplet has three parts:</P><PRE><CODE>(Subject, Predicate, Object)</CODE></PRE><UL><LI><STRONG>Subject</STRONG>: The entity we’re talking about (e.g., “SAP SE”)</LI><LI><STRONG>Predicate</STRONG>: The relationship or property (e.g., “is_headquartered_in”)</LI><LI><STRONG>Object</STRONG>: The value or target entity (e.g., “Walldorf”)</LI></UL><H3 id="visual-representation" id="toc-hId-937885405">Visual Representation</H3><PRE><CODE> [SAP SE] ---is_headquartered_in--→ [Walldorf] | | |---develops--→ [Enterprise Software] | | [Germany] ←--is_located_in---┘</CODE></PRE><P>Each arrow is a triplet. A collection of triplets forms a <STRONG>graph</STRONG> — hence “Knowledge Graph.”</P><H3 id="why-triplets" id="toc-hId-741371900">Why Triplets?</H3><P>Triplets are powerful because they are: 1. <STRONG>Atomic</STRONG> — Each triplet is a self-contained fact 2. <STRONG>Composable</STRONG> — You can combine millions of triplets into a unified graph 3. <STRONG>Queryable</STRONG> — You can traverse relationships: “What does SAP SE develop?” or “Where is the company headquartered?” 4. <STRONG>Language-agnostic</STRONG> — The structure is universal regardless of source language</P><HR /><H2 id="why-do-we-need-knowledge-graphs-in-rag" id="toc-hId-415775676">3. Why Do We Need Knowledge Graphs in RAG?</H2><P>Let’s understand this with a concrete example.</P><H3 id="the-problem-with-vector-only-rag" id="toc-hId-348344890">The Problem with Vector-Only RAG</H3><P>Imagine your enterprise has documents about employees, projects, and departments. A user asks:</P><BLOCKQUOTE><P>“Who is the project lead for Project Phoenix and which department does that person belong to?”</P></BLOCKQUOTE><P>With <STRONG>vector-only RAG</STRONG>, the system: 1. Embeds the query into a vector 2. Finds the top-K most similar document chunks 3. Hopes that <EM>one single chunk</EM> contains all the information</P><P>But what if: - <STRONG>Chunk A</STRONG> says: “John Smith is the project lead for Project Phoenix” - <STRONG>Chunk B</STRONG> says: “John Smith works in the Cloud Engineering department”</P><P>Vector search might retrieve Chunk A (it’s most similar to the query), but miss Chunk B entirely. 
The LLM then cannot answer the full question.</P><H3 id="how-knowledge-graphs-solve-this" id="toc-hId-151831385">How Knowledge Graphs Solve This</H3><P>With a Knowledge Graph, we have these triplets stored:</P><PRE><CODE>(John Smith, is_project_lead_of, Project Phoenix) (John Smith, works_in, Cloud Engineering) (Cloud Engineering, is_department_of, SAP)</CODE></PRE><P>When the system retrieves Chunk A via vector search and finds “John Smith,” it can <STRONG>traverse the graph</STRONG> to discover that John Smith works in Cloud Engineering — even though that second fact was in a completely different document chunk.</P><H3 id="the-power-of-hybrid-retrieval" id="toc-hId--119913489">The Power of Hybrid Retrieval</H3><P>Capability Vector Search Knowledge Graph Hybrid (Both)</P><TABLE><TBODY><TR><TD>Semantic similarity</TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD><span class="lia-unicode-emoji" title=":cross_mark:">❌</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR><TR><TD>Relationship traversal</TD><TD><span class="lia-unicode-emoji" title=":cross_mark:">❌</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR><TR><TD>Multi-hop reasoning</TD><TD><span class="lia-unicode-emoji" title=":cross_mark:">❌</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR><TR><TD>Handles unseen phrasing</TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD><span class="lia-unicode-emoji" title=":cross_mark:">❌</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR><TR><TD>Entity connections</TD><TD><span class="lia-unicode-emoji" title=":cross_mark:">❌</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span></TD></TR></TBODY></TABLE><P>By combining both, we get <STRONG>the best of both worlds</STRONG>: semantic understanding from vectors AND structured reasoning from graphs.</P><HR /><H2 id="extracting-triplets-from-text" id="toc-hId--23023987">4. Extracting Triplets from Text</H2><P>The first step in building a Knowledge Graph is extracting triplets from unstructured text. i implemented two approaches in our system.</P><H3 id="a.-rule-based-extraction-with-spacy-nlp-approach" id="toc-hId--512940499">4a. 
Rule-Based Extraction with spaCy (NLP Approach)</H3><P>Our first approach uses <STRONG>spaCy</STRONG>, a natural language processing library, to extract Subject-Verb-Object (SVO) patterns from sentences.</P><P>Here’s the code from our <A target="_blank" rel="noopener">triplets_service.py</A>:</P><pre class="lia-code-sample language-python"><code>import re
from typing import List, Tuple

import spacy

_nlp = None


def get_nlp():
    global _nlp
    if _nlp is None:
        _nlp = spacy.load("en_core_web_sm")
    return _nlp


def clean(text: str) -&gt; str:
    """Basic text cleaning function"""
    return re.sub(r"\s+", " ", text).strip()


def extract_svo_spacy(sentence: str):
    nlp = get_nlp()
    doc = nlp(sentence)
    triplets = []
    for token in doc:
        if token.pos_ == "VERB":  # processing every verb
            # subject can be to the left or as a child
            subj = [w for w in token.children if w.dep_ in ("nsubj", "nsubjpass")]
            objs = [w for w in token.children if w.dep_ in ("dobj", "attr", "dative", "oprd")]
            # prepositional objects
            for prep in [w for w in token.children if w.dep_ == "prep"]:
                pobj = [w for w in prep.children if w.dep_ == "pobj"]
                objs.extend(pobj)
            if subj and objs:
                for o in objs:
                    triplets.append((clean(subj[0].text), clean(token.lemma_), clean(o.text)))
    return triplets</code></pre><P>&nbsp;</P><H4 id="how-this-works-step-by-step" id="toc-hId--1002857011">How This Works — Step by Step</H4><OL><LI><STRONG>Load the spaCy model</STRONG> (<CODE>en_core_web_sm</CODE>) — a pre-trained English NLP model</LI><LI><STRONG>Parse the sentence</STRONG> — spaCy tokenizes it and identifies parts of speech (POS) and dependency relations</LI><LI><STRONG>Find every VERB</STRONG> — verbs are potential predicates in our triplets</LI><LI><STRONG>Find subjects</STRONG> — tokens with dependency labels <CODE>nsubj</CODE> (nominal subject) or <CODE>nsubjpass</CODE> (passive nominal subject)</LI><LI><STRONG>Find objects</STRONG> — tokens with dependency labels <CODE>dobj</CODE> (direct object), <CODE>attr</CODE> (attribute), etc.</LI><LI><STRONG>Handle prepositional objects</STRONG> — “works <EM>in</EM> engineering” → the object is “engineering” via the preposition “in”</LI><LI><STRONG>Build triplets</STRONG> — <A target="_blank" rel="noopener">(subject_text, verb_lemma, object_text)</A></LI></OL><P><STRONG>Example</STRONG>:<BR />Input: <CODE>"SAP develops enterprise software"</CODE><BR />Output: <CODE>[("SAP", "develop", "software")]</CODE></P><H4 id="batch-processing" id="toc-hId--1199370516">Batch Processing</H4><P>I also have a batch function in <A target="_blank" rel="noopener">triplets_service.py</A> to process an entire corpus:</P><pre class="lia-code-sample language-python"><code>def extract_corpus_triplets(corpus: List[str]) -&gt; List[List[Tuple[str, str, str]]]:
    """extracting triplets from a list of text chunks"""
    results = []
    for text in corpus:
        sent = text.strip()
        if not sent:
            continue
        triplets = extract_svo_spacy(sent)
        results.append(triplets)
    return results</code></pre><H4 id="limitations-of-rule-based-extraction" id="toc-hId--1395884021">Limitations of Rule-Based Extraction</H4><P>While fast and deterministic, spaCy-based extraction has limitations:</P><UL><LI>Misses <STRONG>implicit relationships</STRONG> (“Headquartered in Walldorf since 1972” — the subject “SAP” isn’t in this fragment)</LI><LI>Produces <STRONG>shallow predicates</STRONG> (just the verb lemma, not semantic relationship names)</LI><LI>Struggles with <STRONG>complex sentences</STRONG> and nested clauses</LI></UL><P>This is why I built a second, more powerful approach.</P><H3 
id="b.-llm-based-extraction-ai-powered-approach" id="toc-hId--1298994519">4b. LLM-Based Extraction (AI-Powered Approach)</H3><P>Our primary extraction method uses a <STRONG>Large Language Model</STRONG> (via SAP AI Core) to understand text semantically and produce high-quality triplets.</P><P>Here’s the code from our <A target="_blank" rel="noopener">knowledge_graph_service.py</A>:</P><pre class="lia-code-sample language-python"><code>async def generate_triplets(text_chunk: str): """ Uses an LLM via the Generative AI Hub SDK to extract RDF triplets from a given text chunk. """ prompt_for_triplets = f""" Extract key factual statements from the text as RDF triplets. STRICT REQUIREMENTS: 1. Return ONLY a valid JSON object 2. Use exactly this format: {{"triplets": [["subject", "predicate", "object"]]}} 3. Each triplet must have exactly 3 elements 4. Use clear, concise predicates (e.g., "is", "has", "uses", "located_in") 5. Maximum 15 triplets per response 6. No explanations, no extra text outside JSON Text: "{text_chunk}" JSON: """ try: response_content = await get_llm_response_async(prompt=prompt_for_triplets) response_json = json.loads(response_content) triplets = response_json.get("triplets", []) # converting the inner lists to tuples for consistency return [tuple(triplet) for triplet in triplets] except json.JSONDecodeError: print("Failed to parse JSON response from LLM.") return [] except Exception as e: print(f"An error occurred: {e}") return []</code></pre><P>&nbsp;</P><H4 id="how-this-works" id="toc-hId--1788911031">How This Works</H4><OL><LI><STRONG>Craft a precise prompt</STRONG> — We instruct the LLM to extract facts as <CODE>[subject, predicate, object]</CODE> arrays in JSON format</LI><LI><STRONG>Call the LLM asynchronously</STRONG> — via <A target="_blank" rel="noopener">get_llm_response_async()</A> which hits SAP AI Core’s inference API</LI><LI><STRONG>Parse the JSON response</STRONG> — extract the <A target="_blank" rel="noopener">triplets</A> array</LI><LI><STRONG>Convert to tuples</STRONG> — for consistency with the rest of our pipeline</LI><LI><STRONG>Handle errors gracefully</STRONG> — JSON parse failures and API errors return empty lists</LI></OL><H4 id="why-llm-based-extraction-is-superior" id="toc-hId--1985424536">Why LLM-Based Extraction is Superior</H4><P>Aspect spaCy (Rule-Based) LLM (AI-Powered)</P><TABLE><COLGROUP><COL /><COL /><COL /></COLGROUP><TBODY><TR><TD>Implicit relationships</TD><TD><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>Misses them</TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>Infers from context</TD></TR><TR><TD>Semantic predicates</TD><TD><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>Raw verb lemmas</TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>Meaningful labels like “is_headquartered_in”</TD></TR><TR><TD>Complex sentences</TD><TD><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>Often fails</TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>Handles well</TD></TR><TR><TD>Consistency</TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>Deterministic</TD><TD><span class="lia-unicode-emoji" title=":warning:">⚠️</span>May vary (hence we validate)</TD></TR><TR><TD>Speed</TD><TD><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>Very fast</TD><TD><span class="lia-unicode-emoji" title=":warning:">⚠️</span>API call overhead</TD></TR><TR><TD>Cost</TD><TD><span 
class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>Free</TD><TD><span class="lia-unicode-emoji" title=":warning:">⚠️</span>LLM API costs</TD></TR></TBODY></TABLE><H3 id="batch-processing-with-async-concurrency" id="toc-hId--1888535034">Batch Processing with Async Concurrency</H3><P>Processing an entire corpus (potentially hundreds of chunks) sequentially would be extremely slow. i use <STRONG><CODE>asyncio.gather</CODE></STRONG> for concurrent processing:</P><pre class="lia-code-sample language-python"><code>async def convert_corpus_to_triplets_async( corpus: list, use_orchestrator: bool = True, ) -&gt; List[List[Tuple[str, str, str]]]: """Convert entire corpus to triplets using async concurrency""" if not corpus: return [] try: if use_orchestrator: orchestrator = TripletOrchestrator() results = await orchestrator.process_corpus(corpus) return results else: # Run all LLM calls concurrently tasks = [generate_triplets(text) for text in corpus] results = await asyncio.gather(*tasks, return_exceptions=True) processed_results = [] for i, result in enumerate(results): if isinstance(result, Exception): print(f"Error processing chunk {i}: {result}") processed_results.append([]) else: processed_results.append(result) return processed_results except Exception as e: print(f"Error in convert_corpus_to_triplets_async: {e}") return [[]] * len(corpus)</code></pre><P>&nbsp;</P><P>Notice the <CODE>use_orchestrator</CODE> flag — when <CODE>True</CODE>, i use our <STRONG>Multi-Agent Pipeline</STRONG> for higher quality triplets. Let’s dive into that next.</P><HR /><H2 id="the-multi-agent-triplet-processing-pipeline" id="toc-hId--1623461841">5. The Multi-Agent Triplet Processing Pipeline</H2><P>Raw LLM output isn’t always perfect. The model might produce inconsistent predicate names, duplicate triplets, or factually questionable relationships. 
To solve this, i built a <STRONG>multi-agent processing pipeline</STRONG> where specialized agents collaborate to refine triplets.</P><H3 id="architecture-overview" id="toc-hId--2113378353">Architecture Overview</H3><PRE><CODE>┌──────────────────────────────────────────────────────────┐ │ TripletOrchestrator │ │ │ │ ┌────────────┐ ┌───────────┐ ┌───────────┐ │ │ │ Analyzer │──→│ Semantic │──→│ Validator │ │ │ │ Agent │ │ Cleaner │ │ Agent │ │ │ └────────────┘ └───────────┘ └───────────┘ │ │ │ │ │ │ │ ┌───────────┐ │ │ │ └─error──→ │ JSON │ │ │ │ │ Repair │ │ │ │ └───────────┘ │ │ │ ┌────────▼────────┐ │ │ │ Aggregator │ │ │ │ Agent │ │ │ └─────────────────┘ │ └──────────────────────────────────────────────────────────┘</CODE></PRE><H3 id="the-state-object" id="toc-hId-1985075438">The State Object</H3><P>All agents share a common state object defined in <A target="_blank" rel="noopener">state_schema.py</A>:</P><pre class="lia-code-sample language-python"><code>class TripletState(BaseModel): """State management for triplet processing pipeline""" raw_text: str initial_triplets: List[List[str]] = [] cleaned_triplets: List[List[str]] = [] validated_triplets: List[List[str]] = [] final_triplets: List[List[str]] = [] # metadata and tracking processing_state: str = "initial" error_messages: List[str] = [] quality_scores: Dict[str, float] = {} retry_count: int = 0 max_retries: int = 3 # agent outputs analyzer_feedback: Optional[str] = None cleaner_feedback: Optional[str] = None validator_feedback: Optional[str] = None</code></pre><P>&nbsp;</P><P>This is a <STRONG>shared state pattern</STRONG> — each agent reads from and writes to specific fields, and the orchestrator controls the flow.</P><H3 id="stage-1-analyzer-agent" id="toc-hId-1788561933">Stage 1: Analyzer Agent</H3><P>The <A target="_blank" rel="noopener">AnalyzerAgent</A> in <A target="_blank" rel="noopener">analyzer_node.py</A> performs the initial extraction with a quality assessment:</P><pre class="lia-code-sample language-python"><code>class AnalyzerAgent: """Analyzes text and extracts initial triplets with quality assessment""" async def process(self, state: TripletState) -&gt; TripletState: try: prompt = self._create_analysis_prompt(state.raw_text) response = await get_llm_response_async(prompt) analysis_result = self._parse_response(response) if analysis_result["success"]: state.initial_triplets = analysis_result["triplets"] state.quality_scores["analyzer"] = analysis_result["quality_score"] state.analyzer_feedback = analysis_result["feedback"] state.processing_state = "analyzed" else: state.error_messages.append(f"Analyzer failed: {analysis_result['error']}") except Exception as e: state.error_messages.append(f"Analyzer exception: {str(e)}") return state</code></pre><P>The analyzer’s prompt asks the LLM to not only extract triplets but also <STRONG>rate the quality</STRONG> (0-1 score) and provide feedback about text complexity. 
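</P>
<P>To make that contract concrete, here is the shape of a parsed analyzer result implied by the keys read in the code above (<CODE>success</CODE>, <CODE>triplets</CODE>, <CODE>quality_score</CODE>, <CODE>feedback</CODE>); the concrete values are invented purely for illustration:</P>
<pre class="lia-code-sample language-python"><code># Hypothetical parsed analyzer output; field names mirror the keys the
# AnalyzerAgent reads above, the values are made-up examples.
analysis_result = {
    "success": True,
    "triplets": [
        ["SAP SE", "is_headquartered_in", "Walldorf"],
        ["SAP SE", "develops", "enterprise software"],
    ],
    "quality_score": 0.85,
    "feedback": "Simple declarative sentences; low ambiguity.",
}</code></pre>
<P>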
This metadata helps downstream agents.</P><H3 id="stage-2-semantic-cleaner-agent" id="toc-hId-1592048428">Stage 2: Semantic Cleaner Agent</H3><P>The <A target="_blank" rel="noopener">SemanticCleanerAgent</A> in <A target="_blank" rel="noopener">semantic_cleaner_agent.py</A> standardizes the triplets:</P><pre class="lia-code-sample language-python"><code>class SemanticCleanerAgent: """Cleans and standardizes triplets for consistency""" def _create_cleaning_prompt(self, triplets: List[List[str]]) -&gt; str: triplets_str = json.dumps(triplets) return f""" Clean and standardize the following triplets for better semantic consistency. CLEANING TASKS: 1. Normalize entity names (remove extra spaces, fix capitalization) 2. Standardize predicates (use consistent naming like "is_a", "has_property", "located_in") 3. Remove duplicate or very similar triplets 4. Fix obvious semantic issues 5. Ensure subjects and objects are meaningful entities REQUIREMENTS: - Maintain factual accuracy - Use snake_case for predicates - Remove triplets with generic or meaningless entities Original triplets: {triplets_str} """</code></pre><P>&nbsp;</P><P>This agent ensures that predicates like <CODE>"is located in"</CODE>, <CODE>"located_in"</CODE>, and <CODE>"is_in"</CODE> all become a consistent <CODE>"located_in"</CODE>. This consistency is critical for reliable graph queries later.</P><H3 id="stage-3-validator-agent" id="toc-hId-1395534923">Stage 3: Validator Agent</H3><P>The <A target="_blank" rel="noopener">TripletValidatorAgent</A> in <A target="_blank" rel="noopener">triplet_validator_agent.py</A> cross-checks triplets against the original text:</P><pre class="lia-code-sample language-python"><code>class TripletValidatorAgent: """Validates triplets for factual accuracy and relevance""" def _create_validation_prompt(self, triplets, original_text): return f""" Validate the following triplets against the original text for factual accuracy. VALIDATION CRITERIA: 1. Factual accuracy: Are the relationships stated correctly? 2. Semantic validity: Do the predicates make sense? 3. Completeness: Are important facts missing? 4. Consistency: Are there contradictions? 5. Relevance: Are all triplets relevant to the source text? 
Original text: "{original_text}" Triplets to validate: {json.dumps(triplets)} """</code></pre><P>&nbsp;</P><P>This is a crucial <STRONG>fact-checking step</STRONG> — the validator has access to both the triplets AND the original text, so it can verify that no hallucinated facts slipped through.</P><H3 id="stage-4-aggregator-agent" id="toc-hId-1199021418">Stage 4: Aggregator Agent</H3><P>The <A target="_blank" rel="noopener">AggregatorAgent</A> in <A target="_blank" rel="noopener">aggregator_node.py</A> selects the best output and calculates an overall quality score:</P><pre class="lia-code-sample language-python"><code>class AggregatorAgent: """Aggregates results from all agents and determines final output""" def _select_best_triplets(self, state: TripletState) -&gt; List[List[str]]: """Select the best quality triplets from processing pipeline""" if state.validated_triplets: return state.validated_triplets elif state.cleaned_triplets: return state.cleaned_triplets elif state.initial_triplets: return state.initial_triplets else: return [] def _calculate_overall_quality(self, state: TripletState) -&gt; float: """Weighted average quality score""" weights = { "analyzer": 0.3, "cleaner": 0.3, "validator": 0.4 # validator has higher weight } weighted_sum = 0.0 total_weight = 0.0 for agent, weight in weights.items(): if agent in state.quality_scores: weighted_sum += state.quality_scores[agent] * weight total_weight += weight return weighted_sum / total_weight if total_weight &gt; 0 else 0.0</code></pre><P>&nbsp;</P><P>Notice the <STRONG>graceful degradation</STRONG> — if validation fails, it falls back to cleaned triplets; if cleaning fails, it uses the initial triplets. The system never completely fails.</P><H3 id="the-orchestrator" id="toc-hId-1002507913">The Orchestrator</H3><P>The <A target="_blank" rel="noopener">TripletOrchestrator</A> in <A target="_blank" rel="noopener">orchestrator.py</A> wires everything together with retry logic:</P><pre class="lia-code-sample language-python"><code>class TripletOrchestrator: def __init__(self): self.analyzer = AnalyzerAgent() self.cleaner = SemanticCleanerAgent() self.validator = TripletValidatorAgent() self.aggregator = AggregatorAgent() self.json_repair = JSONRepairAgent() async def preprocess_text_chunk(self, text_chunk: str) -&gt; List[Tuple[str, str, str]]: state = TripletState(raw_text=text_chunk) try: # Stage 1: Initial analysis (with retries) while True: state = await self.analyzer.process(state) if not self._should_retry(state): break state.retry_count += 1 # Stage 2: Semantic cleaning if state.initial_triplets and not self._has_critical_errors(state): while True: state = await self.cleaner.process(state) if not self._should_retry(state): break state.retry_count += 1 # Stage 3: Validation if (state.cleaned_triplets or state.initial_triplets) and not self._has_critical_errors(state): while True: state = await self.validator.process(state) if not self._should_retry(state): break state.retry_count += 1 # Stage 4: Aggregation (no retries needed) state = self.aggregator.process(state) return [tuple(triplet) for triplet in state.final_triplets] except Exception as e: return []</code></pre><P>&nbsp;</P><P>Each stage has <STRONG>retry logic</STRONG> — if an agent fails (e.g., LLM returns malformed JSON), the orchestrator retries up to <CODE>max_retries</CODE> (3) times before moving to the next stage.</P><HR /><H2 id="storing-triplets-in-sap-hana-triple-store" id="toc-hId-1099397415">6. 
Storing Triplets in SAP HANA Triple Store</H2><P>Now that we can extract high-quality triplets, we need somewhere to store them. We built a <STRONG>Triple Store</STRONG> as a column table in SAP HANA.</P><H3 id="table-schema" id="toc-hId-609480903">Table Schema</H3><P>From our <A target="_blank" rel="noopener">hana_repository.py</A>, here’s the table creation:</P><pre class="lia-code-sample language-python"><code>def ensure_triple_store(): """Create TRIPLE STORE table and indexes if they don't exist""" conn = get_hana_db() try: # Check if table exists exists_sql = """ SELECT 1 FROM SYS.TABLES WHERE TABLE_NAME = 'TRIPLE_STORE' AND SCHEMA_NAME = CURRENT_SCHEMA """ result = conn.sql(exists_sql).collect() if not result.empty: return # Create the table create_sql = """ CREATE COLUMN TABLE TRIPLE_STORE ( ID BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, EMB_REF_ID NVARCHAR(36) NOT NULL, CHUNK_INDEX INTEGER, SUBJECT NVARCHAR(500), PREDICATE NVARCHAR(200), OBJECT NVARCHAR(1000), CREATED_AT TIMESTAMP DEFAULT CURRENT_UTCTIMESTAMP, FOREIGN KEY (EMB_REF_ID) REFERENCES DOCUMENTS_EMBEDDING(ref_id) ) """ conn.execute_sql(create_sql) # Create indexes for fast lookups conn.execute_sql("CREATE INDEX IDX_TRIPLE_SUBJ ON TRIPLE_STORE (SUBJECT)") conn.execute_sql("CREATE INDEX IDX_TRIPLE_PRED ON TRIPLE_STORE (PREDICATE)") conn.execute_sql("CREATE INDEX IDX_TRIPLE_OBJ ON TRIPLE_STORE (OBJECT)") conn.execute_sql("CREATE INDEX IDX_TRIPLE_REF ON TRIPLE_STORE (EMB_REF_ID, CHUNK_INDEX)") except Exception as e: print(f"error ensuring triple store: {e}")</code></pre><H4 id="toc-hId-287748082">&nbsp;</H4><H4 id="understanding-the-schema" id="toc-hId-91234577">Understanding the Schema</H4><P>Column Type Purpose</P><TABLE><COLGROUP><COL /><COL /><COL /></COLGROUP><TBODY><TR><TD><CODE>ID</CODE></TD><TD>BIGINT (auto)</TD><TD>Unique triplet identifier</TD></TR><TR><TD><CODE>EMB_REF_ID</CODE></TD><TD>NVARCHAR(36)</TD><TD><STRONG>Links to the embedding table</STRONG> — this is the bridge between vectors and graph</TD></TR><TR><TD><CODE>CHUNK_INDEX</CODE></TD><TD>INTEGER</TD><TD>Which chunk within a document this triplet came from</TD></TR><TR><TD><CODE>SUBJECT</CODE></TD><TD>NVARCHAR(500)</TD><TD>The subject entity</TD></TR><TR><TD><CODE>PREDICATE</CODE></TD><TD>NVARCHAR(200)</TD><TD>The relationship</TD></TR><TR><TD><CODE>OBJECT</CODE></TD><TD>NVARCHAR(1000)</TD><TD>The object entity</TD></TR><TR><TD><CODE>CREATED_AT</CODE></TD><TD>TIMESTAMP</TD><TD>When the triplet was created</TD></TR></TBODY></TABLE><P>The <STRONG><CODE>EMB_REF_ID</CODE> foreign key</STRONG> is the critical design decision. It links every triplet back to its source embedding in the <CODE>DOCUMENTS_EMBEDDING</CODE> table. 
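</P>
<P>As a quick, illustrative sketch of that link (the helper below is not part of the original codebase; it only reuses <CODE>get_hana_db()</CODE> and the <CODE>conn.sql(...).collect()</CODE> pattern shown elsewhere in this post), a chunk’s text can be joined with its triplets via the shared reference ID:</P>
<pre class="lia-code-sample language-python"><code>def get_chunk_with_triplets(ref_id: str):
    """Sketch: join one chunk (DOCUMENTS_EMBEDDING) with its triplets (TRIPLE_STORE)."""
    conn = get_hana_db()
    # ref_id is assumed to be a trusted UUID here; parameterize in real code
    sql = f"""
        SELECT d.DOCUMENT_TEXT, t.SUBJECT, t.PREDICATE, t.OBJECT
        FROM DOCUMENTS_EMBEDDING d
        JOIN TRIPLE_STORE t ON t.EMB_REF_ID = d.REF_ID
        WHERE d.REF_ID = '{ref_id}'
        ORDER BY t.CHUNK_INDEX, t.ID
    """
    df = conn.sql(sql).collect()
    return [(r["DOCUMENT_TEXT"], r["SUBJECT"], r["PREDICATE"], r["OBJECT"])
            for _, r in df.iterrows()]</code></pre>
<P>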
This means: - Given a vector search result, we can find its associated triplets - Given a triplet, we can find the original text chunk - The two systems (vector + graph) are <STRONG>tightly integrated</STRONG></P><H4 id="indexes-for-performance" id="toc-hId--105278928">Indexes for Performance</H4><P>i created&nbsp;<STRONG>four indexes</STRONG> for fast querying: - <CODE>IDX_TRIPLE_SUBJ</CODE> — look up triplets by subject - <CODE>IDX_TRIPLE_PRED</CODE> — look up triplets by predicate - <CODE>IDX_TRIPLE_OBJ</CODE> — look up triplets by object - <CODE>IDX_TRIPLE_REF</CODE> — look up triplets by source embedding (composite index)</P><H3 id="the-embedding-table-for-reference" id="toc-hId--8389426">The Embedding Table (for reference)</H3><P>The companion <CODE>DOCUMENTS_EMBEDDING</CODE> table stores the vector embeddings:</P><pre class="lia-code-sample language-python"><code>CREATE COLUMN TABLE DOCUMENTS_EMBEDDING ( id INTEGER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, document_text NVARCHAR(5000), embedding REAL_VECTOR(3072), chunk_metadata NVARCHAR(1000), ref_id NVARCHAR(36) UNIQUE NOT NULL )</code></pre><P>&nbsp;</P><P><CODE>REAL_VECTOR(3072)</CODE> is SAP HANA’s native vector type — it stores 3072-dimensional embeddings (from OpenAI’s <CODE>text-embedding-3-large</CODE> model) and supports <STRONG>cosine similarity</STRONG> searches natively.</P><H3 id="inserting-triplets" id="toc-hId--204902931">Inserting Triplets</H3><pre class="lia-code-sample language-python"><code>def insert_triplets(triplets_rows: list): """ Insert triplets with their metadata triplets_rows: list of (ref_id, chunk_index, subject, predicate, object) tuples """ ensure_triple_store() conn = get_hana_db() if not triplets_rows: return False sql = """ INSERT INTO TRIPLE_STORE (EMB_REF_ID, CHUNK_INDEX, SUBJECT, PREDICATE, OBJECT) VALUES (?, ?, ?, ?, ?) """ try: cur = conn.connection.cursor() cur.executemany(sql, triplets_rows) conn.connection.commit() cur.close() return True except Exception as e: return False</code></pre><H3 id="toc-hId--401416436">&nbsp;</H3><P>We use <CODE>executemany</CODE> for <STRONG>batch insertion</STRONG> — inserting hundreds of triplets in a single database round-trip instead of one-by-one.</P><HR /><H2 id="the-complete-ingestion-pipeline" id="toc-hId--304526934">7. The Complete Ingestion Pipeline</H2><P>Let’s see how everything comes together in our <A target="_blank" rel="noopener">document_processing_service.py</A>. 
This is the end-to-end flow: from a raw document URL to stored embeddings AND triplets.</P><pre class="lia-code-sample language-python"><code>async def process_and_embed_file_from_url(file_url: str): """ Download file, extract text, create embeddings and triplets, store everything in HANA """ # Step 1: Download and extract text text_content = process_file_from_url(file_url) # Step 2: Split into overlapping chunks chunks = split_text_into_chunks(text_content) # Step 3: Preprocess chunks preprocessed_chunks = preprocess_text_chunks(chunks) # Step 4: Generate unique reference IDs (linking embeddings ↔ triplets) ref_ids = [str(uuid.uuid4()) for _ in preprocessed_chunks] # Step 5: Create embeddings for all chunks (concurrent) embeddings = await embedding_service.get_embeddings_batch(preprocessed_chunks, max_workers=5) # Step 6: Extract triplets from all chunks (multi-agent pipeline) triplets_per_chunk = await convert_corpus_to_triplets_async(preprocessed_chunks) # Step 7: Insert embeddings FIRST (ref_ids must exist for FK constraint) rows = [] for i, (chunk, embedding_vector) in enumerate(zip(preprocessed_chunks, embeddings)): metadata_json = json.dumps({ "chunk_index": i, "chunk_size": len(chunk), "source_url": file_url, "ref_id": ref_ids[i], }) embedding_string = str(embedding_vector) rows.append((chunk, embedding_string, metadata_json, ref_ids[i])) batch_insertion_embedding(rows=rows) # Step 8: Insert triplets (linked via ref_ids) triplets_rows = [] for chunk_index, (ref_id, chunk_triplets) in enumerate(zip(ref_ids, triplets_per_chunk)): for t in chunk_triplets: if isinstance(t, dict): head, relation, tail = t.get("subject"), t.get("predicate"), t.get("object") else: head, relation, tail = t triplets_rows.append((ref_id, chunk_index, head, relation, tail)) if triplets_rows: insert_triplets(triplets_rows)</code></pre><P>&nbsp;</P><H3 id="the-text-chunking-strategy" id="toc-hId--794443446">The Text Chunking Strategy</H3><pre class="lia-code-sample language-python"><code>def split_text_into_chunks(text: str, chunk_size: int = 1000, overlap: int = 200): """Split text into overlapping chunks for better embedding""" chunks = [] start = 0 while start &lt; len(text): end = start + chunk_size chunk = text[start:end] chunks.append(chunk) start = end - overlap return chunks</code></pre><H3 id="toc-hId--990956951">&nbsp;</H3><P>i used&nbsp;<STRONG>overlapping chunks</STRONG> (200 characters overlap) so that facts spanning chunk boundaries aren’t lost. 
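</P>
<P>To make the arithmetic concrete, here is a small, self-contained illustration (not from the original codebase) of where the boundaries fall with the defaults above; each new chunk starts 800 characters after the previous one (1000 - 200):</P>
<pre class="lia-code-sample language-python"><code>def chunk_bounds(text_len: int, chunk_size: int = 1000, overlap: int = 200):
    """Mirrors the start/end arithmetic of split_text_into_chunks above
    (the end is clamped here only so the printed bounds stay readable)."""
    bounds, start = [], 0
    while start &lt; text_len:
        bounds.append((start, min(start + chunk_size, text_len)))
        start = start + chunk_size - overlap  # i.e. end - overlap
    return bounds

# For a hypothetical 2,500-character document:
print(chunk_bounds(2500))
# [(0, 1000), (800, 1800), (1600, 2500), (2400, 2500)]</code></pre>
<P>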
Each chunk is 1000 characters with a 200-character overlap with the next chunk.</P><H3 id="the-api-endpoint" id="toc-hId--1187470456">The API Endpoint</H3><P>This entire pipeline is exposed via a FastAPI endpoint in <A target="_blank" rel="noopener">genai_hub.py</A>:</P><pre class="lia-code-sample language-python"><code>_router.post("/create-store-embedding", response_model=EmbeddingKGResponse) async def create_and_store_embedding( request: EmbeddingKGRequest, background_tasks: BackgroundTasks ) -&gt; EmbeddingKGResponse: """Create and store embeddings using file URL""" # Processing in background (non-blocking) background_tasks.add_task(process_and_embed_file_from_url, request.file_url) return EmbeddingKGResponse( success=True, message=f"Successfully processed and embedded document from {request.file_url}" )</code></pre><P>&nbsp;</P><P>The processing runs as a <STRONG>background task</STRONG> — the API responds immediately while the heavy work (embedding + triplet extraction) happens asynchronously.</P><HR /><H2 id="querying-the-knowledge-graph" id="toc-hId--922397263">8. Querying the Knowledge Graph</H2><P>Now for the exciting part — retrieving knowledge from our graph. i have two query strategies in <A target="_blank" rel="noopener">knowledge_graph_service.py</A>.</P><H3 id="strategy-1-chunk-based-retrieval" id="toc-hId--1412313775">Strategy 1: Chunk-Based Retrieval</H3><P>Given a set of embedding reference IDs (from vector search results), fetch all associated triplets:</P><pre class="lia-code-sample language-python"><code>def get_triplets_by_chunks(ref_ids: list, query: str): """Getting triplets from specific chunks + optional query expansion""" conn = get_hana_db() ref_placeholders = ",".join(f"'{ref}'" for ref in ref_ids) base_sql = f""" SELECT SUBJECT, PREDICATE, OBJECT FROM TRIPLE_STORE WHERE EMB_REF_ID IN ({ref_placeholders}) ORDER BY CHUNK_INDEX, ID LIMIT 200 """ triplets = [] try: df = conn.sql(base_sql).collect() for _, row in df.iterrows(): triplets.append((row["SUBJECT"], row["PREDICATE"], row["OBJECT"])) except Exception as e: print(f"An error occurred: {e}") # Optional: expand with query-related triplets if query and triplets: expanded = search_related_triplets( query, existing_entities=[t[0] for t in triplets[:10]] ) triplets.extend(expanded) return triplets</code></pre><P>&nbsp;</P><P>This function does two things: 1. <STRONG>Direct retrieval</STRONG> — gets all triplets linked to the vector search results 2. 
<STRONG>Graph expansion</STRONG> — discovers additional related triplets based on entities found</P><H3 id="strategy-2-keyword-entity-expansion" id="toc-hId--1608827280">Strategy 2: Keyword + Entity Expansion</H3><P>The <A target="_blank" rel="noopener">search_related_triplets</A> function implements a two-phase search:</P><pre class="lia-code-sample language-python"><code>def search_related_triplets(query: str, existing_entities: list, limit: int = 50): """Searching triplets by keyword + entity expansion""" conn = get_hana_db() triplets = [] # Phase 1: Keyword search in triplets query_clean = query.replace("'", "''") keyword_sql = f""" SELECT SUBJECT, PREDICATE, OBJECT FROM TRIPLE_STORE WHERE SUBJECT LIKE '%{query_clean}%' OR OBJECT LIKE '%{query_clean}%' OR PREDICATE LIKE '%{query_clean}%' LIMIT {limit} """ try: df = conn.sql(keyword_sql).collect() for _, row in df.iterrows(): triplets.append((row["SUBJECT"], row["PREDICATE"], row["OBJECT"])) except Exception as e: print(f"error in keyword triplet search: {e}") # Phase 2: Entity expansion if existing_entities: entities_clean = [e.replace("'", "''") for e in existing_entities[:5]] entities_placeholders = "','".join(entities_clean) expansion_sql = f""" SELECT SUBJECT, PREDICATE, OBJECT FROM TRIPLE_STORE WHERE SUBJECT IN ('{entities_placeholders}') OR OBJECT IN ('{entities_placeholders}') LIMIT {limit} """ try: df = conn.sql(expansion_sql).collect() for _, row in df.iterrows(): triplets.append((row["SUBJECT"], row["PREDICATE"], row["OBJECT"])) except Exception as e: print(f"error in entity expansion triplet search: {e}") return triplets</code></pre><P>&nbsp;</P><P><STRONG>Phase 1 (Keyword Search)</STRONG>: Finds triplets where the query terms appear anywhere in subject, predicate, or object. This catches direct mentions.</P><P><STRONG>Phase 2 (Entity Expansion)</STRONG>: Takes entities discovered in Phase 1 and finds ALL triplets mentioning those entities. This is the “graph traversal” — following connections one hop outward.</P><P>This is how we achieve <STRONG>multi-hop reasoning</STRONG>: Vector search finds relevant chunks → chunk triplets reveal entities → entity expansion reveals connected facts from other chunks.</P><HR /><H2 id="hybrid-retrieval-vector-graph" id="toc-hId--1511937778">9. 
Hybrid Retrieval: Vector + Graph</H2><P>The <A target="_blank" rel="noopener">ContextService</A> in <A target="_blank" rel="noopener">context_service.py</A> is where vector search and knowledge graph come together:</P><pre class="lia-code-sample language-python"><code>class ContextService: """Context service with cache for RAG Pipeline""" async def hybrid_search_context(self, query: str, top_k: int = 5, expand_graph: bool = True) -&gt; str: """ Hybrid retrieval: vector similarity + graph retrieval Returns combined textual context for RAG """ # Step 1: Generate query embedding query_embedding = await embedding_service.get_embedding(query) # Step 2: Vector similarity search vector_results = await search_similiar_documents(query_embedding, top_k=top_k) if vector_results is None or vector_results.empty: return "" context_parts = [] ref_ids = [] # Step 3: Extract results and collect ref_ids for _, row in vector_results.iterrows(): ref_id = row.get("REF_ID") or row.get("ref_id") chunk_text = row.get("DOCUMENT_TEXT", '') similarity = row.get("SIMILARITY", 0) context_parts.append(f"Chunk (similarity: {similarity:.4f}): {chunk_text}") if ref_id: ref_ids.append(ref_id) # Step 4: Graph expansion using ref_ids from vector results if expand_graph and ref_ids: graph_context = get_triplets_by_chunks(ref_ids, query) if graph_context: context_parts.append(f"\nRelated Facts: {graph_context}") return "\n\n".join(context_parts)</code></pre><P>&nbsp;</P><H3 id="the-hybrid-flow-visualized" id="toc-hId--2001854290">The Hybrid Flow Visualized</H3><PRE><CODE>User Query: "Who leads Project Phoenix?" │ ▼ ┌─────────────────────────┐ │ 1. Embed Query │ → [0.023, -0.156, 0.891, ...] └────────────┬────────────┘ │ ▼ ┌─────────────────────────┐ │ 2. Vector Search │ → Top 5 similar chunks │ (COSINE_SIMILARITY) │ with ref_ids └────────────┬────────────┘ │ ▼ ┌─────────────────────────────────────────┐ │ 3. Graph Expansion │ │ - Fetch triplets for those ref_ids │ │ - Keyword search for "Project Phoenix" │ │ - Entity expansion for found entities │ └────────────┬────────────────────────────┘ │ ▼ ┌─────────────────────────┐ │ 4. Combine Context │ │ - Chunk texts │ → "Chunk (similarity: 0.92): ..." │ - Related facts │ → "Related Facts: [(John Smith, leads, Project Phoenix), ...]" └────────────┬────────────┘ │ ▼ ┌─────────────────────────┐ │ 5. Send to LLM │ → Final answer with full context └─────────────────────────┘</CODE></PRE><HR /><H2 id="the-full-rag-pipeline" id="toc-hId--1904964788">10. 
The Full RAG Pipeline</H2><P>Everything comes together in the <CODE>/chat</CODE> endpoint in <A target="_blank" rel="noopener">RAG_pipeline.py</A>:</P><pre class="lia-code-sample language-python"><code>_pipeline_router.post("/chat", response_model=RAGChatResponse) async def rag_chat(request: RAGChatRequest) -&gt; RAGChatResponse: """Complete RAG pipeline: Retrieve + Augment + Generate""" try: # Check cache first cache_key = context_service.cache.make_key( "rag_response", _make_query_hash(request.query, request.temperature, request.max_tokens), ) cached_response = await context_service.cache.get(cache_key) if cached_response: cached_response["from_cache"] = True return RAGChatResponse(**cached_response) # Hybrid retrieval (vector + graph) context = await context_service.hybrid_search_context( request.query, top_k=request.k, expand_graph=True ) if context == "": return RAGChatResponse( success=True, query=request.query, answer="I couldn't find any relevant documents to answer your question.", ) # Generate answer with augmented context answer = await generate_rag_response( request.query, context, request.temperature, request.max_tokens ) response_data = { "success": True, "query": request.query, "answer": answer, "context_used": context, "from_cache": False } # Cache for 1 hour await context_service.cache.set(cache_key, response_data, ttl=3600) return RAGChatResponse(**response_data) except Exception as e: raise HTTPException(status_code=500, detail={"success": False, "error": str(e)})</code></pre><P>&nbsp;</P><P>The prompt sent to the LLM for generation:</P><pre class="lia-code-sample language-python"><code>async def generate_rag_response(query, context, temperature=0.1, max_tokens=500): prompt = f""" Based on the following context, answer the question: Context: {context} Question: {query} Answer based only on the context provided: """ response = await llm_service.get_llm_response_async(prompt=prompt) return response</code></pre><P>&nbsp;</P><P>The context now contains <STRONG>both</STRONG> similar text chunks (from vectors) <STRONG>and</STRONG> structured facts (from the knowledge graph). The LLM gets the richest possible context to generate its answer.</P><HR /><H2 id="exposing-via-mcp-for-ai-agents" id="toc-hId--2101478293">11. Exposing via MCP for AI Agents</H2><P>i also expose our hybrid RAG as an <STRONG>MCP (Model Context Protocol) tool</STRONG> in <A target="_blank" rel="noopener">mcp/server.py</A>, so AI agents can use it directly:</P><pre class="lia-code-sample language-python"><code>from fastmcp import FastMCP mcp = FastMCP("AI-RAG-Service") @mcp.tool() async def rag_chat(query: str, top_k: int = 3, temperature: float = 0.1) -&gt; str: """Chat with RAG pipeline - retrieves context and generates answer""" # Get context using hybrid search (vector + knowledge graph) context = await context_service.hybrid_search_context( query, top_k=top_k, expand_graph=True ) if not context: return "I couldn't find any relevant documents to answer your question." prompt = f"""Based on the following context, answer the question: Context: {context} Question: {query} Answer based only on the context provided:""" answer = await llm_service.get_llm_response_async(prompt=prompt) return answer</code></pre><P>&nbsp;</P><P>This means any MCP-compatible agent (like those built with LangChain, LangGraph, or Claude) can call our <A target="_blank" rel="noopener">rag_chat</A> tool and get knowledge-graph-augmented answers automatically.</P><HR /><H2 id="architecture-summary" id="toc-hId-1996975498">12. 
Architecture Summary</H2><P>Here’s the complete system architecture:</P><PRE><CODE>┌─────────────────────────────────────────────────────────────────────┐ │ INGESTION PIPELINE │ │ │ │ Document URL → Download → Text Extraction → Chunking (1000/200) │ │ │ │ │ ├──→ Embedding Service ──→ DOCUMENTS_EMBEDDING table │ │ │ (text-embedding-3-large, 3072 dims) │ │ │ │ │ └──→ Multi-Agent Triplet Pipeline ──→ TRIPLE_STORE table │ │ (Analyzer → Cleaner → Validator → Aggregator) │ │ │ │ Tables linked by ref_id (UUID) ←─ Foreign Key relationship │ └─────────────────────────────────────────────────────────────────────┘ ┌─────────────────────────────────────────────────────────────────────┐ │ RETRIEVAL PIPELINE │ │ │ │ User Query → Embed Query → Cosine Similarity Search (HANA Vector) │ │ │ │ │ ├──→ Top-K similar chunks (text + ref_ids) │ │ │ │ │ └──→ Graph Expansion (ref_id triplets + keyword + entities) │ │ │ │ Combined Context → LLM Prompt → Generated Answer │ │ │ │ Exposed via: FastAPI REST + MCP Tools for AI Agents │ └─────────────────────────────────────────────────────────────────────┘</CODE></PRE><HR /><H2 id="conclusion" id="toc-hId-1800461993">13. Conclusion</H2><P>Knowledge Graphs are not just an academic concept — they’re a practical, powerful enhancement to any enterprise RAG system. By combining vector embeddings with structured knowledge triplets stored in SAP HANA’s Triple Store, i achieved:</P><OL><LI><STRONG>Better answer quality</STRONG> — The LLM receives both semantically similar text AND structured facts</LI><LI><STRONG>Multi-hop reasoning</STRONG> — Entity expansion allows discovering connected facts across different documents</LI><LI><STRONG>Production-grade reliability</STRONG> — Multi-agent pipeline with retries, validation, and graceful degradation</LI><LI><STRONG>Tight integration</STRONG> — Foreign key linking between embeddings and triplets enables seamless hybrid retrieval</LI><LI><STRONG>Agent-ready</STRONG> — MCP tools let any AI agent leverage the knowledge graph instantly</LI></OL><H3 id="key-takeaways-for-your-team" id="toc-hId-1310545481">Key Takeaways for Your Team</H3><UL><LI><STRONG>Start with LLM-based triplet extraction</STRONG> — it produces far better triplets than rule-based NLP</LI><LI><STRONG>Always validate</STRONG> — the multi-agent pipeline (analyze → clean → validate → aggregate) catches errors early</LI><LI><STRONG>Link your stores</STRONG> — the <CODE>ref_id</CODE> foreign key between embeddings and triplets is what makes hybrid retrieval possible</LI><LI><STRONG>Index aggressively</STRONG> — Subject, Predicate, Object, and composite indexes make graph queries fast</LI><LI><STRONG>Expand the graph</STRONG> — entity expansion is where the real magic happens, discovering connections the vector search alone would miss</LI></UL><P>The future of enterprise AI isn’t just “more data”, it’s <STRONG>more connected data</STRONG>. Knowledge Graphs give your AI agents not just <EM>context</EM>, but <EM>direction</EM>.</P><HR /><P><EM>Have questions or want to discuss further? 
Reach out to me&nbsp;</EM></P> 2026-03-05T11:37:21.007000+01:00 https://community.sap.com/t5/product-lifecycle-management-blog-posts-by-sap/quick-financial-plan-key-updates-in-sap-commercial-project-management-s-4/ba-p/14341095 Quick Financial Plan - Key Updates in SAP Commercial Project Management S/4 HANA 2025 FSP01 release 2026-03-06T11:59:26.620000+01:00 Udita_Dev_Roy https://community.sap.com/t5/user/viewprofilepage/user-id/1885345 <P>The S/4 HANA 2025 FSP01 release brings significant enhancements to Quick Financial Plan (QFP) capabilities within SAP S/4 HANA Commercial Project Management (CPM), reinforcing SAP's commitment to smarter, faster and more integrated financial planning. In this blog, we explore the key innovations delivered in this release and how they streamline the planning process, strengthening QFP's role as a comprehensive financial planning component in CPM.</P><P><FONT size="5"><STRONG>Enhanced Distribution with addition of Rate (Cost) and Rate (Revenue) fields for Quantity-Driven Resource Types</STRONG></FONT></P><P>The Distribute functionality has been enhanced for quantity-driven resource types. The pop-up dialog now allows planners to enter Rate (Cost) and Rate (Revenue) in addition to Quantity.&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 1: Rate (Cost) and Rate (Revenue) added for Quantity-driven Resource Types in QFP" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379628i027D9C5EC34DFD51/image-size/large?v=v2&amp;px=999" role="button" title="Distribution_NewFields.png" alt="Fig 1: Rate (Cost) and Rate (Revenue) added for Quantity-driven Resource Types in QFP" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 1: Rate (Cost) and Rate (Revenue) added for Quantity-driven Resource Types in QFP</span></span></P><P>&nbsp;<SPAN>The system automatically applies the Rate across all planning periods along with the allocated Quantities. Financial values such as Cost (Transaction), Cost (Plan), Revenue (Transaction), and Revenue (Plan) are also calculated by the system, thereby eliminating manual intervention.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 2: Rate automatically distributed across all periods in Distribution" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379629i84D077C19E7A316B/image-size/large?v=v2&amp;px=999" role="button" title="AfterDistribution.png" alt="Fig 2: Rate automatically distributed across all periods in Distribution" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 2: Rate automatically distributed across all periods in Distribution</span></span></P><P><FONT size="5"><STRONG>Addition of Import Functionality</STRONG></FONT></P><P>The QFP page for planning now includes a dedicated Import button. 
Planning data can be brought in from associated Project System (PS) projects into Commercial Projects.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 3: Import button added in QFP" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/380553i70DA02C06A13289E/image-size/large?v=v2&amp;px=999" role="button" title="Import_Button1.png" alt="Fig 3: Import button added in QFP" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 3: Import button added in QFP</span></span></P><P>Upon selecting&nbsp;<EM>Import Data</EM>&nbsp;under the <EM>Import</EM> button, the system initiates the import of data from the PS project. The import process is executed based on the import method and import strategies configured in SPRO. During execution, the system automatically determines the mapping between the PS elements and the corresponding CPM structure. The progress of the import can be monitored by clicking the <EM>Refresh</EM> button. By expanding the page header, the current status of the import can be viewed in the <EM>Import Status</EM>&nbsp;indicator.<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 4: Check status of Import using the Import Status indicator" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379673i6FE24829A6904C3A/image-size/large?v=v2&amp;px=999" role="button" title="Import_CheckProgress.png" alt="Fig 4: Check status of Import using the Import Status indicator" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 4: Check status of Import using the Import Status indicator</span></span></P><P>Once the <EM>Import Status</EM> indicator turns green, the imported data can be immediately accessed on the QFP page. The system updates the data in real time, so there is no need to reload the application to view the imported data.&nbsp;<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 5: Select structure element from Financial Summary to view Import results" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379681i91B7131D8DF2F465/image-size/large?v=v2&amp;px=999" role="button" title="Import Data.png" alt="Fig 5: Select structure element from Financial Summary to view Import results" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 5: Select structure element from Financial Summary to view Import results</span></span></P><P>The&nbsp;<EM>Import Log</EM>&nbsp;can be reviewed to access details of the imported data, including the import methods and strategies applied. Additionally, there is an option to simulate the import using the <EM>Simulate Data</EM>&nbsp;function and review the details in <EM>Simulate Log</EM>. 
This allows verification of the import process before executing it in a live environment.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 6: Additional functionalities available under Import button in QFP" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/380529i5F5A7C8300C3A6FA/image-size/large?v=v2&amp;px=999" role="button" title="Import_Buttons.png" alt="Fig 6: Additional functionalities available under Import button in QFP" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 6: Additional functionalities available under Import button in QFP</span></span></P><P><FONT size="5"><STRONG>Output Based Planning</STRONG></FONT></P><P>Output Based planning is now enabled in QFP for monthly and fiscal breakdown scenarios. Upon selecting the <EM>Plan</EM> button in Financial Plan, the system displays the Output Based planning option.&nbsp;</P><P><SPAN><SPAN class=""><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 7: Output Based Planning added under Plan in Financial Plan" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379973i23C3DB05B857BF85/image-size/large?v=v2&amp;px=999" role="button" title="OutputBased_FinancialPlan.png" alt="Fig 7: Output Based Planning added under Plan in Financial Plan" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 7: Output Based Planning added under Plan in Financial Plan</span></span></SPAN></SPAN></P><P>QFP allows the output quantity to be planned against a&nbsp;<SPAN>statistical key figure in Output Based planning. In addition, the standard options to <EM>Distribute</EM> and <EM>Delete</EM> the entries are present, similar to any other financial plan.</SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 8: Select statistical key figure to link output quantities in QFP" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379975iA50B33DAE1C042BF/image-size/large?v=v2&amp;px=999" role="button" title="Output Plan.png" alt="Fig 8: Select statistical key figure to link output quantities in QFP" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 8: Select statistical key figure to link output quantities in QFP</span></span></P><P><SPAN>Once the output quantity is planned, the corresponding&nbsp;<SPAN class="">Input Quantity</SPAN>&nbsp;and&nbsp;<SPAN class="">Cost</SPAN><SPAN class="">&nbsp;needed to achieve the planned outputs can be planned in <EM>Input Resource Plan</EM>. The standard options of <EM>Calculate Cost</EM>, <EM>Delete</EM> and <EM>Distribute</EM> are available in the input plan. Additionally, a new <EM>Calculate quantity</EM> button has been introduced to compute the input quantity based on Productivity (Plan) and the planned output quantity. 
Once the input plan is entered, the system automatically populates the Unit price and Productivity fields in the output plan, streamlining the planning process and reducing manual effort.</SPAN></SPAN>&nbsp;<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 9: Define Input Resource Plan values to achieve the output quantities in QFP" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/379976i2EB2826DEC21C643/image-size/large?v=v2&amp;px=999" role="button" title="Input Plan.png" alt="Fig 9: Define Input Resource Plan values to achieve the output quantities in QFP" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 9: Define Input Resource Plan values to achieve the output quantities in QFP</span></span></P><P><FONT size="5"><STRONG>Cash Flow Planning</STRONG></FONT></P><P>Cash Flow planning is now embedded within QFP for monthly and fiscal breakdown scenarios, giving organizations a clear view of the incoming and outgoing payments of projects based on the financial plan. This makes it easier to manage resources efficiently and provides a comprehensive view of project financial health.&nbsp;</P><P>Consider the below monthly planning:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 10: Monthly planning to review Cash Flow" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/380515i713E497814DAA5BE/image-size/large?v=v2&amp;px=999" role="button" title="Planning_Cash Flow.png" alt="Fig 10: Monthly planning to review Cash Flow" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 10: Monthly planning to review Cash Flow</span></span></P><P>In order to access Cash Flow in QFP, select <EM>Cash Flow</EM> under <EM>Plan</EM> in Financial Plan. When the QFP Page opens, a new button <EM>Cash Flow</EM> is displayed.&nbsp;<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 11: Cash Flow added under Plan in Financial Plan" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/380517i77650077BA132D5E/image-size/large?v=v2&amp;px=999" role="button" title="CashFlow_Button.png" alt="Fig 11: Cash Flow added under Plan in Financial Plan" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 11: Cash Flow added under Plan in Financial Plan</span></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 12: Cash Flow button added in QFP" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/380518iA3C3002B74440CA1/image-size/large?v=v2&amp;px=999" role="button" title="CashFlow_Button_QFP.png" alt="Fig 12: Cash Flow button added in QFP" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 12: Cash Flow button added in QFP</span></span></P><P>Once the <EM>Cash Flow</EM> button is executed, it triggers the valuation strategy and method associated to the resource type in planning. 
The incoming and outgoing payment values are then copied to the fields present in the valuation method configured in SPRO.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 13: Select structure element from Financial Summary to view Cash Flow results" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/380519i68B5027F932903A9/image-size/large?v=v2&amp;px=999" role="button" title="CashFlow_Results1.png" alt="Fig 13: Select structure element from Financial Summary to view Cash Flow results" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 13: Select structure element from Financial Summary to view Cash Flow results</span></span></P><P>QFP also allows the timing of payments to be adjusted using period shifts. By specifying values in the period shift field, the system can postpone incoming or outgoing payments to a later date.&nbsp;<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 14: Period shift in Cash Flow" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/380522iBAD586A5BFB88629/image-size/large?v=v2&amp;px=999" role="button" title="CashFlow_PeriodShift.png" alt="Fig 14: Period shift in Cash Flow" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 14: Period shift in Cash Flow</span></span></P><P>In the above example, the period shift for March is set to 31 and <EM>Cash Flow</EM> is executed again. <SPAN>The payments are shifted by 31 days, i.e., to April.&nbsp;</SPAN>This feature helps prevent timing mismatches and ensures that cash flow projections remain accurate and up to date.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 15: Select structure element from Financial Summary to view Period Shift results" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/380527i16AB3FEF3CC256BE/image-size/large?v=v2&amp;px=999" role="button" title="CashFlow_PeriodShiftResults.png" alt="Fig 15: Select structure element from Financial Summary to view Period Shift results" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 15: Select structure element from Financial Summary to view Period Shift results</span></span></P><P><FONT size="5"><STRONG>Version Comparison Report</STRONG></FONT></P><P>The Version Comparison Report in QFP provides a comparative view of financial plan versions, including baseline, active plan, and forecast snapshots.&nbsp;This consolidated report assists in performing a rapid assessment of the project’s financial health and planning deviations.</P><P>In order to access the Version Comparison Report in QFP, select <EM>ComparePlanVersions</EM>&nbsp;under&nbsp;<EM>Reports</EM>&nbsp;in Financial Plan.&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 16: Version Comparison Report added under Reports in Financial Plan" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/382228i87F6B15EB52E9CE4/image-size/large?v=v2&amp;px=999" role="button" title="VersionReportPath.png" alt="Fig 16: Version Comparison Report added under Reports in Financial Plan" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 16: Version Comparison Report added under Reports in Financial Plan</span></span><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 17: Compare versions
in Financial Plan" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/382229iE27E3CDB4FAB0242/image-size/large?v=v2&amp;px=999" role="button" title="VersionReport.png" alt="Fig 17: Compare versions in Financial Plan" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 17: Compare versions in Financial Plan</span></span></P><P><FONT size="5"><STRONG>Additional information</STRONG></FONT></P><P>For more details, you may refer below links:</P><UL><LI><A href="https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/d6c1ceb7e0074cd1a8f28dad8a1a649c/537df85b5bac45e394fd67dafed37af7.html?version=2025.001" target="_blank" rel="noopener noreferrer">SAP CPM S/4 HANA 2025 FSP01 release</A></LI><LI><A href="https://help.sap.com/whats-new/5fc51e30e2744f168642e26e0c1d9be1?Business_Area=Enterprise+Portfolio+and+Project+Management&amp;Product_Line=SAP+S/4HANA+and+SAP+S/4HANA+Cloud+Private+Edition;SAP+S/4HANA+Cloud+Private+Edition&amp;Version=2023+FPS03" target="_blank" rel="noopener noreferrer">What`s New Viewer -&nbsp;<SPAN>SAP S/4HANA and SAP S/4HANA Cloud Private Edition</SPAN></A></LI></UL> 2026-03-06T11:59:26.620000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/from-hana-studio-to-hana-cloud-the-pain-points-nobody-warns-you-about/ba-p/14342945 From HANA Studio to HANA Cloud: The Pain Points Nobody Warns You About 2026-03-06T12:00:00.026000+01:00 DEEPA_DORAIRAJ https://community.sap.com/t5/user/viewprofilepage/user-id/1752099 <P><STRONG>SAP COMMUNITY<SPAN>&nbsp; </SPAN>|<SPAN>&nbsp; </SPAN>TECHNICAL GUIDE |<SPAN>&nbsp; </SPAN>SAP HANA CLOUD MIGRATION</STRONG></P><P><STRONG>From HANA Studio to HANA Cloud: The Pain Points Nobody Warns You About</STRONG></P><P><EM>A practitioner's guide to the friction points that slow down on-premise to HANA Cloud migrations — covering tooling gaps, scripting compatibility, and security model changes that will catch your team off guard.</EM></P><P><STRONG>By Deepa Dorairaj<SPAN>&nbsp; </SPAN></STRONG><SPAN>&nbsp;</SPAN>| SAP AI Solution Architect<SPAN>&nbsp; </SPAN>|<SPAN>&nbsp; </SPAN>HANA Data Engineering<SPAN>&nbsp; </SPAN>|<SPAN>&nbsp; </SPAN>Published 2026</P><P><EM>TL;DR: Migrating from SAP HANA Studio on-premise to HANA Cloud is not a lift-and-shift. The tooling changes, the security model changes, and a significant portion of your existing stored procedures will need rework before they run. This article covers the three areas where teams consistently underestimate the effort — and exactly what to watch out for in each.</EM></P><H2 id="toc-hId-1791376044">The Migration Gap Nobody Talks About</H2><P>SAP HANA Cloud is the right destination for most organizations still running on-premise HANA landscapes. The scalability, the managed infrastructure, the BTP integration story — the strategic case is clear. What's less clear, until you're in the middle of it, is how much the day-to-day operational reality changes between HANA Studio on-premise and HANA Cloud Central.</P><P>Teams that have spent years building muscle memory in HANA Studio — navigating schemas, editing calculation views graphically, managing users through a familiar interface, writing stored procedures against a predictable execution context — will find that the cloud environment is not a 1:1 replacement. It is a different tool, with different constraints, different patterns, and different failure modes.</P><P>This article is not a migration how-to. 
It is a field guide to the specific pain points that will slow your team down, based on direct experience with this transition. The goal is to give you the information early enough to plan for it rather than discover it mid-project.</P><P><STRONG>PAIN POINT 1 OF 3<SPAN>&nbsp; </SPAN></STRONG></P><H2 id="toc-hId-1594862539">HANA Studio Features That Don't Exist in HANA Cloud Central</H2><P><STRONG>The Schema Browser and Object Navigation Gap</STRONG></P><P>HANA Studio's system and schema browser is one of its most underappreciated features. Developers and architects who have used it for years navigate schema structures, object dependencies, and table definitions almost by instinct — expanding trees, right-clicking to view DDL, drilling into column definitions without leaving their context.</P><P>HANA Cloud Central's database explorer is capable, but the navigation paradigm is meaningfully different. The object hierarchy is presented differently, context menus behave differently, and the workflow for tasks as simple as viewing a table's full column list or checking an index definition requires more steps than Studio users are accustomed to. This sounds minor until you factor in the cumulative friction across a team of developers doing this hundreds of times a day.</P><P><EM>Watch out: Teams consistently underestimate the productivity dip in the first 4-6 weeks after cutover. Budget time for your team to rebuild navigation muscle memory in the new tool. What takes 30 seconds in Studio may take 3 minutes in Cloud Central until the team is fluent.</EM></P><P>Specific navigational differences to prepare for:</P><UL><LI>Schema-level object counts and quick overviews that were visible at a glance in Studio require explicit queries or additional clicks in Cloud Central.</LI><LI>The right-click 'Open DDL' shortcut that Studio developers rely on is not directly replicated — you will need to query SYS.OBJECTS or use the SQL console to retrieve DDL for many object types.</LI><LI>Cross-schema object browsing that was seamless in Studio requires explicit privilege grants to be visible in Cloud Central's explorer, which surfaces privilege gaps earlier but creates friction during initial setup.</LI></UL><P><STRONG>The Graphical Calculation View Editor</STRONG></P><P>This one causes more frustration than almost any other tooling difference. Teams that built complex calculation views in HANA Studio's graphical editor — star schemas, multi-join hierarchies, calculated columns, input parameters — are accustomed to a specific visual workflow that has been refined over many versions of the on-premise product.</P><P>The calculation view editor in SAP Business Application Studio (BAS), which replaces the Studio graphical editor for HANA Cloud, is architecturally different. It is web-based, operates through HDI containers rather than directly against the database schema, and has a different set of supported node types and behaviors.</P><P><EM>Critical: Calculation views that reference classic schema objects directly cannot be opened in the BAS graphical editor without first migrating them to HDI container-based deployment. 
This is not a minor adjustment — it affects the entire deployment and change management workflow for your calculation view landscape.</EM></P><P>What this means in practice:</P><UL><LI>Views that were deployed and edited directly against the HANA schema in Studio must be converted to HDI artifacts (.hdbcalculationview files) before they can be managed in the Cloud toolchain.</LI><LI>The visual editor in BAS handles most standard patterns well but has gaps for certain advanced features that Studio supported — particularly around some legacy join types and specific aggregation node configurations.</LI><LI>Teams using vibe coding approaches for script conversion will find calculation view migration the hardest area to automate reliably — the graphical representation does not translate cleanly to text-based artifacts without manual review.</LI></UL><P><EM>Practical recommendation: Before cutover, audit your calculation view landscape and categorize views by complexity. Simple star schema views migrate cleanly. Complex multi-join hierarchies with custom aggregation behaviors need hands-on migration effort, not automated conversion.</EM></P><P>&nbsp;</P><TABLE width="624"><TBODY><TR><TD width="208"><P><STRONG>Watch Out For</STRONG></P></TD><TD width="416"><P><STRONG>Why It Matters</STRONG></P></TD></TR><TR><TD width="208"><P><STRONG>Schema browser navigation</STRONG></P></TD><TD width="416"><P>Muscle memory built in Studio does not transfer — plan for a productivity dip and explicit team training time</P></TD></TR><TR><TD width="208"><P><STRONG>DDL viewing workflow</STRONG></P></TD><TD width="416"><P>Right-click DDL access is not replicated — teams need to learn alternative query-based approaches</P></TD></TR><TR><TD width="208"><P><STRONG>Calculation view editor</STRONG></P></TD><TD width="416"><P>Studio graphical editor → BAS editor is not a direct replacement; HDI container migration required first</P></TD></TR><TR><TD width="208"><P><STRONG>Advanced view features</STRONG></P></TD><TD width="416"><P>Some Studio-supported node types and behaviors have gaps in BAS — audit before migrating complex views</P></TD></TR></TBODY></TABLE><P><STRONG>PAIN POINT 2 OF 3<SPAN>&nbsp; </SPAN></STRONG></P><H2 id="toc-hId-1398349034">Stored Procedure and Scripting Compatibility Issues</H2><P><STRONG>Deprecated Syntax That Silently Worked On-Premise</STRONG></P><P>HANA Cloud runs on a more recent version of the HANA engine than most on-premise landscapes. Syntax and behaviors that were technically deprecated in earlier HANA versions but continued working on-premise — because the on-premise version hadn't yet enforced the deprecation — will fail in HANA Cloud with no grace period.</P><P>The challenge is that these failures are not always obvious during planning. 
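<P>One way to make these failures visible during planning is to scan exported procedure definitions for known deprecated patterns before touching the Cloud environment. The sketch below is a minimal illustration, assuming the procedure bodies have been exported to .sql files; the two patterns it checks are only examples, not an exhaustive rule set:</P><pre class="lia-code-sample language-python"><code># Minimal pre-migration scan for deprecated SQLScript patterns (illustrative sketch).
# Assumes procedure definitions were exported to one .sql file per procedure under
# ./procedure_exports; extend CHECKS with the patterns relevant to your landscape.
import re
from pathlib import Path

CHECKS = [
    # EXEC where EXECUTE IMMEDIATE is the modern syntax (whole-word match only,
    # so EXECUTE itself is not flagged).
    ("EXEC instead of EXECUTE IMMEDIATE", re.compile(r"\bEXEC\b", re.IGNORECASE)),
    # ARRAY handling is only flagged as a candidate for manual review.
    ("ARRAY handling (review manually)", re.compile(r"\bARRAY\b", re.IGNORECASE)),
]

def scan_procedures(export_dir: str) -&gt; list:
    """Return (file name, line number, finding) tuples for every suspicious line."""
    findings = []
    for sql_file in Path(export_dir).glob("*.sql"):
        lines = sql_file.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, start=1):
            for label, pattern in CHECKS:
                if pattern.search(line):
                    findings.append((sql_file.name, lineno, label))
    return findings

if __name__ == "__main__":
    for name, lineno, label in scan_procedures("./procedure_exports"):
        print(f"{name}:{lineno}: {label}")</code></pre><P>A scan like this only narrows the search; as noted below, flagged procedures still need a human review pass before any rewrite is trusted.</P>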
<P>Consider a stored procedure that has run reliably for three years on-premise, never touched because it works: it may fail immediately in HANA Cloud because it uses syntax that was deprecated in HANA 2.0 SPS04 and the on-premise landscape never enforced the removal.</P><P><EM>The most common deprecated patterns we encountered: EXEC statements used where EXECUTE IMMEDIATE is the correct modern syntax, certain FOR loop constructs that behave differently in the Cloud SQL engine, and ARRAY handling patterns that were partially supported on-premise but have stricter enforcement in Cloud.</EM></P><P>A vibe coding approach helps significantly here compared to purely manual script conversion — using AI-assisted code review to scan procedure bodies for known deprecated patterns before migration is faster than manual review at scale. But it requires a human review pass on the output because the conversion suggestions are not always context-aware enough to handle complex procedure logic correctly.</P><P><STRONG>Implicit Type Conversion Differences</STRONG></P><P>HANA Cloud is stricter about implicit type conversions than on-premise HANA in several specific scenarios. Code that relied on HANA silently casting between numeric types, or between date and string types in certain contexts, will produce errors or subtly different results in Cloud.</P><pre class="lia-code-sample language-sql"><code>-- On-premise: this worked with implicit STRING to DATE cast
-- HANA Cloud: requires explicit CAST or TO_DATE conversion
WHERE change_date &gt; '2024-01-01'

-- HANA Cloud safe version:
WHERE change_date &gt; TO_DATE('2024-01-01', 'YYYY-MM-DD')</code></pre><P>The more dangerous scenario is numeric type conversion. HANA Cloud's stricter enforcement of arithmetic precision in certain contexts means that calculations which produced correct results on-premise may produce different results in Cloud — not errors, but silently wrong numbers. This is harder to catch in testing because the procedure runs successfully.</P><P><EM>Watch out: Implicit conversion issues in financial calculations are particularly risky. Any procedure that performs arithmetic across mixed numeric types (INTEGER, DECIMAL, DOUBLE) should be explicitly reviewed and tested against known expected outputs before cutover.</EM></P><P><STRONG>Procedure Privileges and Execution Context Changes</STRONG></P><P>On-premise HANA procedures run with a well-understood privilege model that many teams have not had to think about deeply — the procedure owner's privileges, or explicitly granted execution rights, determine what the procedure can access.
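<P>Before looking at how those access paths change, it can help to inventory which procedures on the source system actually reach across schema boundaries. The following is a minimal sketch, assuming the hdbcli Python client and read access to the SYS.OBJECT_DEPENDENCIES system view; host, port, user, and password are placeholders to adapt:</P><pre class="lia-code-sample language-python"><code># Sketch: list procedures that depend on objects in a different schema, as raw input
# for mapping cross-schema call and privilege chains before migration.
from hdbcli import dbapi  # SAP HANA Python client (pip install hdbcli)

SQL = """
SELECT DEPENDENT_SCHEMA_NAME, DEPENDENT_OBJECT_NAME,
       BASE_SCHEMA_NAME, BASE_OBJECT_NAME, BASE_OBJECT_TYPE
FROM SYS.OBJECT_DEPENDENCIES
WHERE DEPENDENT_OBJECT_TYPE = 'PROCEDURE'
  AND DEPENDENT_SCHEMA_NAME &lt;&gt; BASE_SCHEMA_NAME
ORDER BY DEPENDENT_SCHEMA_NAME, DEPENDENT_OBJECT_NAME
"""

def cross_schema_dependencies(host, port, user, password):
    """Fetch all cross-schema dependencies of procedures from the source system."""
    conn = dbapi.connect(address=host, port=port, user=user, password=password)
    try:
        cursor = conn.cursor()
        cursor.execute(SQL)
        return cursor.fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    rows = cross_schema_dependencies("onprem-hana.example.local", 30015, "INVENTORY_USER", "***")
    for dep_schema, dep_name, base_schema, base_name, base_type in rows:
        print(f"{dep_schema}.{dep_name} -&gt; {base_schema}.{base_name} ({base_type})")</code></pre><P>The output is a starting point for the dependency-chain mapping discussed below; it does not by itself tell you which privileges or synonyms the Cloud environment will need.</P>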
HANA Cloud introduces execution context behaviors that are different in the HDI container model and stricter about cross-container and cross-schema access.</P><P>Procedures that accessed objects in multiple schemas by relying on the executing user's broad schema privileges will break in HANA Cloud if those cross-schema access patterns are not explicitly replicated through the new privilege model.</P><UL><LI>DEFINER vs INVOKER rights behavior differs between on-premise and Cloud in specific scenarios — procedures that relied on implicit DEFINER rights behavior need explicit review.</LI><LI>Cross-schema procedure calls that worked on-premise through broad user grants will require explicit synonym or privilege configuration in the Cloud environment.</LI><LI>Procedures that called other procedures across schema boundaries need the full dependency chain mapped before migration — missing one intermediate privilege breaks the entire call chain.</LI></UL><P>&nbsp;</P><TABLE width="624"><TBODY><TR><TD width="208"><P><STRONG>Watch Out For</STRONG></P></TD><TD width="416"><P><STRONG>Why It Matters</STRONG></P></TD></TR><TR><TD width="208"><P><STRONG>Deprecated syntax</STRONG></P></TD><TD width="416"><P>Procedures that ran for years on-premise may fail immediately in Cloud — scan all procedure bodies before migration</P></TD></TR><TR><TD width="208"><P><STRONG>Implicit type conversions</STRONG></P></TD><TD width="416"><P>Stricter enforcement means silent wrong results, not always errors — test financial calculations against known outputs</P></TD></TR><TR><TD width="208"><P><STRONG>FOR loop and EXEC patterns</STRONG></P></TD><TD width="416"><P>Common on-premise scripting patterns have changed behavior or stricter enforcement in HANA Cloud SQL engine</P></TD></TR><TR><TD width="208"><P><STRONG>Cross-schema procedure calls</STRONG></P></TD><TD width="416"><P>Privilege chains that worked on-premise need explicit remapping in the Cloud privilege model</P></TD></TR></TBODY></TABLE><P><STRONG>PAIN POINT 3 OF 3<SPAN>&nbsp; </SPAN></STRONG></P><H2 id="toc-hId-1201835529">Security and User Management Changes</H2><P><STRONG>The Role and Privilege Model Is Fundamentally Different</STRONG></P><P>On-premise HANA security administration through HANA Studio is direct and familiar — create users, assign roles, grant privileges, done. The privilege model is granular but the tooling is transparent. What you see in the security editor is what exists in the database.</P><P>HANA Cloud introduces a layered security model that operates differently depending on whether you are working in the HDI container context, the plain schema context, or through BTP's identity and access management layer. 
Teams that migrate assuming the privilege model works the same way will create security configurations that appear to work in testing but fail in production under specific access patterns.</P><P><EM>The most common mistake: Granting broad schema-level privileges in HANA Cloud the same way they were granted on-premise, then discovering that certain operations still fail because the HDI container security layer has separate privilege requirements that aren't covered by schema-level grants.</EM></P><P><STRONG>HDI Containers Replacing Classic Schema-Based Access</STRONG></P><P>If your on-premise landscape uses classic schema-based deployment — objects created directly in database schemas, accessed directly by users with schema privileges — the HDI container model is a significant conceptual shift, not just a technical one.</P><P>In HDI containers, database objects are owned by the container's technical user, not by the schema owner or the deploying user. Access to those objects by application users goes through a specific HDI access role pattern that is different from the direct privilege grants your team is used to managing.</P><UL><LI>Users and roles that worked on-premise cannot be directly imported into HANA Cloud — the privilege model differences mean a fresh role design is almost always necessary.</LI><LI>Application users that previously connected directly to the HANA schema need to be restructured to go through the appropriate HDI access pattern or plain schema equivalent in Cloud.</LI><LI>The tooling for managing HDI container privileges is in BTP Cockpit and HANA Cloud Central, not in a single location — teams need to understand which layer manages which aspect of the privilege model.</LI></UL><P><EM>Practical recommendation: Before migration, document every user, role, and privilege assignment in your on-premise landscape. Do not attempt a direct import or recreation — use that documentation as a reference to design a new privilege model that fits the HANA Cloud architecture. The effort is significant but unavoidable.</EM></P><P><STRONG>SSO and Identity Provider Configuration</STRONG></P><P>On-premise HANA landscapes often use database-native user authentication or LDAP integration managed directly in the HANA system. HANA Cloud on BTP uses SAML-based SSO through BTP's Identity Authentication Service (IAS) or a connected corporate identity provider.</P><P>Teams that relied on database-native authentication for both human users and technical system users will find that HANA Cloud's integration with BTP's identity layer requires a different configuration approach for each user type. Technical users — RFC connections, ETL tool connections, reporting tool connections — often cannot use SAML-based SSO and require specific certificate or password-based authentication configuration that is handled differently in Cloud.</P><P><EM>Watch out: Every system that connects to your on-premise HANA — ETL tools, reporting platforms, RFC connections, middleware — needs its authentication method reviewed against HANA Cloud's supported patterns before cutover. This is frequently the last item tested and the most common cause of post-cutover incidents.</EM></P><P><STRONG>Loss of OS-Level Access</STRONG></P><P>On-premise HANA gives administrators OS-level access to the underlying server — the ability to check file system paths, review trace files directly, restart services, and perform low-level diagnostics that go beyond what the database interface exposes.</P><P>HANA Cloud is a managed service. 
OS-level access does not exist. All diagnostics and administration must go through HANA Cloud Central, the SQL console, or the available monitoring views. Teams that have built operational runbooks around OS-level access — checking trace file directories, manually managing backup files, scripting OS-level health checks — need to rebuild those runbooks entirely against the Cloud-native toolset.</P><UL><LI>Trace file access moves to HANA Cloud Central's diagnostic tools — the content is the same but the access method is completely different.</LI><LI>Backup and recovery operations that were managed through OS-level scripts need to be rebuilt using HANA Cloud's backup catalog and BTP-native tooling.</LI><LI>Custom OS-level monitoring scripts used by your operations team will not function and need Cloud-native equivalents built before cutover.</LI></UL><P>&nbsp;</P><TABLE width="624"><TBODY><TR><TD width="208"><P><STRONG>Watch Out For</STRONG></P></TD><TD width="416"><P><STRONG>Why It Matters</STRONG></P></TD></TR><TR><TD width="208"><P><STRONG>Role and privilege model</STRONG></P></TD><TD width="416"><P>On-premise privilege assignments cannot be directly migrated — design a new model for Cloud from scratch</P></TD></TR><TR><TD width="208"><P><STRONG>HDI container access</STRONG></P></TD><TD width="416"><P>Classic schema-level grants do not cover HDI container object access — separate privilege configuration required</P></TD></TR><TR><TD width="208"><P><STRONG>SSO configuration</STRONG></P></TD><TD width="416"><P>Every connecting system needs authentication method reviewed against Cloud's supported patterns</P></TD></TR><TR><TD width="208"><P><STRONG>OS-level access</STRONG></P></TD><TD width="416"><P>All OS-level runbooks and monitoring scripts must be rebuilt using HANA Cloud Central and Cloud-native tools</P></TD></TR></TBODY></TABLE><H2 id="toc-hId-1005322024">What This Means for Your Migration Plan</H2><P>The three areas covered in this article — tooling gaps, scripting compatibility, and security model changes — are the ones most consistently underestimated in HANA Cloud migration projects. They share a common characteristic: they are invisible during the planning phase because the on-premise system works fine, and they surface as a cluster of friction points in the first weeks after cutover.</P><P>The teams that navigate this transition most successfully do two things differently. First, they run a structured pre-migration audit — cataloging calculation views by complexity, scanning stored procedures for deprecated patterns, and mapping every privilege assignment before touching the Cloud environment. Second, they treat the first month in Cloud Central as a productivity investment, not a productivity loss — explicitly budgeting time for the team to build new tooling fluency before expecting output at on-premise velocity.</P><P>The migration is worth doing. HANA Cloud's operational model, scalability, and BTP integration story are genuinely better than maintaining on-premise infrastructure. But it is not a lift-and-shift, and approaching it as one is the single most reliable way to turn a manageable transition into a painful one.</P><P><EM>If your team is in the planning phase of this migration and you want to discuss specific aspects of the audit process or privilege model redesign, drop a comment below. 
These are the conversations worth having before cutover, not after.</EM></P> 2026-03-06T12:00:00.026000+01:00 https://community.sap.com/t5/asunci%C3%B3n-blog-posts/sap-codejam-roadshow-2026-brazil-edition-la-comunidad-sap-paraguay-estuvo/ba-p/14344497 SAP CodeJam Roadshow 2026 – Brazil Edition: The SAP Paraguay Community Was There 2026-03-09T04:46:28.639000+01:00 Zamichiei https://community.sap.com/t5/user/viewprofilepage/user-id/163835 <P>Hello, SAP Paraguay community! In March 2026 we had the opportunity to take part in the <STRONG>SAP CodeJam Roadshow 2026 – Brazil Edition</STRONG>, two intense days of hands-on learning in São Paulo, Brazil. And as always, Paraguay was not left out.<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CodeJam2026_8.jpeg" style="width: 764px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381340i7C86D57CD8320FCC/image-dimensions/764x573?v=v2" width="764" height="573" role="button" title="CodeJam2026_8.jpeg" alt="CodeJam2026_8.jpeg" /></span><BR /><BR /><STRONG>What is an SAP CodeJam?</STRONG><BR />For those who don't know it yet: an SAP CodeJam is a hands-on event of around 5 to 6 hours, backed by SAP, where participants explore real technologies in a collaborative, pressure-free environment. It is not about listening to presentations; it is about doing, experimenting, and learning together.</P><P class="">This edition was part of a roadshow across Brazil, with stops at the SAP office in São Paulo and at Exed Consulting. The instructors were the Developer Advocates Antonio Maradiaga and Kevin Riedelsheimer, a real treat for any SAP developer.</P><P class=""><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CodeJam2026_2.jpeg" style="width: 754px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381341iC6703B231103E7C2/image-dimensions/754x565?v=v2" width="754" height="565" role="button" title="CodeJam2026_2.jpeg" alt="CodeJam2026_2.jpeg" /></span></P><P class=""><BR /><STRONG>The two days: what did we experience?</STRONG><BR /><span class="lia-unicode-emoji" title=":heavy_check_mark:">✔️</span>March 3 – Joule Studio: Create Joule Skills and Agents Without Coding<BR />The first day introduced us to Joule Studio, the new tool within SAP Build that lets you extend Joule, SAP's digital assistant, with your own flows.</P><P class=""><span class="lia-unicode-emoji" title=":heavy_check_mark:">✔️</span>&nbsp;March 4 – SAP Build: Create Event-Based Processes<BR />The second day was just as intense. At Exed Consulting, we worked with SAP Build Process Automation and SAP Integration Suite, advanced event mesh to build event-driven processes.</P><P class=""><STRONG>The highlights</STRONG><BR />Beyond the technical content, what makes a CodeJam special is the people.
Sharing a table with professionals from Brazil, solving exercises together, debating different approaches: that is priceless.<BR />A special thank-you to Kevin Riedelsheimer, Antonio Maradiaga, and Antonia Zorn for the trust and the opportunity.</P><P class=""><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CodeJam2026_4.jpeg" style="width: 755px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381342i192AE9375232D716/image-dimensions/755x566?v=v2" width="755" height="566" role="button" title="CodeJam2026_4.jpeg" alt="CodeJam2026_4.jpeg" /></span></P><P class=""><BR /><STRONG>SAP Community = Real Connections<BR /></STRONG>Events like this do not come on their own. Behind every CodeJam there is an active community that makes it possible: professionals who publish on the SAP Community Blog, who share their experiences, who attend events in the region, who connect with Developer Advocates, and who show that there is a community with real interest.</P><P class=""><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="CodeJam2026_5.jpeg" style="width: 733px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381343iE6507A5AE1D2058D/image-dimensions/733x549?v=v2" width="733" height="549" role="button" title="CodeJam2026_5.jpeg" alt="CodeJam2026_5.jpeg" /></span></P><P class="">Wanting it to happen is not enough: you have to be an active part of it. That means:</P><UL class=""><LI><STRONG>Creating an account on SAP Community</STRONG> and participating with questions, answers, and blogs</LI><LI><STRONG>Following and connecting</STRONG> with the Developer Advocates on LinkedIn and SAP Community</LI><LI><STRONG>Sharing</STRONG> the experiences from regional events so that Paraguay shows up on the radar</LI><LI><STRONG>Attending</STRONG> CodeJams in neighboring countries and bringing the knowledge back.</LI></UL><P>From the SAP Community Paraguay we keep working so that Ciudad del Este and Asunción become real stops on the map of the SAP community in the region.<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Zamichiei_0-1773027752407.png" style="width: 707px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/381345i9BA2CEF891621ED9/image-dimensions/707x476?v=v2" width="707" height="476" role="button" title="Zamichiei_0-1773027752407.png" alt="Zamichiei_0-1773027752407.png" /></span></P><P><STRONG><BR />Which topic would you like to see at a CodeJam in Paraguay?</STRONG> ABAP Cloud, Joule Studio, SAP Build, BTP, UI5: tell us in the comments. Every opinion counts toward making it happen.<BR /><BR /><STRONG>Let's keep building community!
<span class="lia-unicode-emoji" title=":rocket:">🚀</span></STRONG></P> 2026-03-09T04:46:28.639000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/end-to-end-sap-gui-logon-automation-with-codex-and-python-customizing/ba-p/14342645 End-to-End SAP GUI Logon Automation with Codex and Python: Customizing + Reporting 2026-03-09T08:50:54.651000+01:00 alpikav https://community.sap.com/t5/user/viewprofilepage/user-id/893018 <P>&nbsp;</P><P><STRONG>Introduction</STRONG></P><P><SPAN>Many SAP users still perform repetitive tasks in SAP GUI manually — logging into systems, navigating transactions, adjusting customizing parameters, or extracting reports.</SPAN><SPAN>With the rise of AI-assisted development and automation frameworks, it is now possible to orchestrate these operations programmatically.</SPAN></P><P><SPAN>In this article, I present an end-to-end automation scenario that combines:</SPAN></P><UL><LI><SPAN>Python</SPAN></LI><LI><SPAN>SAP GUI Scripting</SPAN></LI><LI><SPAN>AI-assisted coding with Codex</SPAN></LI></UL><P>&nbsp;</P><P><SPAN>The solution demonstrates how an AI agent can:</SPAN></P><UL><LI><SPAN>Log into SAP GUI automatically</SPAN></LI><LI><SPAN>Execute customizing steps</SPAN></LI><LI><SPAN>Navigate transactions</SPAN></LI><LI><SPAN>Extract data and generate reports</SPAN></LI></UL><P><SPAN>By integrating AI-assisted scripting with SAP GUI automation, organizations can significantly accelerate operational tasks, reduce human error, and improve productivity.</SPAN></P><P><SPAN>This approach illustrates how AI agents can augment SAP consultants and administrators, enabling faster execution of routine activities and creating a foundation for more advanced intelligent automation in SAP landscapes.</SPAN></P><H3 id="toc-hId-1920455880"><SPAN><BR />Accelerate with AI Agents: Build, Adapt, and Report in SAP at Speed</SPAN><BR />&nbsp;</H3><P><A href="https://community.sap.com/source-Ids-list" target="1_0h75o2l3" rel="nofollow noopener noreferrer">&nbsp;</A></P><H4 id="toc-hId-1853025094">1.What You Can Automate</H4><P class=""><BR />First of all, I would like to thank you for inspiring me on this topic&nbsp;<a href="https://community.sap.com/t5/user/viewprofilepage/user-id/1440910">@Hrb24</a>&nbsp;<BR /><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/claude-code-python-sap-gui-scripting-your-ai-agent-for-any-sap-transaction/ba-p/14327865" target="_self">Claude Code + Python + SAP GUI Scripting: Your AI Agent for Any SAP Transaction</A>&nbsp;</P><P class="">With Codex + SAP GUI scripting, you can automate both daily operations and advanced SAP configuration work in a practical, scalable way:</P><UL class=""><LI>Perform SAP customizing tasks directly from scripts</LI><LI>Execute mass customizing by providing Excel-based input files to Codex and applying changes in bulk</LI><LI>Run operational and analytical reports automatically (for example via SE16N-driven workflows)</LI><LI>Validate existing configurations and compare system state against expected setup rules</LI><LI>Build guided simulations for training and decision support (for example, simulate and identify the best price scenario for a specific material)</LI></UL><P class="">In short, you can move from manual, screen-by-screen execution to controlled, repeatable, and auditable automation across reporting, configuration, validation, and simulation use cases.<BR /><BR /></P><H3 id="toc-hId-1527428870">Prerequisites (Before Any Connection)</H3><H4 id="toc-hId-1459998084">2.1 SAP-side 
prerequisites</H4><OL class=""><LI>SAP GUI for Windows must be installed</LI><LI>SAP Logon entry must exist and be tested manually</LI><LI>SAP GUI Scripting must be enabled:<UL class=""><LI><STRONG>SAP GUI → Options → Accessibility &amp; Scripting → Scripting</STRONG></LI><LI>Enable scripting (client-side)</LI></UL></LI><LI>Server-side scripting parameter must allow scripting:<UL class=""><LI><SPAN class="">sapgui/user_scripting = TRUE</SPAN><SPAN>&nbsp;</SPAN>(Basis side)</LI></UL></LI><LI>Your user needs authorization for:<UL class=""><LI>target transactions (SE16N, OVX4, OVX5, etc.)</LI><LI>customizing save + transport assignment (if doing config)</LI></UL></LI></OL><H4 id="toc-hId-1263484579">2.2 Local environment prerequisites</H4><OL class=""><LI>Python 3.10+ installed</LI><LI>Install required packages:<DIV class=""><DIV class=""><CODE><SPAN>python -m pip install pywin32 openpyxl pypdf </SPAN></CODE></DIV></DIV></LI><LI>Keep SAP Logon open before script execution (recommended)</LI><LI>Codex installed (paid subscription required)</LI></OL><H3 id="toc-hId-937888355">3. Project Structure (Recommended)</H3><P class="">Use a simple folder structure:</P><DIV class=""><DIV class=""><CODE><SPAN>automation/<BR />&nbsp;&nbsp;&nbsp;&nbsp;sap_connect.py&nbsp;</SPAN></CODE></DIV></DIV><UL class=""><LI><SPAN class="">sap_connect.py</SPAN>: connection/bootstrap layer</LI></UL><H3 id="toc-hId-741374850">4. Connection Script Design (Core)</H3><P class="">Your<SPAN>&nbsp;</SPAN><SPAN class="">sap_connect.py</SPAN><SPAN>&nbsp;</SPAN>should handle:</P><OL class=""><LI>Attach to running SAP GUI:<UL class=""><LI><SPAN class="">GetObject("SAPGUI")</SPAN></LI></UL></LI><LI>Start SAP Logon if not running</LI><LI>Open connection by<SPAN>&nbsp;</SPAN><STRONG>exact SAP Logon entry name</STRONG></LI><LI>Wait until session window is ready</LI><LI>Detect login screen vs already authenticated menu</LI><LI>Handle post-login popups (information/multiple logon)</LI><LI>Verify status bar for login errors</LI><LI>Return a reusable session wrapper</LI></OL><P class="">Example Script:</P><pre class="lia-code-sample language-python"><code>"""Open or create an SAP GUI session and authenticate (SSO or password).
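Usage examples (these mirror the execution commands shown in section 5 below and
the script's own --help epilog):

    python sap_connect.py --list
    python sap_connect.py --system "Your SAP Logon Entry Name" --client 100
    python sap_connect.py --no-sso --user YOUR_USER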
""" import argparse import getpass import logging import os import subprocess import sys import time from dataclasses import dataclass from typing import Any, Dict, List, Optional import win32com.client try: from sap_scripting import SapSession except Exception: class SapSession: def __init__(self): self.connection_index = 0 self.session_index = 0 self.application = None self.connection = None self.session = None def get_session_info(self) -&gt; Dict[str, Any]: info = self.session.Info return { "system": getattr(info, "SystemName", ""), "client": getattr(info, "Client", ""), "user": getattr(info, "User", ""), "transaction": getattr(info, "Transaction", ""), "response_time": getattr(info, "ResponseTime", ""), } logging.basicConfig( level=logging.INFO, format="%(asctime)s [%(levelname)s] %(message)s", datefmt="%H:%M:%S", ) log = logging.getLogger("sap_connect") @dataclass(frozen=True) class SapConnectConfig: default_system: str = os.getenv("SAP_SYSTEM", "") default_client: str = os.getenv("SAP_CLIENT", "") default_user: str = os.getenv("SAP_USER", "") default_language: str = os.getenv("SAP_LANGUAGE", "EN") default_sso: bool = os.getenv("SAP_SSO", "true").lower() in ("1", "true", "yes") login_wait: float = 2.0 popup_wait: float = 0.5 startup_timeout: int = 30 session_ready_timeout: float = 30.0 saplogon_candidates: tuple = ( r"C:\Program Files (x86)\SAP\FrontEnd\SAPgui\saplogon.exe", r"C:\Program Files\SAP\FrontEnd\SAPgui\saplogon.exe", r"C:\Program Files (x86)\SAP\SAPLogon\saplogon.exe", ) CONFIG = SapConnectConfig() DEFAULT_SYSTEM = CONFIG.default_system DEFAULT_CLIENT = CONFIG.default_client DEFAULT_USER = CONFIG.default_user DEFAULT_LANGUAGE = CONFIG.default_language DEFAULT_SSO = CONFIG.default_sso LOGIN_SCREEN_FIELDS = ( "wnd[0]/usr/txtRSYST-MANDT", "wnd[0]/usr/txtRSYST-BNAME", ) def _mask(text: str, enabled: bool) -&gt; str: if not enabled: return text if not text: return text if len(text) &lt;= 2: return "*" * len(text) return text[0] + ("*" * (len(text) - 2)) + text[-1] def _find_element(session: Any, element_id: str) -&gt; Optional[Any]: try: return session.findById(element_id) except Exception: return None def _attach_scripting_engine() -&gt; Any: rot_entry = win32com.client.GetObject("SAPGUI") return rot_entry.GetScriptingEngine def _ensure_saplogon_running() -&gt; Any: try: app = _attach_scripting_engine() log.info("SAP Logon already running. Active connections: %s", app.Children.Count) return app except Exception: log.info("SAP Logon not detected - attempting to start it ...") exe_path = next((p for p in CONFIG.saplogon_candidates if os.path.isfile(p)), None) if exe_path is None: raise ConnectionError( "SAP Logon is not running and saplogon.exe was not found in:\n" + "\n".join(f" {p}" for p in CONFIG.saplogon_candidates) ) log.info("Launching SAP Logon from: %s", exe_path) subprocess.Popen([exe_path]) deadline = time.time() + CONFIG.startup_timeout while time.time() &lt; deadline: try: app = _attach_scripting_engine() log.info("SAP Logon started successfully.") return app except Exception: time.sleep(1.0) raise ConnectionError( f"SAP Logon did not become ready within {CONFIG.startup_timeout} seconds. " "Check SAP GUI scripting settings." 
) def _wait_for_session_ready(connection: Any) -&gt; Any: deadline = time.time() + CONFIG.session_ready_timeout while time.time() &lt; deadline: try: session = connection.Children(0) _ = session.findById("wnd[0]").Text return session except Exception: time.sleep(0.5) raise RuntimeError("SAP window did not become ready within timeout after OpenConnection().") def _detect_screen_state(session: Any) -&gt; str: if _find_element(session, LOGIN_SCREEN_FIELDS[0]) is not None: return "LOGIN" try: tcode = session.Info.Transaction.strip() if tcode and tcode != "LOGIN": return "MENU" except Exception: pass try: if session.findById("wnd[0]").Text.strip(): return "MENU" except Exception: pass return "UNKNOWN" def _do_login( session: Any, client: str, user: str, password: str, language: str, sso: bool, ) -&gt; None: mandt = _find_element(session, "wnd[0]/usr/txtRSYST-MANDT") if mandt and mandt.Changeable and client: mandt.text = client log.info("Client set.") if not sso: bname = _find_element(session, "wnd[0]/usr/txtRSYST-BNAME") bcode = _find_element(session, "wnd[0]/usr/pwdRSYST-BCODE") langu = _find_element(session, "wnd[0]/usr/txtRSYST-LANGU") if bname: bname.text = user if bcode: bcode.text = password if langu and langu.Changeable: langu.text = language log.info("Credentials filled (user/password mode).") else: log.info("SSO mode - skipping username/password fields") session.findById("wnd[0]").sendVKey(0) log.info("Login submitted (Enter)") def _handle_multiple_logon_popup(session: Any) -&gt; None: log.info("Multiple logon detected - selecting OPT2 (keep existing sessions).") opt2 = _find_element(session, "wnd[1]/usr/radMULTI_LOGON_OPT2") if opt2: opt2.select() confirm = _find_element(session, "wnd[1]/tbar[0]/btn[0]") if confirm: confirm.press() else: session.findById("wnd[1]").sendVKey(0) def _handle_post_login_popups(session: Any) -&gt; None: time.sleep(CONFIG.popup_wait) for _ in range(5): popup = _find_element(session, "wnd[1]") if popup is None: break log.info("Popup detected: '%s'", popup.Text.strip()) try: if _find_element(session, "wnd[1]/usr/radMULTI_LOGON_OPT1"): _handle_multiple_logon_popup(session) else: popup.sendVKey(0) except Exception as exc: log.warning("Popup handling failed: %s", exc) time.sleep(CONFIG.popup_wait) def _verify_login(session: Any) -&gt; None: sbar = _find_element(session, "wnd[0]/sbar") if not sbar: return msg_type = getattr(sbar, "MessageType", "") msg_text = getattr(sbar, "Text", "") if msg_type in ("E", "A"): raise RuntimeError(f"Login failed [{msg_type}]: {msg_text}") if msg_type == "W": log.warning("Login warning [%s]: %s", msg_type, msg_text) elif msg_type == "S" and msg_text: log.info("Login status [%s]: %s", msg_type, msg_text) def _wrap_existing_session(application: Any, conn_idx: int, session_index: int = 0) -&gt; SapSession: sap = object.__new__(SapSession) sap.connection_index = conn_idx sap.session_index = session_index sap.application = application sap.connection = application.Children(conn_idx) sap.session = sap.connection.Children(session_index) return sap def connect_to_system( system: str = DEFAULT_SYSTEM, client: str = DEFAULT_CLIENT, user: str = DEFAULT_USER, password: str = "", language: str = DEFAULT_LANGUAGE, sso: bool = DEFAULT_SSO, ) -&gt; SapSession: if not sso: if not user: user = input(f"SAP user for {system}/{client}: ").strip() if not password: password = getpass.getpass(f"Password for {user}@{system}: ") app = _ensure_saplogon_running() log.info("SAP Logon ready. 
Active connections before: %s", app.Children.Count) log.info("Opening connection to '%s' (SSO=%s) ...", system, sso) try: connection = app.OpenConnection(system, True) except Exception as exc: raise ConnectionError( f"Cannot open connection to '{system}': {exc}\n" "Check SAP Logon entry name (case-sensitive)." ) from exc session = _wait_for_session_ready(connection) log.info("Session opened.") state = _detect_screen_state(session) log.info("Screen state after OpenConnection: %s", state) if state == "LOGIN": _do_login(session, client, user, password, language, sso) time.sleep(CONFIG.login_wait) elif state == "MENU": log.info("SSO authenticated automatically - skipped login screen.") else: log.warning("Unexpected screen state '%s' - attempting to continue.", state) _handle_post_login_popups(session) _verify_login(session) conn_idx = app.Children.Count - 1 sap = _wrap_existing_session(app, conn_idx, session_index=0) info = sap.get_session_info() log.info( "Connected - system=%s, client=%s, user=%s, transaction=%s", info["system"], info["client"], _mask(info["user"], True), info["transaction"], ) return sap def list_logon_entries(mask_sensitive: bool = True) -&gt; List[Dict[str, Any]]: try: app = _attach_scripting_engine() active: List[Dict[str, Any]] = [] for i in range(app.Children.Count): conn = app.Children(i) for j in range(conn.Children.Count): sess = conn.Children(j) active.append( { "conn_idx": i, "sess_idx": j, "system": sess.Info.SystemName, "client": _mask(sess.Info.Client, mask_sensitive), "user": _mask(sess.Info.User, mask_sensitive), "tcode": sess.Info.Transaction, } ) return active except Exception as exc: log.warning("list_logon_entries failed: %s", exc) return [] def _build_parser() -&gt; argparse.ArgumentParser: parser = argparse.ArgumentParser( description="Open a new SAP session from SAP Logon and log in.", formatter_class=argparse.RawDescriptionHelpFormatter, epilog=( "Examples:\n" " python sap_connect.py\n" " python sap_connect.py --system MYSYS --client 100\n" " python sap_connect.py --no-sso --user YOUR_USER\n" " python sap_connect.py --list\n" " python sap_connect.py --list --no-mask" ), ) parser.add_argument("--system", default=DEFAULT_SYSTEM, help="SAP Logon entry description") parser.add_argument("--client", default=DEFAULT_CLIENT, help="SAP client number") parser.add_argument("--user", default=DEFAULT_USER, help="SAP username (ignored for SSO)") parser.add_argument("--language", default=DEFAULT_LANGUAGE, help="Logon language") parser.add_argument("--no-sso", dest="sso", action="store_false", help="Use password login") parser.set_defaults(sso=DEFAULT_SSO) parser.add_argument("--list", action="store_true", help="List active SAP sessions and exit") parser.add_argument("--no-mask", action="store_true", help="Do not mask user/client in output") return parser def main() -&gt; None: args = _build_parser().parse_args() if args.list: sessions = list_logon_entries(mask_sensitive=not args.no_mask) if not sessions: print("No active SAP sessions found.") return print(f"\nActive SAP sessions ({len(sessions)}):") for s in sessions: print( f" [{s['conn_idx']}:{s['sess_idx']}] " f"{s['system']}/{s['client']} user={s['user']} tcode={s['tcode']}" ) return mode = "SSO" if args.sso else f"password (user={args.user or 'prompt'})" print(f"\nConnecting to {args.system} / client {_mask(args.client, not args.no_mask)} [{mode}] ...") try: sap = connect_to_system( system=args.system, client=args.client, user=args.user, language=args.language, sso=args.sso, ) info = sap.get_session_info() 
print(f"\n[OK] Connected: {info['system']} / {_mask(info['client'], not args.no_mask)} / {_mask(info['user'], not args.no_mask)}") print(f" Transaction : {info['transaction']}") print(f" Server : {info['response_time']} ms response") except (ConnectionError, RuntimeError) as exc: log.error("%s", exc) sys.exit(1) if __name__ == "__main__": main() ​</code></pre><P><BR /><BR /></P><H3 id="toc-hId-544861345">5. How to Connect (Execution)</H3><H4 id="toc-hId-477430559">5.1 List active sessions</H4><DIV class=""><DIV class=""><CODE><SPAN>python sap_connect.py --list </SPAN></CODE></DIV></DIV><H4 id="toc-hId-280917054">5.2 Connect with SSO</H4><DIV class=""><DIV class=""><CODE><SPAN>python sap_connect.py --system <SPAN class="">"Your SAP Logon Entry Name"</SPAN> --client 100 </SPAN></CODE></DIV></DIV><H4 id="toc-hId--413313546">5.3 Connect with username/password</H4><DIV class=""><DIV class=""><CODE><SPAN>python sap_connect.py --system <SPAN class="">"Your SAP Logon Entry Name"</SPAN> --client 10</SPAN></CODE></DIV></DIV><P>&nbsp;</P><P class=""><SPAN>Important:&nbsp;</SPAN><SPAN class="">--system</SPAN><SPAN>&nbsp;must match SAP Logon entry description exactly (case-sensitive in many setups).</SPAN><BR /><BR /></P><H3 id="toc-hId--316424044">6. After Connection: Reporting Flow (SE16N)</H3><P class="">A standard reporting script should do this:</P><OL class=""><LI><SPAN class="">StartTransaction("SE16N")</SPAN></LI><LI>Set table (for example<SPAN>&nbsp;</SPAN><SPAN class="">VBAK</SPAN><SPAN>&nbsp;</SPAN>/<SPAN>&nbsp;</SPAN><SPAN class="">VBRK</SPAN>)</LI><LI>Set filters in selection fields (date/material/org/etc.)</LI><LI>Execute (<SPAN class="">F8</SPAN>)</LI><LI>Read ALV grid rows/columns</LI><LI>Save output to Excel</LI></OL><H4 id="toc-hId--806340556">Example use cases</H4><UL class=""><LI>Last 6 months sales report</LI><LI>Open request list (E070)</LI><LI>Material-level pricing checks</LI></UL><HR /><H3 id="toc-hId--709451054">7. Excel Output Best Practices</H3><P class="">For business users, generate:</P><UL class=""><LI>Sheet 1: summary table (monthly totals/KPIs)</LI><LI>Sheet 2: raw data extract</LI><LI>Charts:<UL class=""><LI>pie chart for distribution</LI><LI>bar chart for monthly trend</LI></UL></LI><LI>Metadata row:<UL class=""><LI>generation timestamp</LI><LI>source table</LI><LI>filter range</LI></UL></LI></UL><HR /><H3 id="toc-hId--905964559">8. After Connection: Customizing Flow</H3><P class="">For customizing automation scripts:</P><OL class=""><LI>Start transaction (for example<SPAN>&nbsp;</SPAN><SPAN class="">OVX4</SPAN>,<SPAN>&nbsp;</SPAN><SPAN class="">OVXI</SPAN>,<SPAN>&nbsp;</SPAN><SPAN class="">OVXB</SPAN>)</LI><LI>Enter<SPAN>&nbsp;</SPAN><STRONG>Change mode</STRONG><SPAN>&nbsp;</SPAN>if needed</LI><LI>Use<SPAN>&nbsp;</SPAN><SPAN class="">New Entries</SPAN><SPAN>&nbsp;</SPAN>/<SPAN>&nbsp;</SPAN><SPAN class="">Copy As</SPAN><SPAN>&nbsp;</SPAN>consistently</LI><LI>Fill required fields only</LI><LI>Save</LI><LI>Handle transport popup:<UL class=""><LI>assign existing request or create new one</LI></UL></LI><LI>Read status bar and validate success</LI></OL><H3 id="toc-hId--1102478064">9. Transport Handling Pattern</H3><P class="">During save, scripts should:</P><OL class=""><LI>Detect “Prompt for Customizing Request”</LI><LI>If request field is empty, set existing request ID</LI><LI>Confirm popup</LI><LI>Close follow-up information popup</LI><LI>Verify status = success (<SPAN class="">MessageType = S</SPAN>)</LI></OL><H3 id="toc-hId--1298991569">10. 
Validation Strategy</H3><P class="">Always validate after script execution:</P><UL class=""><LI>Re-open the transaction and check that the entry exists</LI><LI>Cross-check table data in SE16N</LI><LI>Capture:<UL class=""><LI>status type</LI><LI>status text</LI><LI>key fields created/updated</LI></UL></LI><LI>Export the validation result if needed</LI></UL><P class="">&nbsp;</P><H3 id="toc-hId--1495505074">Conclusion</H3><P class="">By combining<SPAN>&nbsp;</SPAN><STRONG>Codex</STRONG><SPAN>&nbsp;</SPAN>for rapid script generation and<SPAN>&nbsp;</SPAN><STRONG>SAP GUI Scripting</STRONG><SPAN>&nbsp;</SPAN>for execution, teams can standardize repetitive SAP work, from reporting to customizing. The key to reliability is a strong connection layer, deterministic popup handling, and strict post-run validation.</P> 2026-03-09T08:50:54.651000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/sap-hana-backup-and-recoverability-backup-completion-vs-recovery/ba-p/14328433 SAP HANA Backup and Recoverability: Backup Completion vs. Recovery 2026-03-11T17:00:00.028000+01:00 HakanHaslaman https://community.sap.com/t5/user/viewprofilepage/user-id/185386 <P><EM>A successful backup confirms data was written; it does not, by itself, confirm recovery readiness.</EM><STRONG><BR /><BR />Introduction</STRONG><BR />This article summarizes publicly available SAP documentation and SAP Knowledge Base Articles and reflects the documented behavior of SAP HANA backup and recovery.</P><P>In many SAP HANA landscapes, backup monitoring is treated as the primary indicator of data protection. Backup jobs run regularly, verification checks succeed, and storage systems confirm that backup files are readable and complete.</P><P>However, SAP documentation and SAP Knowledge Base Articles describe situations in which a database recovery cannot be performed even though:</P><P>• data backups exist<BR />• backup jobs reported success<BR />• backup files are accessible<BR />• backup integrity checks do not report errors</P><P>During a recovery attempt,&nbsp;<SPAN class="">the restore procedure may fail with errors indicating missing encryption material.</SPAN></P><P>SAP Knowledge Base Articles document recovery failures when required encryption root keys are not available in the target system environment.<BR /><BR /><SPAN class="">This behavior is not related to a malfunction of the backup operation itself and does not indicate a storage failure.&nbsp;</SPAN>It is consistent with the documented recovery procedures of SAP HANA.</P><P>This article does not provide configuration or implementation guidance.<BR />Its purpose is to consolidate SAP documentation and SAP KBAs to clarify an important technical distinction:</P><P><STRONG>In SAP HANA, a successful backup operation alone does not necessarily confirm that a database system is recoverable in a newly installed or rebuilt environment.</STRONG></P><P>Understanding this distinction helps interpret restore failures correctly and align recovery expectations with the documented system behavior.<BR /><BR /><STRONG>1.
What SAP HANA Actually Backs Up</STRONG></P><P>According to the SAP HANA documentation, a data backup captures the database persistence at a specific point in time.</P><P>SAP Help Portal – Backups (SAP HANA Platform)<BR /><A class="" href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/158071e0c455487a89ea56ac53ad4b31.html" target="_new" rel="noopener noreferrer">https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/158071e0c455487a89ea56ac53ad4b31.html</A></P><P>The documentation explains that data backups store the database data pages, while log backups record subsequent database changes.</P><P>SAP Help Portal – Log Backups<BR /><A class="" href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c3bb7e33bb571014a03eeabba4e37541.html" target="_new" rel="noopener noreferrer">https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c3bb7e33bb571014a03eeabba4e37541.html</A></P><P>From the documentation perspective, backups capture database content.<BR />However, SAP documentation differentiates between database data, database metadata, and system security information.</P><P>The backup files contain the database persistence and its change history through log backups.<BR />Recovery, however, requires the database system to interpret that persistence correctly.<BR /><BR /><STRONG>2. The Restore Dependency Documented in SAP KBAs</STRONG></P><P>SAP Knowledge Base Articles document recovery failures even when backup files are valid and accessible.</P><P>KBA 3558019<BR />Recovering a database backup fails with error: encryption root keys and backup password are missing<BR /><A class="" href="https://userapps.support.sap.com/sap/support/knowledge/en/3558019" target="_new" rel="noopener noreferrer">https://me.sap.com/notes/3558019</A></P><P>KBA 3250470<BR />Recovery of tenant database fails due to missing encryption root keys<BR /><A href="https://me.sap.com/notes/3250470" target="_blank" rel="noopener noreferrer">https://me.sap.com/notes/3250470</A></P><P>In these documented situations:</P><P>• backup files are intact<BR />• storage is reachable<BR />• the backup catalog exists</P><P>The restore still fails because a required dependency is missing in the target system environment.</P><P>The missing dependency is the encryption root key material required for decryption during recovery.<BR /><BR /><STRONG>3. 
Encryption in SAP HANA Is Also a Recovery Dependency</STRONG></P><P>SAP HANA uses an internal key hierarchy for encryption.<BR />When encrypted persistence or encrypted backups are involved, recovery requires the database to decrypt stored content.</P><P>SAP documentation describes handling of encryption material and system operations in administrative procedures and system replication operations.</P><P>SAP HANA System Replication Guide<BR /><A class="" href="https://help.sap.com/doc/c81e9406d08046c0a118c8bef71f6bdc/2.0.07/en-US/SAP_HANA_System_Replication_Guide_en.pdf" target="_new" rel="noopener noreferrer">https://help.sap.com/doc/c81e9406d08046c0a118c8bef71f6bdc/2.0.07/en-US/SAP_HANA_System_Replication_Guide_en.pdf</A></P><P>From the documented restore behavior, recovery does not only read backup files.<BR />The system must reconstruct the database using security metadata belonging to the original database system.</P><P>Therefore, recovery typically requires multiple components:<BR />• backup data files<BR />• backup catalog<BR />• log chain (depending on recovery type)<BR />• encryption root key material</P><P>The encryption root key material required to decrypt encrypted backup content is not contained in the backup data files and must be available in the target system environment.<BR /><BR /><STRONG>4. Why a Valid Backup Can Still Not Be Restored</STRONG></P><P>This leads to a fundamental distinction.</P><P>A backup operation verifies that database persistence was successfully written.</P><P>A recovery operation verifies that the database system can interpret that persistence.</P><P>SAP HANA recovery errors documented in the KBAs show that the restore process depends on system security material outside the backup chain.</P><P>In these cases, restore failure is not caused by:</P><P>• backup corruption<BR />• storage failure<BR />• an unsuccessful backup job</P><P>It occurs because required system security state is not available in the recovery environment.<BR /><BR /><STRONG>5. Typical Recovery Scenarios</STRONG></P><P>Documented recovery problems appear when a new system environment is created, for example after:</P><P>• host rebuild<BR />• new system installation<BR />• landscape re-provisioning<BR />• disaster recovery activation<BR />• ransomware recovery</P><P>Administrators correctly preserve backups but the target system is not identical to the original database system from a security perspective.</P><P>Even if SID, instance number, and database name are identical, the required encryption root key material is not available in the target system environment.</P><P>As a result, the recovery process cannot decrypt the persistence and the database cannot be reconstructed.<BR /><BR /><STRONG>6. 
Interpreting the Documented Behavior</STRONG></P><P>SAP documentation and SAP KBAs together indicate an important technical understanding:</P><P>The documented recovery behavior shows that successful backup creation alone is not sufficient to guarantee that recovery can be executed in a different or newly installed system environment.</P><P>The backup protects the stored data.<BR />The encryption material enables the database to interpret that data.</P><P>Without the corresponding system security material, database persistence cannot be reconstructed even though valid backups exist.<BR /><BR /><STRONG>Conclusion</STRONG></P><P>SAP documentation and SAP Knowledge Base Articles document that encrypted backup recovery requires the corresponding encryption material to be present in the restore environment.</P><P>Therefore:</P><P>A successful SAP HANA backup confirms that the backup operation completed successfully.</P><P>It does not, by itself, confirm that a database recovery can be performed in a newly installed or rebuilt system.</P><P>Understanding this distinction helps correctly interpret restore failures and reduces recovery analysis time.</P><P>For implementation or configuration procedures, always follow the official SAP documentation and SAP Notes.<BR /><BR /><STRONG>What This Means for Administrators</STRONG></P><P>The documented recovery behavior shows that backup monitoring alone does not fully represent recovery readiness.</P><P>When planning disaster recovery procedures, administrators should ensure that recovery tests validate the complete restore procedure and not only the existence or readability of backup files.</P><P>Only a successfully executed recovery test confirms that the documented recovery dependencies are fulfilled in the target environment.</P><P>For operational procedures and configuration details, always follow official SAP documentation and SAP Notes.<BR /><BR /><STRONG>Disclaimer</STRONG></P><P>This article is an interpretation of publicly available SAP documentation and SAP Knowledge Base Articles.<BR />Only official SAP documentation and SAP Notes are authoritative.</P> 2026-03-11T17:00:00.028000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/real-time-architecture-in-sap-cap-mastering-asynchronous-jobs-gateways-with/ba-p/14347669 Real-Time Architecture in SAP CAP: Mastering Asynchronous Jobs, Gateways with WebSockets 2026-03-12T21:26:14.975000+01:00 rgadirov https://community.sap.com/t5/user/viewprofilepage/user-id/151360 <P>In my&nbsp;<A class="" title=" SAP CAP on HANA XSA" href="https://community.sap.com/t5/technology-blog-posts-by-members/sap-cap-on-hana-xsa-scaling-industrial-data-applications-beyond-the-cloud/bc-p/14346514#M177965" target="_blank">previous article (SAP CAP on HANA XSA)</A>, we explored how the <STRONG>SAP Cloud Application Programming Model (CAP)</STRONG> on <STRONG>SAP HANA 2.0 XSA</STRONG> provides the raw power needed for high-performance industrial applications, specifically focusing on mass data ingestion via native Stored Procedures.</P><P>However, processing high volumes of data in real-tme is only half the battle. In a mission-critical Industrial Data Application, the user interface must stay in sync with these background processes. 
If a mass import takes several minutes, a standard synchronous request will lead to <STRONG>Gateway Timeouts (HTTP 502)</STRONG> and a frustrated user.</P><P>This second part of our series focuses on how to close this gap by combining CAP, the SAP Job Scheduler, and <STRONG>Express WebSockets</STRONG> into a responsive real-time architecture.</P><P>While the <STRONG>SAP Cloud Application Programming Model (CAP)</STRONG> is most commonly associated with the SAP Business Technology Platform (BTP), it offers immense productivity on-premise as well. However, the challenge lies in harmonizing long-running backend jobs (e.g., mass data imports via Stored Procedures) with the need for real-time updates in the frontend (<STRONG>Angular UI</STRONG>) under XSA security and routing constraints (<STRONG>Approuter</STRONG>). Relying on purely synchronous requests in this context risks gateway timeouts and a poor user experience.</P><P>In this post, I present a responsive real-time architecture that closes these gaps by combining CAP, SAP Job Scheduler, Angular WebSockets, and native HANA Stored Procedures.</P><HR /><H2 id="toc-hId-1791522182">Architecture at a Glance</H2><P>Our high-level architecture diagram divides the Industrial Data Application into logical layers. The central data flow is categorized into color-coded zones representing the Client, Gateway, Service, Data, and External Systems.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="rgadirov_0-1773345967471.jpeg" style="width: 841px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/382761i539459543FA15B73/image-dimensions/841x459?v=v2" width="841" height="459" role="button" title="rgadirov_0-1773345967471.jpeg" alt="rgadirov_0-1773345967471.jpeg" /></span></P><H3 id="toc-hId-1724091396">Core Components of the Architecture:</H3><UL><LI><P><STRONG>Client Layer (Purple):</STRONG> An Angular UI providing real-time charting dashboards. It communicates with the backend via HTTPS OData APIs and WSS (WebSockets) Event subscriptions.</P></LI><LI><P><STRONG>Gateway Layer (Green):</STRONG> The <STRONG>SAP Approuter</STRONG> serves as the central entry point and gateway. It manages API destinations, handles JWT tokens for authentication (linked to the XSUAA/UAA provider), and routes both HTTPS (OData/Actions) and WebSocket traffic.</P></LI><LI><P><STRONG>Service Layer (Blue):</STRONG> An SAP CAP Node.js backend extended with <CODE>express-ws</CODE>. It contains the standard OData CRUD layer, business logic, and specialized mass data handlers. Central to this is the <STRONG>Express WebSocket Endpoint</STRONG> with a client registry to manage push notifications. This layer is also integrated with the <STRONG>SAP Job Scheduler</STRONG>.</P></LI><LI><P><STRONG>Data Layer (Orange):</STRONG> The SAP HANA 2.0 XSA on-premise platform. An <STRONG>HDI Container</STRONG> stores the primary application data in the Column Store. <STRONG>Native HANA Stored Procedures (SQLScript)</STRONG> are used to execute the actual mass data calls and imports.</P></LI><LI><P><STRONG>External Systems (Amber):</STRONG> External industrial APIs, historians, or sensor interfaces that act as the source for massive data streams. 
Native HANA procedures communicate with these systems for data ingestion.</P></LI></UL><HR /><H2 id="toc-hId-1398495172">Technical Implementation: WebSocket Registry and Broadcasting</H2><P>To transmit real-time events efficiently to the Angular UI, we use <CODE>express-ws</CODE> within the CAP Node.js backend. This requires a structured registry to manage WebSocket clients based on their specific interests (event types or channels).</P><P>In our project, we defined the WebSocket channels as follows. This registry utilizes <CODE>Sets</CODE> to store unique WebSocket connections for each business event type:</P><pre class="lia-code-sample language-javascript"><code>// db/src/ws/wsRegistry.js (Example location) 'use strict'; // Business-specific registration of WebSocket channels const wsClients = { 'validation-status': new Set(), // Critical for the numbered workflow 'import-status': new Set(), 'committing-status': new Set(), 'industrial-trade': new Set(), 'systemStatusChanged': new Set(), 'UserChanged': new Set(), 'process_step': new Set(), 'job_scheduler': new Set(), 'program-input-mode': new Set(), 'batch-job-status': new Set(), }; module.exports = { wsClients };</code></pre><H2 id="toc-hId-1201981667">The Asynchronous Job Workflow</H2><P>The most critical part of this architecture is the asynchronous job flow, numbered 1–6 in the diagram. This flow is a best practice for industrial scenarios where jobs cannot be completed synchronously. It demonstrates how CAP and WebSockets prevent gateway timeouts.</P><H3 id="toc-hId-1134550881">Let’s break down the steps:</H3><OL><LI><P><STRONG>Angular UI triggers CAP Action (Client -&gt; Service):</STRONG> The process starts when the Angular UI triggers a CAP Action (or service call) via HTTPS. This follows the standard HTTP OData/Actions path.</P></LI><LI><P><STRONG>Fast HTTP 202 Response (Service -&gt; Client):</STRONG> This is a decisive step for UX and timeout prevention. Instead of waiting for the job to finish, the CAP service responds immediately with an <STRONG>HTTP 202 (Accepted)</STRONG>. This signals to the frontend that the job has been queued and is running in the background.</P></LI><LI><P><STRONG>CAP Logic calls Stored Procedure (Service -&gt; Data):</STRONG> The CAP backend hands the heavy lifting off to the native HANA layer. This is done via a <CODE>CALL proc</CODE> (SQL/ODBC) that triggers the native HANA Stored Procedure (SQLScript).</P></LI><LI><P><STRONG>Native HANA Job Execution (Data -&gt; External -&gt; Data):</STRONG> The Stored Procedure executes the native job. It connects to external industrial APIs, fetches measurement data, processes it, and saves it into the HDI container’s Column Store. This can take seconds or minutes without blocking the UI.</P></LI><LI><P><STRONG>Job Completion &amp; Event Push (Data -&gt; Service) -&gt; WebSocket Trigger:</STRONG> Once the HANA procedure finishes, it sends a status trigger back to the CAP service. The CAP handler then triggers the WebSocket event.</P></LI></OL><P>Below is the code snippet showing how the business logic triggers the event using the <CODE>wsClients</CODE> registry. In this example, we send a status update to the <CODE>validation-status</CODE> channel:</P><pre class="lia-code-sample language-javascript"><code>// srv/handlers/validationHandler.js (Example location) const { wsClients } = require('../ws/wsRegistry'); // Import the registry // ... Inside business logic, after successful job completion ... 
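// Note: 'result' is assumed to come from the preceding stored-procedure call,
// while 'final' and 'isValid' are hard-coded here only to keep the example compact.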
const jobId = result.jobId; const final = true; const isValid = true; // Trigger wsSend with business payload wsSend("validation-status", { jobId, status: final ? "SUCCESS" : "IN_PROGRESS", request_Id: result?.CheckRequest || result?.Request || result?.request || null, isValid: final ? isValid : null, detailStatus: final ? 200 : 202, resultStatus: final ? (isValid ? 'VALIDATED' : 'NOT_VALIDATED') : 'PENDING' });</code></pre><P>6.&nbsp;&nbsp;<STRONG>'JobCompleted' Event Sent (Service -&gt; Client):</STRONG> The Express WebSocket endpoint receives the trigger and uses the registry (<CODE>wsClients['validation-status']</CODE>) to actively push the <CODE>JobCompleted</CODE> event to the connected Angular UI via the WSS path.</P><P>Here is the core <CODE>wsSend</CODE> broadcasting method. Note how it cleans up closed connections automatically:</P><pre class="lia-code-sample language-javascript"><code>// srv/ws/wsBroadcaster.js const { wsClients } = require('./wsRegistry'); async function wsSend(type, payload) { try { const clients = wsClients[type]; const idx = Number(process.env.CF_INSTANCE_INDEX || process.env.INSTANCE_INDEX || 0); console.log(`[wsSend] broadcast on instance ${idx}, channel ${type}`); if (!clients || clients.size === 0) return false; const msg = JSON.stringify({ event: 'statusEvent', data: { type, ...payload } }); for (const ws of clients) { try { if (ws.readyState === ws.OPEN) { ws.send(msg); } else { clients.delete(ws); // Cleanup } } catch (err) { clients.delete(ws); } } return true; } catch (err) { return false; } }</code></pre><H3 id="toc-hId-938037376">The UI Reaction (Ensuring Consistency)</H3><P>In response to the incoming <CODE>JobCompleted</CODE> event, the Angular UI performs an <STRONG>OData Refresh</STRONG>. This ensures the most recent data is pulled from the CAP service via the standard OData CRUD path, keeping the UI perfectly in sync.</P><HR /><H2 id="toc-hId-612441152">Technical Notes: package.json Setup</H2><P>To enable this in your CAP Node.js backend, you'll need <CODE>express-ws</CODE>. This allows you to integrate WebSockets directly into the Express app that CAP sits on:</P><pre class="lia-code-sample language-javascript"><code>{ "dependencies": { "express-ws": "5.0.2", "express": "^4" } }</code></pre><H2 id="toc-hId-415927647">Conclusion</H2><P>By combining the productivity of <STRONG>SAP CAP</STRONG> with the raw power of <STRONG>HANA Stored Procedures</STRONG> and the push capabilities of <STRONG>WebSockets</STRONG>, we’ve built a resilient real-time architecture for on-premise industrial applications. This asynchronous workflow with <STRONG>HTTP 202</STRONG> effectively kills gateway timeouts and ensures a fluid user experience.</P><H3 id="toc-hId-348496861">Key Benefits:</H3><UL><LI><P><STRONG>Peak Performance:</STRONG> Native HANA procedures handle massive data imports without Node.js overhead.</P></LI><LI><P><STRONG>Enhanced UX:</STRONG> Immediate 202 responses and real-time status updates prevent UI lockups.</P></LI><LI><P><STRONG>Maintainability:</STRONG> A structured WebSocket registry allows for a clean separation of business channels.</P></LI><LI><P><STRONG>Data Sovereignty:</STRONG> The entire stack remains securely on your on-premise HANA XSA infrastructure.</P></LI></UL><P>How do you handle mass data jobs and real-time requirements in your CAP apps? Are you already utilizing asynchronous patterns? 
I’d love to hear your thoughts in the comments!</P> 2026-03-12T21:26:14.975000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/free-migration-assessments-for-modernization-with-sap-bdc/ba-p/14359209 Free migration assessments for modernization with SAP BDC 2026-03-27T14:59:14.022000+01:00 VenkatG https://community.sap.com/t5/user/viewprofilepage/user-id/152354 <P>SAP is currently offering a limited-time complimentary migration assessment service for modernization with SAP Business Data Cloud. All customers running the applications listed below, either on-premises or in a private cloud, are eligible to take advantage of this free service.</P><UL><LI>SAP BW 7.x (all databases) and all versions of BW/4HANA.</LI><LI>SAP BusinessObjects (BOBJ), all versions</LI><LI>SAP Business Planning and Consolidation (BPC), all versions</LI><LI>SAP HANA and SAP IQ databases running custom-built (non-SAP) applications</LI></UL><P>These assessments provide deep insights into active and inactive objects, offer guidance on what can be modernized, and provide an actionable modernization plan with best practices.&nbsp;</P><P><SPAN>To better understand the process, diagram below illustrates the process.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="AssessmentProcessImage.jpg" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/389485i91DE4FB0315FA7C9/image-size/large?v=v2&amp;px=999" role="button" title="AssessmentProcessImage.jpg" alt="AssessmentProcessImage.jpg" /></span></P><P>Key illustrated steps include</P><UL><LI>Submit an online request, which takes a couple of minutes. SAP will review the requests and approve them on a first-come, first-served basis. Customers will be provided with a few scripts to gather necessary information, which usually takes a few minutes to run on the production systems.</LI><LI>Upon sharing with SAP, expert teams will review the information and generate an assessment report.</LI><LI>Then, the SAP expert team will deliver a readout session, sharing insights into the system's usage patterns and a personalized transformation journey with modernization options to build a scalable solution and prepare for the era of AI with SAP BDC.</LI></UL><P>Additionally, the video below walks through the steps to request the free assessment service.</P><P><A href="https://dam.sap.com/mac/embed/public/vp/a/tU8Suc2?rc=10&amp;doi=SAP1289501&amp;includeSapBrandedWraper=true" target="_self" rel="noopener noreferrer"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Video Thumbnail.jpg" style="width: 640px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/389481i0D9937355E0F3C69/image-dimensions/640x405?v=v2" width="640" height="405" role="button" title="Video Thumbnail.jpg" alt="Video Thumbnail.jpg" /></span></A></P><P>Refer to the links below for additional information and to request free assessments.</P><UL><LI><A href="https://www.sap.com/registration/sap-bdc-modernization-assessment.html" target="_self" rel="noopener noreferrer">Assessment Request Form</A></LI><LI><A href="https://www.sap.com/products/data-cloud/sap-migration-assessment.html" target="_blank" rel="noopener noreferrer">Modernize with SAP Business Data Cloud</A></LI><LI><A href="https://www.sap.com/products/data-cloud/sap-bw-migration.html" target="_blank" rel="noopener noreferrer">Evolve your SAP Business Warehouse</A></LI><LI><A title="Readiness assessment service for SAP Business Data 
Cloud" href="https://dam.sap.com/mac/embed/public/pdf/a/9KCtiGp?rc=10&amp;doi=SAP1271624" target="_self" rel="noopener noreferrer">Readiness assessment service for SAP Business Data Cloud</A></LI></UL><P>Join the <A href="https://community.sap.com/t5/data-professionals/gh-p/data-professionals" target="_blank">SAP Data Professional Community</A>&nbsp;for the latest on Data &amp; AI topics.&nbsp;</P><P>&nbsp;</P><P>&nbsp;</P> 2026-03-27T14:59:14.022000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-sql-statement-collection-quot-the-swiss-army-knife-for/ba-p/14365736 Good to know: "SQL Statement Collection" the Swiss army knife for SAP HANA database administrators 2026-04-04T11:21:44.192000+02:00 Laszlo_Thoma https://community.sap.com/t5/user/viewprofilepage/user-id/170406 <P><ul =""><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-sql-statement-collection-quot-the-swiss-army-knife-for/ba-p/14365736#toc-hId-1664227788">Why was this blog post created?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-sql-statement-collection-quot-the-swiss-army-knife-for/ba-p/14365736#toc-hId-1467714283">Where can I find the most important information about SQL Statement Collection reports?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-sql-statement-collection-quot-the-swiss-army-knife-for/ba-p/14365736#toc-hId-1271200778">Where to find learning materials?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-sql-statement-collection-quot-the-swiss-army-knife-for/ba-p/14365736#toc-hId-1074687273">Other articles</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-sql-statement-collection-quot-the-swiss-army-knife-for/ba-p/14365736#toc-hId-878173768">Do you have further questions?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-quot-sql-statement-collection-quot-the-swiss-army-knife-for/ba-p/14365736#toc-hId-681660263">Contribution</a></li></ul></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SAP_Community_Blog_Banner_2026.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393265iDD2CDF063060F099/image-size/large?v=v2&amp;px=999" role="button" title="SAP_Community_Blog_Banner_2026.png" alt="SAP_Community_Blog_Banner_2026.png" /></span></P><P class="lia-align-center" style="text-align: center;"><span class="lia-unicode-emoji" title=":graduation_cap:">🎓</span><FONT color="#FF0000">The blog contains SAP Learning references. </FONT><span class="lia-unicode-emoji" title=":television:">📺</span></P><P class="lia-align-right" style="text-align : right;"><FONT color="#FF0000">last updated: 2026-04-04</FONT></P><H1 id="toc-hId-1664227788">Why was this blog post created?</H1><P>The SAP HANA database comes with a set of tools that support the work of database administrators, developers, and other support teams. 
The tools are available both internally (SAP Employees) and externally. Knowledge of this set of tools is essential for SAP HANA database operators.</P><H1 id="toc-hId-1467714283">Where can I find the most important information about SQL Statement Collection reports?</H1><P>The following SAP Note is the central source of the toolkit <span class="lia-unicode-emoji" title=":wrench:">🔧</span> called "SQL Statement Collection".</P><P><span class="lia-unicode-emoji" title=":blue_book:">📘</span>&nbsp;<A href="https://me.sap.com/notes/1969700" target="_blank" rel="noopener noreferrer">1969700</A> - SQL Statement Collection for SAP HANA</P><P>There is a huge number of reports available for different purposes. There is an SAP Knowledge Base Article that explains the reports one by one in detail.</P><P><SPAN><span class="lia-unicode-emoji" title=":closed_book:">📕</span></SPAN>&nbsp;<A href="https://me.sap.com/notes/3311408" target="_blank" rel="noopener noreferrer">3311408</A> - Bookmark of SQL Statement Collection reports for SAP HANA</P><P>The SAP Knowledge Base Article is under construction and will contain more reports in the future.</P><P>SAP Help Portal -&nbsp;<SPAN>SAP HANA Troubleshooting and Performance Analysis Guide -&nbsp;<A href="https://help.sap.com/docs/SAP_HANA_PLATFORM/bed8c14f9f024763b0777aa72b5436f6/69d0f22dee8f4c0e947e4b6327a51a7b.html" target="_blank" rel="noopener noreferrer">Using the SQL Statement Collection for Analysis and Health Checks</A></SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SAP_Community_Blog_Image_SQLStatementCollection.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393268i085B7E53B835C69E/image-size/large?v=v2&amp;px=999" role="button" title="SAP_Community_Blog_Image_SQLStatementCollection.png" alt="SAP_Community_Blog_Image_SQLStatementCollection.png" /></span></P><H1 id="toc-hId-1271200778"><STRONG>Where to find learning materials?</STRONG></H1><UL><LI>SAP Learning -&nbsp;<A href="https://learning.sap.com/courses/sap-hana-installation-and-administration" target="_blank" rel="noopener noreferrer">SAP HANA - Installation and Administration</A> -&nbsp;<A href="https://learning.sap.com/courses/sap-hana-installation-and-administration/using-the-sap-hana-statement-library" target="_blank" rel="noopener noreferrer">Using the SAP HANA Statement Library</A></LI></UL><H1 id="toc-hId-1074687273"><SPAN>Other articles</SPAN></H1><P><span class="lia-unicode-emoji" title=":writing_hand:">✍️</span>&nbsp;<A href="https://blogs.sap.com/2023/03/29/where-can-i-find-knowledge-and-information-belongs-to-sap-hana/" target="_blank" rel="noopener noreferrer">Where can I find knowledge and information belongs to SAP HANA?</A><BR /><span class="lia-unicode-emoji" title=":writing_hand:">✍️</span>&nbsp;<A href="https://blogs.sap.com/2023/06/02/where-can-i-find-information-about-the-available-tools-for-sap-hana-all-types-of-use/" target="_blank" rel="noopener noreferrer">Where can I find information about the available tools for SAP HANA (all types of use)?</A></P><H1 id="toc-hId-878173768">Do you have further questions?</H1><P>Please do not hesitate to contact me if you have a question or an observation regarding the article.<BR />Q&amp;A link for SAP HANA:<SPAN>&nbsp;</SPAN><A href="https://answers.sap.com/tags/73554900100700000996" target="_blank" rel="noopener
noreferrer">https://answers.sap.com/tags/73554900100700000996</A>&nbsp;</P><H1 id="toc-hId-681660263">Contribution</H1><P>If you find any missing information belonging to the topic, please let me know. I am happy to add the new content. My intention is to maintain the content continuously to keep the info up-to-date.</P><P><FONT color="#999999"><STRONG>Release Information</STRONG></FONT></P><TABLE width="100%" cellspacing="1"><TBODY><TR><TD height="58px"><FONT color="#999999">Release Date</FONT></TD><TD height="58px"><FONT color="#999999">Description</FONT></TD></TR><TR><TD height="30px"><FONT color="#999999">2026.04.04</FONT></TD><TD height="30px"><FONT color="#999999">First/initial Release of the SAP Blog Post documentation (Technical Article).</FONT></TD></TR></TBODY></TABLE> 2026-04-04T11:21:44.192000+02:00 https://community.sap.com/t5/technology-blog-posts-by-members/strategies-for-downtime-optimized-dmo-dodmo-doc-dmove2s4-for-large/ba-p/14366512 Strategies for Downtime-Optimized DMO : doDMO, doC, DMOVE2S4 for large databases 2026-04-06T14:14:14.510000+02:00 sumitjais https://community.sap.com/t5/user/viewprofilepage/user-id/651658 <P><STRONG>Context and Challenges:</STRONG><BR />Recently I migrated a very large non-HANA database of 50+ TB to HANA on AWS using Downtime-Optimized DMO (doDMO).<BR />While this was a shift from on-premise to AWS, the customer sought to migrate to HANA on AWS first, i.e. to validate data compression, operations on a 32TB U7inh EC2 instance of AWS, etc., and decided to change the Suite later. Thus, the plan to use the DMO move to SAP S/4HANA (DMOVE2S4) approach along with Downtime-Optimized Conversion (doC) had to be dropped.</P><P>Operating in R3load pipeline mode, doDMO is much faster than the usual DMO with System Move approach, which operates in file mode only. However, since the downtime-optimized options (doDMO and doC) do not support System Move, a customer who wants to migrate only the database using doDMO faces the following challenges:<BR />1. No application migration: the customer has to install a separate SAP application, make the necessary changes, and connect the new SAP application to the migrated database.<BR />2. No usability of the source DB after migration: the customer has to restore the source database or scrap it, as the database cannot be used for an SAP system since DMO artifacts remain in it even after the database migration is completed.</P><P><STRONG>Do DMOVE2S4 and DMO with System Move solve these problems?</STRONG><BR />DMOVE2S4 is an approach that can be incorporated with DMO, doDMO, and doC based migrations. It is not a "Selectable" choice in the SUM UI, but a method that combines with DMO without letting SUM know that this approach is actually used. With DMOVE2S4, there is no change in application-specific tasks such as the Simplification Item Check and activities such as FIN Data Conversion, and they are performed in uptime/downtime based on the chosen option, i.e.
Standard or Downtime-Optimized.<BR />Unlike the usual SUM DMO, DMOVE2S4 lets SUM operate on the application installed in the target environment (which is initially connected to the source database).<BR />With the completion of a DMOVE2S4 based migration, the ASCS is also moved to the target environment, where the application finds itself already connected to the new database of DMO.</P><P>While the DMOVE2S4 approach tries to solve the first problem to some extent, it comes with its own technical and infrastructure requirements&nbsp;such as latency &lt; 20 ms &amp; bandwidth &gt; 400 MBit/s, and the limitation that the target must be S/4HANA.<BR />The DMO with System Move option was incorporated in DMO to solve both of the above problems, but since it operates in file mode, it is not very fast or downtime friendly.<BR />Even the parallel mode (using rsync for files) of DMO with System Move may not match downtime-optimized DMO, or even the normal DMO, which also uses memory pipes for database migration.<BR />Thus, for a data-center migration of a large database &amp; application that is not aiming for S/4HANA, we still need to make some necessary compromises, e.g. install and connect SAP applications, restore the source database (if needed for any use), etc.</P><P><STRONG>Available choices for a customer</STRONG></P><P>With the introduction of doDMO and doC, along with the DMOVE2S4 approach, customers running SAP ERP or S/4HANA have multiple paths to transition their ERP to S/4HANA on hyperscalers such as AWS and Azure.<BR />On the journey to S/4HANA, the multiple options focused on reducing conversion or migration downtime bring us to a decision between two questions: what is possible and what is optimal.<BR />The diagram below demonstrates the optimal choice that is possible for the different migration cases, keeping an eye on business downtime.<BR />Factors such as cost and complexity can often be inversely proportional to the downtime optimized in different ways, and they must be traded off well.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="HANA Roadmap" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/395557iAC21DDE1EE8AB39F/image-size/large?v=v2&amp;px=999" role="button" title="S4HANA.png" alt="HANA Roadmap" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">HANA Roadmap</span></span></P><P>&nbsp;In case SUM DMO is not supported, we always have the option of a classical export/import of flat files for both homogeneous and heterogeneous migrations using SWPM.</P> 2026-04-06T14:14:14.510000+02:00 https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-sap/transaction-cannot-be-revived-after-delayed-abortion-rap-runtime-019-must/ba-p/14346818 Transaction cannot be revived after delayed abortion(RAP_RUNTIME 019) - Must ROLLBACK After Failures 2026-04-08T11:52:21.989000+02:00 Xavier_Newbie https://community.sap.com/t5/user/viewprofilepage/user-id/1492998 <H2 id="toc-hId-1791494157">1.
Symptom (ST22 / Dump Keywords)</H2><P>A typical dump looks like this (same category as in your case):</P><UL><LI>Runtime Error:<SPAN>&nbsp;</SPAN><CODE>RAISE_SHORTDUMP</CODE></LI><LI>Exception:<SPAN>&nbsp;</SPAN><CODE>CX_SADL_DUMP_APPL_MODEL_ERROR</CODE></LI><LI>T100:<SPAN>&nbsp;</SPAN><CODE>RAP_RUNTIME</CODE><SPAN>&nbsp;</SPAN>/<SPAN>&nbsp;</SPAN><CODE>019</CODE></LI><LI>Short Text:<SPAN>&nbsp;</SPAN><CODE>Transaction cannot be revived after delayed abortion (BO: I_ENTERPRISEPROJECTTP_2)</CODE></LI></UL><P>When debugging you may also notice RAP internal state like:</P><UL><LI><CODE>CL_RAP_BHV_PROCESSOR-&gt;IF_RAP_LEGACY_TRANSACTION~MV_ABORT = abap_true</CODE></LI></UL><P>Meaning: RAP has already marked the logical transaction as<SPAN>&nbsp;</SPAN><STRONG>aborted</STRONG><SPAN>&nbsp;</SPAN>(delayed abortion). It must not be reused.<BR /><BR /></P><H2 id="toc-hId-1594980652">2. Root Cause: RAP Transaction State + Delayed Abortion</H2><P>A RAP EML “logical transaction” typically consists of two phases:</P><OL><LI><STRONG>Transaction phase</STRONG></LI></OL><UL><LI><CODE>MODIFY ENTITIES</CODE><SPAN>&nbsp;</SPAN>runs validations/determinations/checks.</LI><LI>Errors are usually returned in<SPAN>&nbsp;</SPAN><CODE>FAILED</CODE><SPAN>&nbsp;</SPAN>/<SPAN>&nbsp;</SPAN><CODE>REPORTED</CODE></LI></UL><OL><LI><STRONG>Save phase</STRONG></LI></OL><UL><LI><CODE>COMMIT ENTITIES</CODE><SPAN>&nbsp;</SPAN>triggers the save sequence (finalize/save, etc.).</LI><LI>Errors again come back via<SPAN>&nbsp;</SPAN><CODE>FAILED</CODE><SPAN>&nbsp;</SPAN>/<SPAN>&nbsp;</SPAN><CODE>REPORTED</CODE><SPAN>&nbsp;</SPAN>(in the commit response this is the<SPAN>&nbsp;</SPAN><STRONG>LATE</STRONG><SPAN>&nbsp;</SPAN>response).</LI></UL><P>If anything fails in either phase, RAP can enter<SPAN>&nbsp;</SPAN><STRONG>delayed abortion</STRONG>:<BR />Your ABAP code continues running,but the RAP logical transaction becomes<SPAN>&nbsp;</SPAN><STRONG>unusable</STRONG>.<BR />Any subsequent EML using that aborted transaction context may lead to<SPAN>&nbsp;</SPAN><CODE>RAP_RUNTIME 019</CODE>.</P><pre class="lia-code-sample language-abap"><code>MODIFY ENTITIES OF &lt;BO&gt; ... FAILED ls_failed_tx REPORTED ls_reported_tx. COMMIT ENTITIES ... FAILED ls_failed_save REPORTED ls_reported_save. " If commit failed but you don't rollback here, " next EML call can dump (RAP_RUNTIME 019) MODIFY ENTITIES OF &lt;BO&gt; ...</code></pre><P><SPAN>Key point:&nbsp;</SPAN><STRONG>Never continue with another EML step after a failure without rollback.<BR /><BR /></STRONG></P><H2 id="toc-hId-1398467147">5. Correct Example&nbsp;Using<SPAN>&nbsp;</SPAN><CODE>I_EnterpriseProjectTP_3</CODE></H2><pre class="lia-code-sample language-abap"><code>*&amp;---------------------------------------------------------------------* *&amp; Report zdemo_rap_eml_epp3 *&amp;---------------------------------------------------------------------* *&amp; *&amp;---------------------------------------------------------------------* REPORT zdemo_rap_eml_epp3. PARAMETERS: p_uuid TYPE sysuuid_c36 OBLIGATORY, p_desc TYPE string DEFAULT 'Demo update from EML'. START-OF-SELECTION. DATA lv_uuid_x16 TYPE sysuuid_x16. TRY. cl_system_uuid=&gt;convert_uuid_c36_static( EXPORTING uuid = p_uuid IMPORTING uuid_x16 = lv_uuid_x16 ). CATCH cx_uuid_error. WRITE: / 'Invalid UUID format. Use 36-char UUID (xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx).'. RETURN. ENDTRY. 
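* Note: the *_save structures below use the LATE response types (FAILED LATE /
* REPORTED LATE), which COMMIT ENTITIES requires; the *_tx structures use the
* regular response types for the transaction phase (MODIFY ENTITIES).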
DATA: ls_failed_tx TYPE RESPONSE FOR FAILED i_enterpriseprojecttp_3, ls_reported_tx TYPE RESPONSE FOR REPORTED i_enterpriseprojecttp_3, ls_failed_save TYPE RESPONSE FOR FAILED LATE i_enterpriseprojecttp_3, ls_reported_save TYPE RESPONSE FOR REPORTED LATE i_enterpriseprojecttp_3. TRY. " Transaction phase MODIFY ENTITIES OF i_enterpriseprojecttp_3 ENTITY EnterpriseProject UPDATE FIELDS ( ProjectDescription ) WITH VALUE #( ( %key-ProjectUUID = lv_uuid_x16 ProjectDescription = p_desc ) ) FAILED ls_failed_tx REPORTED ls_reported_tx. IF ls_failed_tx IS NOT INITIAL. WRITE: / 'MODIFY ENTITIES failed (transaction phase) -&gt; ROLLBACK ENTITIES'. ROLLBACK ENTITIES. RETURN. ENDIF. " Save phase COMMIT ENTITIES BEGIN RESPONSE OF i_enterpriseprojecttp_3 FAILED ls_failed_save REPORTED ls_reported_save. COMMIT ENTITIES END. IF ls_failed_save IS NOT INITIAL. WRITE: / 'COMMIT ENTITIES failed (save phase) -&gt; ROLLBACK ENTITIES'. ROLLBACK ENTITIES. RETURN. ENDIF. WRITE: / 'COMMIT ENTITIES OK.'. CATCH cx_root INTO DATA(lx_root). WRITE: / 'Unexpected exception:', lx_root-&gt;get_text( ). " Keep the EML contract: if anything goes wrong, end the logical transaction. ROLLBACK ENTITIES. ENDTRY.</code></pre><H3 id="toc-hId-1331036361">Two important details in this example</H3><OL><LI><CODE>COMMIT ENTITIES</CODE><SPAN>&nbsp;</SPAN>requires<SPAN>&nbsp;</SPAN><STRONG>LATE</STRONG><SPAN>&nbsp;</SPAN>response types</LI></OL><UL><LI><CODE>RESPONSE FOR FAILED LATE i_enterpriseprojecttp_3</CODE></LI><LI><CODE>RESPONSE FOR REPORTED LATE i_enterpriseprojecttp_3</CODE></LI></UL><OL><LI>“Fail fast” with rollback</LI></OL><UL><LI>If<SPAN>&nbsp;</SPAN><CODE>FAILED</CODE><SPAN>&nbsp;</SPAN>is not initial in either phase, rollback and end the logical transaction.</LI><LI>Start the next business step in a<SPAN>&nbsp;</SPAN><STRONG>new</STRONG><SPAN>&nbsp;</SPAN>logical transaction context.</LI></UL><P>&nbsp;</P><H2 id="toc-hId-1005440137">8. FAQ: Why not<SPAN>&nbsp;</SPAN><CODE>ROLLBACK WORK</CODE>?</H2><P>In RAP EML, use the EML-compatible statement:</P><UL><LI>Errors in EML logical transaction →<SPAN>&nbsp;</SPAN><CODE>ROLLBACK ENTITIES</CODE></LI></UL><P><CODE>ROLLBACK WORK</CODE><SPAN>&nbsp;</SPAN>is lower-level LUW handling and can lead to semantic mismatches with RAP’s logical transaction handling.<SPAN>&nbsp;</SPAN><SPAN>The dump you saw is fundamentally about the RAP logical transaction being aborted and still being reused.</SPAN></P> 2026-04-08T11:52:21.989000+02:00 https://community.sap.com/t5/technology-blog-posts-by-sap/exploring-model-lifecycle-management-with-python-machine-learning-client/ba-p/14363264 Exploring Model Lifecycle Management with Python Machine Learning Client for SAP HANA 2026-04-09T08:42:17.110000+02:00 xinchen https://community.sap.com/t5/user/viewprofilepage/user-id/712820 <P>&nbsp;</P><H1 id="toc-hId-1664163492">1. 
Introduction</H1><P class="lia-align-justify" style="text-align : justify;">This blog post introduces key model lifecycle management features in the <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-predictive-analysis-library/sap-hana-cloud-sap-hana-database-predictive-analysis-library-pal-sap-hana-cloud-sap-hana-database-predictive-analysis-library-pal-c9eeed7" target="_blank" rel="noopener noreferrer">SAP HANA Predictive Analysis Library (PAL)</A> through the <A href="https://help.sap.com/doc/cd94b08fe2e041c2ba778374572ddba9/latest/en-US/hana_ml.html" target="_blank" rel="noopener noreferrer">Python Machine Learning Client for SAP HANA (hana_ml)</A>. PAL provides rich in-database machine learning capabilities, while hana_ml exposes these capabilities in Python so that users can build, execute, and operate model workflows directly on SAP HANA data with minimal data movement.</P><P class="lia-align-justify" style="text-align : justify;">Model lifecycle development usually starts from a fit task, followed by some scoring tasks to evaluate the performance of a derived model. After repeated cycles of fitting and scoring tasks to get an acceptable model, the model can be delivered for continual prediction tasks and optional drift detection tasks. Drift detection is used to answer the question of whether the currently deployed model is still performing acceptably, since patterns in prediction data may change from time to time. Once drift is confirmed, a fit task on the latest data is required, and therefore a new cycle of model development begins.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>Experiment tracking</STRONG> is a feature that automatically traces and persists some information for the input and output of PAL procedure execution. Each PAL procedure call is identified by a tracking ID. The tracking ID consists of two parts: an experiment ID and a run ID, representing one experiment and one run respectively. Logically, the relationship between experiment and run is one-to-many. One experiment usually targets a single task such as fitting, predicting, or scoring, with a selected algorithm. Users may have multiple runs for the same experiment while exploring different hyperparameters and/or datasets.</P><P class="lia-align-justify" style="text-align : justify;">The persisted information for experiment tracking consists of two parts: tracking metadata and tracking logs. Different APIs are available to retrieve them. Tracking metadata records information about the invoked PAL procedure and its current running status. Tracking logs are persisted in chronological order, each of them has a log type, log value and log timestamp.</P><P class="lia-align-justify" style="text-align : justify;">The available log types include:</P><UL><LI>Parameter: PAL configuration for each procedure. 
This is essential information for rerunning the procedure.</LI><LI>Dataset Metadata: Metadata of the input data, including the dataset source and descriptions of the fields in the dataset.</LI><LI>Model Signature: Information about how to use the model, including model input data format, accepted configuration parameters, and output data structure.</LI><LI>Metric: Metric data that records statistical information for the model.</LI><LI>Figure: Discrete and continuous figure data used for plotting and analysis.</LI></UL><P class="lia-align-justify" style="text-align : justify;">Reproducing the exact same result with a given dataset and configuration is a fundamental requirement in model development. It can be used for validating findings, debugging models, and ensuring consistent behavior over time. The logs of input parameters, dataset metadata, and tracking metadata are essential information for achieving this purpose.</P><P class="lia-align-justify" style="text-align : justify;">The model signature log can guide users in executing prediction and scoring tasks with a derived model. Metric and figure logs can be used for analyzing model performance.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>Model Storage</STRONG> is another important feature for model development. Users can identify a model by model name and model version. Model storage serves as the standard source for machine learning models when committing prediction, scoring and evaluation tasks. Besides, newly generated models can be pushed into model storage with a selected model name and model version.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>Drift detection</STRONG> is a turning point in model development and runtime operations. It alerts users that model degradation may happen or has happened. This can be achieved with the features of drift detection combined with metric and/or figure track entities described above. Currently hana_ml provides visual charts to compare metrics between different experiment runs to verify whether such a drift has happened.</P><P class="lia-align-justify" style="text-align : justify;">Finally, <STRONG>automation</STRONG> drives the workflow across model development activities. With the Scheduled Execution feature, users can not only automate single tasks such as model fit, model prediction, and model scoring, but also orchestrate related tasks in a defined order or in parallel. All automated tasks can be scheduled as background jobs either immediately or by time-frequency settings.</P><P class="lia-align-justify" style="text-align : justify;">With all these features in hana_ml, users can construct flexible model development workflows, including model fitting, model testing, model storage, model serving, and drift detection.</P><P class="lia-align-justify" style="text-align : justify;">&nbsp;</P><H1 id="toc-hId-1467649987"><SPAN>2. Use Case: End-to-End Model Lifecycle Management</SPAN></H1><P class="lia-align-justify" style="text-align : justify;"><SPAN>This use case shows a compact end-to-end model lifecycle workflow based on the public Pima Indians Diabetes dataset. This example is intended for demonstration and teaching purposes only. 
It uses <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-predictive-analysis-library/sap-hana-cloud-sap-hana-database-predictive-analysis-library-pal-sap-hana-cloud-sap-hana-database-predictive-analysis-library-pal-c9eeed7" target="_blank" rel="noopener noreferrer">SAP HANA Predictive Analysis Library (PAL)</A> together with the <A href="https://help.sap.com/doc/cd94b08fe2e041c2ba778374572ddba9/latest/en-US/hana_ml.html" target="_blank" rel="noopener noreferrer">Python Machine Learning Client for SAP HANA (hana_ml)</A> to illustrate how a model is trained, tracked, stored, operationalized, and monitored over time. The notebook example for this use case can be downloaded from the <A href="https://github.com/SAP-samples/hana-ml-samples/tree/main/Python-API/usecase-examples/ml-lifecycle-examples" target="_blank" rel="noopener noreferrer nofollow">ml-lifecycle-examples folder</A>.</SPAN></P><P class="lia-align-justify" style="text-align : justify;"><SPAN>Lifecycle phases covered:</SPAN></P><UL><LI><SPAN>Baseline Training &amp; Experiment Tracking:​ Train an initial model and log all parameters, metrics, and artifacts.</SPAN></LI><LI><SPAN>Model Storage &amp; Management:​ Save, version, load, and manage the model.</SPAN></LI><LI><SPAN>Scheduled Inference:​ Automate predictions on a regular schedule.</SPAN></LI><LI><SPAN>Monitoring &amp; Retraining:​ Simulate monitoring for data drift and retraining decisions.</SPAN></LI></UL><P class="lia-align-justify" style="text-align : justify;"><SPAN>Dataset note: this example uses the Pima Indians Diabetes dataset, a binary classification task based on clinical measurements such as glucose concentration, BMI, and age.</SPAN></P><H2 id="toc-hId-1400219201"><SPAN>Step 0: Environment and Data Preparation</SPAN></H2><P class="lia-align-justify" style="text-align : justify;"><SPAN>We start by importing the core libraries and creating a connection to SAP HANA. This connection context is the entry point for loading data, running PAL algorithms, and managing lifecycle artifacts in the database. It is the required first step because all later actions reuse it.</SPAN></P><pre class="lia-code-sample language-python"><code>from hana_ml import dataframe from hana_ml.algorithms.pal.utility import DataSets from hana_ml.algorithms.pal.unified_classification import UnifiedClassification from hana_ml.artifacts.tracking.tracking import MLExperiments, delete_experiment_log, get_tracking_log from hana_ml.visualizers.tracking import ExperimentMonitor, ScheduledTaskMonitor from hana_ml.model_storage import ModelStorage # Establish a connection to SAP HANA conn = dataframe.ConnectionContext(url='&lt;host&gt;', port=&lt;port&gt;, user='&lt;user&gt;', password='&lt;pwd&gt;')</code></pre><P class="lia-align-justify" style="text-align : justify;"><SPAN>Replace ‘host’, ‘port’, ‘user’, and ‘pwd’ with your SAP HANA instance details.</SPAN></P><P class="lia-align-justify" style="text-align : justify;"><SPAN>Next, use train_test_val_split to create the main working subsets: <STRONG>df_train</STRONG> for baseline fitting, <STRONG>df_score</STRONG> for holdout evaluation, and <STRONG>df_inference</STRONG> for scheduled batch prediction. 
This keeps training, evaluation, and operational inference clearly separated.</SPAN></P><pre class="lia-code-sample language-python"><code>from hana_ml.algorithms.pal.partition import train_test_val_split df_full, _, _, _ = DataSets.load_diabetes_data(conn) df_train, df_score, _ = train_test_val_split(data=df_full, partition_method='random', random_seed=23, training_percentage=0.8, testing_percentage=0.2, validation_percentage=0.0, id_column="ID") # Save tables in HANA df_train.save("PIMA_INDIANS_DIABETES_TRAIN_TBL") df_score.save("PIMA_INDIANS_DIABETES_SCORE_TBL") print("Train sample shape : ", df_train.shape) print("The first 3 rows: ") print(df_train.head(3).collect()) print("The first 3 rows of inference sample (no label column):") df_inference = df_score.deselect("CLASS") print(df_inference.head(3).collect())</code></pre><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig.1 Data samples" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393773i6A9EC8D6BE5197FA/image-size/medium?v=v2&amp;px=400" role="button" title="fig1.png" alt="Fig.1 Data samples" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig.1 Data samples</span></span></P><P><SPAN>To mimic a simple monitoring scenario, we then construct weekly batches with different label distributions. These batches keep the label column so later scoring runs can be compared for drift.</SPAN></P><pre class="lia-code-sample language-python"><code># Simulated weekly batches for drift observation (different label mix). week20_hdf = df_score.filter('ID &lt; 100') week21_hdf = df_score.filter('CLASS = 1') week22_hdf = df_score.filter('CLASS = 0') week20_hdf.save('DIABETES_WEEK20') week21_hdf.save('DIABETES_WEEK21') week22_hdf.save('DIABETES_WEEK22') print(week20_hdf.shape) print(week21_hdf.shape) print(week22_hdf.shape)</code></pre><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig 2. Shape of simulated data" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393774iB033DBF333320F2E/image-size/large?v=v2&amp;px=999" role="button" title="fig2.png" alt="Fig 2. Shape of simulated data" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig 2. Shape of simulated data</span></span></P><H2 id="toc-hId-1203705696">Step 1. Baseline Training and Experiment Tracking</H2><P class="lia-align-justify" style="text-align : justify;"><SPAN>This step establishes the initial reference model. We train a Hybrid Gradient Boosting Tree classifier, use <EM>MLExperiments</EM> to log parameters and metrics automatically, and keep the run history auditable. Training and tracking are created together.</SPAN></P><P class="lia-align-justify" style="text-align : justify;">We begin by creating a dedicated tracking session identified by a unique EXPERIMENT_ID. Within that experiment, training and scoring are recorded as separate runs.</P><pre class="lia-code-sample language-python"><code># Constants for the workflow EXPERIMENT_ID = "BLOG_DIABETES_TRACKING" MODEL_NAME = "BLOG_DIAB_HGBT" TASK_ID = "DIABETES_WEEKLY_PREDICT" # Optional, clear previous tracking logs so each run starts from a clean state. delete_experiment_log(conn, EXPERIMENT_ID) # Initialize the experiment tracker tracker = MLExperiments( connection_context=conn, experiment_id=EXPERIMENT_ID, experiment_description="diabetes experiment") # Define the hyperparameter grid for model search. 
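# Each value combination below is evaluated via grid search with 5-fold cross-validation
# (configured on the classifier in the next snippet); the combination with the lowest
# error_rate is kept for the final model.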
param_values = { "learning_rate": [0.1, 0.4], "n_estimators": [5, 10], "split_threshold": [0.1, 0.3]}</code></pre><P class="lia-align-justify" style="text-align : justify;"><SPAN>We configure the model with grid search and cross-validation, then enable autologging. This captures the hyperparameters, source dataset, and generated artifacts under the run name </SPAN><SPAN>“Diagnosis_classifier-fit”.</SPAN><SPAN> The key value is reproducibility: the tracked baseline can later be compared with monitoring runs.</SPAN></P><pre class="lia-code-sample language-python"><code>uhgbt = UnifiedClassification( func="HybridGradientBoostingTree", param_search_strategy="grid", resampling_method="cv", evaluation_metric="error_rate", ref_metric=["auc"], fold_num=5, random_state=123, param_values=param_values) # Enable automatic tracking of parameters, metrics, and artifacts. tracker.autologging( model=uhgbt, run_name="Diagnosis_classifier-fit", dataset_name="diabetes", dataset_source="PIMA_INDIANS_DIABETES_TRAIN_TBL") # Train the model using stratified partitioning uhgbt.fit(data=df_train, key="ID", label="CLASS", partition_method="stratified", partition_random_state=5, stratified_column="CLASS") # Log a separate scoring run tracker.autologging( model=uhgbt, run_name="Diagnosis_classifier-score", dataset_name="diabetes", dataset_source="PIMA_INDIANS_DIABETES_SCORE_TBL") score_pred, score_stats, score_cm, score_metrics = uhgbt.score( data=df_score, key="ID", label="CLASS")</code></pre><P class="lia-align-justify" style="text-align : justify;"><SPAN>After execution, we can retrieve and inspect the tracking artifacts for the current run:</SPAN></P><pre class="lia-code-sample language-python"><code>tracking_id = tracker.get_current_tracking_id() print(f"tracking id: {tracking_id}") print(tracker.get_tracking_metadata_for_current_run().collect()) print(get_tracking_log(connection_context=conn, tracking_id=tracking_id).head(5).collect())</code></pre><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig. 3 Tracking metadata and log" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393775i48B015D6C321675E/image-size/large?v=v2&amp;px=999" role="button" title="fig3.png" alt="Fig. 3 Tracking metadata and log" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig. 3 Tracking metadata and log</span></span></P><P class="lia-align-justify" style="text-align : justify;">We also provide a dashboard view of tracked runs, metrics, and artifacts.</P><pre class="lia-code-sample language-python"><code>experiment_monitor = ExperimentMonitor(connection_context=conn, experiment_ids=[EXPERIMENT_ID]) experiment_monitor.start()</code></pre><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig. 4 Experiment dashboard" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393776i2C28BECC4E84B1AC/image-size/large?v=v2&amp;px=999" role="button" title="fig4.png" alt="Fig. 4 Experiment dashboard" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig. 4 Experiment dashboard</span></span></P><P class="lia-align-justify" style="text-align : justify;">In the Experiment Monitor, the experiment named “BLOG_DIABETES_TRACKING” contains two runs. Opening a run shows its tracked details, including parameters, metrics, and visual artifacts.
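<P class="lia-align-justify" style="text-align : justify;">As a side note, the same tracked entries can also be pulled programmatically instead of through the dashboard, for example to archive metrics or feed them into custom reports. The snippet below is only a minimal sketch: it reuses the <EM>get_tracking_log</EM> call from above and filters the returned hana_ml DataFrame by log type; the exact column names and stored log-type values are assumptions here and should be checked against the columns your hana_ml release actually returns.</P><pre class="lia-code-sample language-python"><code># Minimal sketch (not part of the original workflow): filter the tracking log by log type.
# Assumption: the log DataFrame exposes columns such as LOG_TYPE / LOG_VALUE;
# inspect log_df.columns first to confirm the names in your hana_ml release.
log_df = get_tracking_log(connection_context=conn, tracking_id=tracking_id)
print(log_df.columns)                               # inspect the available columns
metric_logs = log_df.filter("LOG_TYPE = 'metric'")  # SQL-like predicate on the hana_ml DataFrame
print(metric_logs.collect())                        # fetch the filtered entries locally as pandas</code></pre><P class="lia-align-justify" style="text-align : justify;">The Experiment Monitor dashboard presents these same entries graphically.</P>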
For example, the run labeled “Diagnosis_classifier-score” shows charts such as ROC and cumulative gains plots.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig.5 Continuous figure" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393777iE109754A6ECDCEBA/image-size/large?v=v2&amp;px=999" role="button" title="fig5.png" alt="Fig.5 Continuous figure" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig.5 Continuous figure</span></span></P><H2 id="toc-hId-1007192191">Step 2. Model Storage &amp; Management</H2><P class="lia-align-justify" style="text-align : justify;">In this step, we persist the selected model in SAP HANA. This creates a versioned model artifact that can be loaded for later use. The ModelStorage class handles save, list, load, and delete operations. In the notebook, this step shows how the tracked baseline is promoted into a reusable deployment artifact.</P><pre class="lia-code-sample language-python"><code># Example decision: choose uhgbt as the baseline operational model. candidate_model = uhgbt # Assign model identity fields before persisting. candidate_model.name = MODEL_NAME candidate_model.version = 1 model_storage = ModelStorage(connection_context=conn) # if_exists='replace' overwrites an existing model with the same name/version. model_storage.save_model(model=candidate_model, if_exists="replace") # List the models model_storage.list_models(name=MODEL_NAME) deployed_model = model_storage.load_model(name=MODEL_NAME) print(f"Deployed model from storage: {MODEL_NAME}")</code></pre><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig. 6 Model list" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393767i484263E6B1685072/image-size/large?v=v2&amp;px=999" role="button" title="fig6.png" alt="Fig. 6 Model list" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig. 6 Model list</span></span></P><P>&nbsp;</P><P class="lia-align-justify" style="text-align : justify;"><SPAN>The following command is optional cleanup for demo artifacts.</SPAN></P><pre class="lia-code-sample language-python"><code>model_storage.delete_models(name=MODEL_NAME)</code></pre><H2 id="toc-hId-810678686">Step 3. Operationalize with Scheduler</H2><P class="lia-align-justify" style="text-align : justify;">This step moves the stored model into an operational workflow by creating scheduled inference. The scheduler runs predictions on a defined cadence, such as weekly, without manual execution. 
<H2 id="toc-hId-810678686">Step 3. Operationalize with Scheduler</H2><P class="lia-align-justify" style="text-align : justify;">This step moves the stored model into an operational workflow by creating scheduled inference. The scheduler runs predictions on a defined cadence, such as weekly, without manual execution. In other words, deployment is not only about storing a model, but also about defining how it will run repeatedly.</P><pre class="lia-code-sample language-python"><code># Import and initialize the scheduler
from hana_ml.algorithms.pal.scheduler import ScheduledExecution

sexec = ScheduledExecution(conn)

# Define a prediction task using the deployed model
sexec.create_predict_task(
    obj=deployed_model,
    predict_params={"data": df_inference, "key": "ID"},
    task_id=TASK_ID,
    force=True)

# Schedule the task to run automatically
weekly_cron = "* * * mon 8 0 0"
schedule_info = sexec.create_task_schedule(
    task_id=TASK_ID,
    cron=weekly_cron,
    force=True)
schedule_info.collect()</code></pre><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig.7 Schedule information" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393769i6FC3B22897C41DCC/image-size/large?v=v2&amp;px=999" role="button" title="fig7.png" alt="Fig.7 Schedule information" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig.7 Schedule information</span></span></P><pre class="lia-code-sample language-python"><code># Launch the scheduler monitoring dashboard
scheduled_task_monitor = ScheduledTaskMonitor(connection_context=conn, task_ids=[TASK_ID])
scheduled_task_monitor.start()</code></pre><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig.8 Scheduled Task Monitor" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393770i94A96082A3E14093/image-size/large?v=v2&amp;px=999" role="button" title="fig8.png" alt="Fig.8 Scheduled Task Monitor" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig.8 Scheduled Task Monitor</span></span></P><P>&nbsp;</P><P class="lia-align-justify" style="text-align : justify;"><SPAN>The following commands are included for reference only. In the notebook, they are optional inspection and validation utilities rather than required steps; a combined sketch follows the list.</SPAN></P><UL><LI><STRONG><SPAN>Query Schedule &amp; Logs:</SPAN></STRONG> The function <EM><SPAN>query_task_schedule(task_id)</SPAN></EM><SPAN> returns two DataFrames. The first describes the schedule definition, such as the cron pattern, and the second contains execution logs for historical runs, which are useful for auditing and debugging.</SPAN></LI><LI><STRONG><SPAN>Trigger a Manual Validation Run:</SPAN></STRONG> The function <EM><SPAN>create_one_off_task_schedule(task_id)</SPAN></EM><SPAN> triggers an immediate out-of-schedule execution of the task. This is useful for validating the setup before relying on the automated schedule.</SPAN></LI><LI><STRONG><SPAN>Remove the Schedule:</SPAN></STRONG> The function <EM><SPAN>remove_task_schedule(task_id)</SPAN></EM><SPAN> removes the schedule binding for the specified task ID.</SPAN></LI></UL>
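<P class="lia-align-justify" style="text-align : justify;"><SPAN>For completeness, the sketch below combines the three utilities into one validation flow. It is hedged rather than copied from the notebook: the calls are assumed to be available on the ScheduledExecution instance sexec created above, with the task_id signature described in the list, and the returned objects are assumed to be hana_ml DataFrames. Please check the hana_ml scheduler documentation for the exact location and return types of these helpers in your release.</SPAN></P><pre class="lia-code-sample language-python"><code># Hedged sketch: inspect, validate, and (optionally) remove the schedule.
# Assumption: these helpers are methods of the ScheduledExecution object (sexec).

# 1. Inspect the schedule definition and the execution history.
schedule_def, schedule_logs = sexec.query_task_schedule(TASK_ID)
print(schedule_def.collect())            # cron pattern and schedule metadata
print(schedule_logs.head(10).collect())  # recent execution log entries
# (drop .collect() if your release already returns pandas DataFrames)

# 2. Trigger one immediate, out-of-schedule run to validate the setup.
sexec.create_one_off_task_schedule(TASK_ID)

# 3. Remove the schedule binding once it is no longer needed.
# sexec.remove_task_schedule(TASK_ID)</code></pre>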
<H2 id="toc-hId-614165181"><SPAN>Step 4. Monitor Drift Signals and Decide Whether to Retrain</SPAN></H2><P class="lia-align-justify" style="text-align : justify;"><SPAN>This final step closes the model lifecycle loop by simulating production monitoring. We reuse the weekly batches prepared earlier, score them with the deployed model, and compare whether the tracked metrics remain stable or start to drift. The main idea is not the exact metric values in this toy example, but the monitoring pattern: repeated scoring of later batches against the same deployed model.</SPAN></P><pre class="lia-code-sample language-python"><code>WEEKLY_EXPERIMENT_ID = "WEEKLY_HGBT_TRACK"

# Optional: clear any earlier log entries for this experiment id.
delete_experiment_log(conn, WEEKLY_EXPERIMENT_ID)

# Create a dedicated experiment for production-like weekly monitoring.
MLModel_weekly_tracking = MLExperiments(
    connection_context=conn,
    experiment_id=WEEKLY_EXPERIMENT_ID,
    experiment_description="Monitor of HGBT model weekly"
)

# Reuse the weekly slices prepared earlier and log one score run per week.
weekly_batches = [
    ("week20-score", week20_hdf, "diabetes_week20", "DIABETES_WEEK20"),
    ("week21-score", week21_hdf, "diabetes_week21", "DIABETES_WEEK21"),
    ("week22-score", week22_hdf, "diabetes_week22", "DIABETES_WEEK22"),
]

for run_name, weekly_batch, dataset_name, dataset_source in weekly_batches:
    MLModel_weekly_tracking.autologging(
        model=deployed_model,
        run_name=run_name,
        dataset_name=dataset_name,
        dataset_source=dataset_source
    )
    score_pred, score_stats, score_cm, score_metrics = deployed_model.score(
        weekly_batch, key="ID", label="CLASS"
    )</code></pre><P class="lia-align-justify" style="text-align : justify;">You can visualize these trends, such as accuracy or AUC, in the Experiment Monitor dashboard. For example, in the simulated scenarios for Weeks 20, 21, and 22, you can select the accuracy metric, choose the three corresponding weekly runs, and click Compare to open a detailed comparison view.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig. 9 Weekly monitor" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393771i45EB93209E4CDD37/image-size/large?v=v2&amp;px=999" role="button" title="fig8.png" alt="Fig. 9 Weekly monitor" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig. 9 Weekly monitor</span></span></P><P class="lia-align-justify" style="text-align : justify;">In the figure below, you can observe fluctuations in accuracy across the weeks, for example from <STRONG>0.78</STRONG> to <STRONG>0.61</STRONG> and then to <STRONG>0.81</STRONG>. A sharp shift such as the drop in Week 21 would be a reasonable drift signal. In a real workflow, that signal would trigger a review and could lead to actions such as retraining the model; a minimal threshold check is sketched after the figure.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Fig. 10 Model Drift" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/393772i55146B1D2E7C45C4/image-size/large?v=v2&amp;px=999" role="button" title="fig10.png" alt="Fig. 10 Model Drift" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Fig. 10 Model Drift</span></span></P>
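<P class="lia-align-justify" style="text-align : justify;"><SPAN>To make that decision explicit in code, the sketch below shows one way such a drift check could look. It is illustrative only and not part of the notebook: it assumes that the statistics table returned by score() exposes STAT_NAME/STAT_VALUE rows containing an ACCURACY entry (check your own score_stats.collect() output), and it simply compares the latest weekly accuracy against the tracked baseline before deciding whether to retrain and register a new model version.</SPAN></P><pre class="lia-code-sample language-python"><code># Hedged sketch: a simple accuracy-threshold drift check and retrain decision.
# Assumption: score_stats from UnifiedClassification.score() contains rows with
# STAT_NAME == 'ACCURACY'; verify the exact column and row names in your system.
BASELINE_ACCURACY = 0.78   # e.g. taken from the tracked baseline run
DRIFT_TOLERANCE = 0.10     # allowed absolute drop before we flag drift

stats_df = score_stats.collect()   # statistics of the latest weekly batch
latest_accuracy = float(
    stats_df.loc[stats_df["STAT_NAME"] == "ACCURACY", "STAT_VALUE"].iloc[0]
)

if BASELINE_ACCURACY - latest_accuracy > DRIFT_TOLERANCE:
    print(f"Drift detected: accuracy {latest_accuracy:.2f} vs baseline {BASELINE_ACCURACY:.2f}")
    # Possible follow-up: retrain (here simply re-using df_train for illustration)
    # and register the retrained model as a new version in ModelStorage.
    deployed_model.fit(data=df_train, key="ID", label="CLASS")
    deployed_model.name = MODEL_NAME
    deployed_model.version = 2
    model_storage.save_model(model=deployed_model, if_exists="replace")
else:
    print(f"No significant drift: accuracy {latest_accuracy:.2f}")</code></pre>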
<H1 id="toc-hId-288568957"><SPAN>3. Summary</SPAN></H1><P class="lia-align-justify" style="text-align : justify;">This blog post has walked through the key features of model lifecycle management with SAP HANA PAL and hana_ml in an end-to-end use case. Conceptually, it is a closed-loop process that connects model development, model registration, model operations, and post-deployment monitoring in one traceable workflow. In this article, we used a compact scenario to illustrate that progression from baseline training to operational monitoring.</P><H3 id="toc-hId-350220890">References</H3><P class="lia-align-justify" style="text-align : justify;"><STRONG>Notebook download folder:</STRONG> <A href="https://github.com/SAP-samples/hana-ml-samples/tree/main/Python-API/usecase-examples/ml-lifecycle-examples" target="_blank" rel="noopener noreferrer nofollow">ml-lifecycle-examples</A></P><P class="lia-align-justify" style="text-align : justify;"><STRONG>Product Documentation:</STRONG></P><UL><LI><A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-predictive-analysis-library/pal-track" target="_blank" rel="noopener noreferrer">PAL: Tracking</A></LI><LI><A href="https://help.sap.com/doc/cd94b08fe2e041c2ba778374572ddba9/latest/en-US/hana_ml.artifacts.html#module-hana_ml.artifacts.tracking.tracking" target="_blank" rel="noopener noreferrer">hana_ml: Track</A></LI><LI><A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-predictive-analysis-library/calling-pal-with-schedule" target="_blank" rel="noopener noreferrer">PAL: Schedule</A></LI><LI><A href="https://help.sap.com/doc/cd94b08fe2e041c2ba778374572ddba9/latest/en-US/pal/algorithms.html#pal-scheduler" target="_blank" rel="noopener noreferrer">hana_ml: schedule</A></LI></UL><P class="lia-align-justify" style="text-align : justify;"><STRONG>Most Relevant Blog Posts:</STRONG></P><UL><LI><A href="https://community.sap.com/t5/technology-blogs-by-sap/model-storage-with-python-machine-learning-client-for-sap-hana/ba-p/13483099" target="_blank">Model Storage</A></LI><LI><A href="https://community.sap.com/t5/technology-blogs-by-sap/new-machine-learning-features-in-sap-hana-cloud/ba-p/13671778" target="_blank">New Machine Learning Features in SAP HANA Cloud</A></LI><LI><A href="https://community.sap.com/t5/technology-blogs-by-sap/fairness-in-machine-learning-a-new-feature-in-sap-hana-cloud-pal/ba-p/13580185" target="_blank">Fairness in Machine Learning</A></LI></UL><P class="lia-align-justify" style="text-align : justify;"><STRONG>Other Related Blog Posts:</STRONG></P><UL><LI><A href="https://community.sap.com/t5/technology-blogs-by-sap/a-multivariate-time-series-modeling-and-forecasting-guide-with-python/ba-p/13517004" target="_blank">A Multivariate Time Series Forecasting Guide</A></LI><LI><A href="https://blogs.sap.com/2020/12/18/identification-of-seasonality-in-time-series-with-python-machine-learning-client-for-sap-hana/" target="_blank" rel="noopener noreferrer">Identification of Seasonality in Time Series</A></LI><LI><A href="https://community.sap.com/t5/technology-blogs-by-sap/global-explanation-capabilities-in-sap-hana-machine-learning/ba-p/13620594" target="_blank">Global Explanation Capabilities</A></LI><LI><A href="https://community.sap.com/t5/technology-blogs-by-sap/exploring-ml-explainability-in-sap-hana-pal-classification-and-regression/ba-p/13681514" target="_blank">ML Explainability for Classification and Regression</A></LI><LI><A href="https://community.sap.com/t5/technology-blogs-by-sap/exploring-ml-explainability-in-sap-hana-pal-time-series/ba-p/13719609" target="_blank">ML Explainability for Time Series</A></LI></UL> 2026-04-09T08:42:17.110000+02:00