https://raw.githubusercontent.com/ajmaradiaga/feeds/main/scmt/topics/SAP-Integration-Strategy-blog-posts.xml SAP Community - SAP Integration Strategy 2026-02-18T12:12:48.724457+00:00 python-feedgen SAP Integration Strategy blog posts in SAP Community https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/idocs-are-still-safe-for-sap-s-4hana-sap-clean-core-extensibility-level-b/ba-p/14225439 IDOCs are Still Safe for SAP S/4HANA - SAP Clean Core Extensibility Level B 2025-09-23T11:35:28.820000+02:00 MichalKrawczyk https://community.sap.com/t5/user/viewprofilepage/user-id/45785 <H2 id="toc-hId-1760984392">Intro&nbsp;</H2><P>IDOCs are safe to use in SAP S/4HANA programs—and if anyone tells you otherwise, point them to this blog to learn more. IDOCs are now officially part of <STRONG>SAP Clean Core Extensibility!</STRONG> That’s the big news from updated SAP Guidelines (OSS Note <STRONG>3578329 – Frameworks, Technologies and Development Patterns in Context of Clean Core Extensibility</STRONG>).</P><P>This ends the myth that IDOCs are “dead.” They are not. They’re preserved as <STRONG>SAP Clean Core Level B&nbsp;Extensibility</STRONG>, which means your existing IDOC investments are safe.&nbsp;Let’s explore four perspectives—<STRONG>integration, monitoring and error handling, performance, and testing</STRONG>—to see where APIs are recommended first, and where IDOCs still bring value.</P><H2 id="toc-hId-1564470887">1. Integration Perspective</H2><P>SAP’s recommendation for new development in S/4HANA is clear: use <STRONG>modern integration technologies</STRONG> such as APIs, events, or OData services. They are the strategic direction and best fit for greenfield or transformation projects. That said, not all projects require reinvention. If your business processes remain largely the same as in ECC, and the goal is to <STRONG>reduce cost and risk</STRONG>, IDOCs can still be used. 
They are <STRONG>not prohibited</STRONG>, and as long as an IDOC type is not listed as deprecated in the <STRONG>SAP Simplification List</STRONG>, it is safe:<BR /><A href="https://help.sap.com/docs/btc/lean-sdt-best-practices/sap-s-4hana-simplification-analysis?locale=en-US" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/btc/lean-sdt-best-practices/sap-s-4hana-simplification-analysis?locale=en-US&nbsp;</A>Because IDOCs are now Clean Core Level B&nbsp;Extensibility, continuing with them is fully supported in <STRONG>SAP S/4HANA on-premise</STRONG> and <STRONG>SAP S/4HANA Private Cloud</STRONG>. They offer a pragmatic path to de-risk projects and leverage existing know-how. </P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="integration.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/318770i4D4EF460ACC92791/image-size/large?v=v2&amp;px=999" role="button" title="integration.png" alt="integration.png" /></span></P><H2 id="toc-hId-1367957382">2. Monitoring and Error Handling Perspective</H2><P>For new interfaces, SAP recommends using <STRONG>AIF (Application Interface Framework)</STRONG> to handle monitoring, error management, and business validations. AIF provides a powerful, unified monitoring layer, and in many cases it comes <STRONG>license-free</STRONG> (for standard SAP interfaces delivered in namespaces like /SDSLS or /LEEDI, or in CFIN, Document Compliance, or S/4HANA Public Cloud). Where APIs are deployed, AIF bridges the gap between technical payloads and business-friendly monitoring.</P><P>IDOCs still have a strong card here: their monitoring capabilities are mature, well-known, and integrated into standard SAP transactions. Functional users know how to work with WE02, BD87, and standard reprocessing flows. 
This means less training, less overhead, and faster resolution in day-to-day operations.</P><P>In other words: <STRONG>AIF is recommended</STRONG>, but IDOCs deliver proven stability in monitoring and error handling.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="monitoring.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/318761i46911CE9A0AC3FD9/image-size/large?v=v2&amp;px=999" role="button" title="monitoring.png" alt="monitoring.png" /></span></P><H2 id="toc-hId-1171443877">3. Performance Perspective</H2><P>APIs are the recommended way forward, but they come with a structural tradeoff: their payloads are larger and typically handle transactions individually. This makes them more verbose and sometimes less efficient for high-volume use cases.&nbsp;IDOCs, in contrast, have long supported <STRONG>bundling, batching, and parallel processing</STRONG> at scale. They’ve been the backbone of high-volume EDI and SAP-to-SAP integrations for decades, and their ability to handle millions of documents daily is field-proven. So while APIs are the strategic choice for new designs, <STRONG>IDOCs remain a safe performance option for scenarios with heavy data throughput</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="performance.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/318764i5B1281FABD0691EC/image-size/large?v=v2&amp;px=999" role="button" title="performance.png" alt="performance.png" /></span></P><H2 id="toc-hId-974930372">4. Testing Perspective</H2><P>Testing support for APIs exists (for example, with <STRONG>SXI_MONITOR</STRONG>), but it is more technical and often requires additional training for functional teams. For IDOCs, simulation and reprocessing are familiar to almost every SAP functional consultant. 
Tools like WE19 and BD87 make IDOC testing and troubleshooting accessible, reducing dependency on specialist skills.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="testing.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/318765i9A532F171BFD6DE4/image-size/large?v=v2&amp;px=999" role="button" title="testing.png" alt="testing.png" /></span></P><P>For projects that need to <STRONG>standardize and automate testing across both APIs and IDOCs</STRONG>, solutions like <STRONG>Int4 Suite</STRONG> provide simulation and test automation capabilities that unify all SAP integration scenarios. You can learn more about how to test and simulate SAP integrations in this SAP Learning course: <A href="https://learning.sap.com/courses/avoid-sap-s-4hana-project-delays-with-third-party-systems-service-virtualization?url_id=text-former-openSAP-course" target="_self" rel="noopener noreferrer">Avoid SAP S/4HANA Project Delays with Third-Party Systems Service Virtualization</A>.&nbsp;So while <STRONG>APIs are recommended for new developments</STRONG>, IDOCs still offer <STRONG>ease of testing and proven tooling</STRONG>, especially in brownfield projects.</P><H2 id="toc-hId-778416867">Conclusion</H2><P>The strategic path forward in SAP S/4HANA is built on APIs, events, and frameworks like AIF. That’s <STRONG>the official recommendation</STRONG>, and it should guide greenfield and transformation projects. But IDOCs are far from obsolete. They are <STRONG>officially part of Clean Core extensibility</STRONG>, safe to use in both on-premise and private cloud, and carry unique advantages in performance, monitoring, and ease of testing. 
That makes them not only supported, but still a pragmatic and safe choice when the situation calls for stability, cost reduction, and risk mitigation.</P> 2025-09-23T11:35:28.820000+02:00 https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/int4-suite-agents-empowers-functional-consultants-to-test-integrated-sap-s/ba-p/14234100 Int4 Suite Agents Empowers Functional Consultants To Test Integrated SAP S/4HANA Business Processes 2025-10-03T11:41:59.491000+02:00 MichalKrawczyk https://community.sap.com/t5/user/viewprofilepage/user-id/45785 <H2 id="toc-hId-1761875137">Introduction&nbsp;</H2><P>Integrated business processes are the bloodstream of SAP systems. Every Sales Order, Purchase Order, Delivery, and Invoice has to flow smoothly, not just within SAP, but across EDI partners (customers, vendors, 3PL partners), banks, warehouses, and tax portals.</P><P>Here’s the paradox: SAP S/4HANA projects have plenty of sophisticated automation tools, but they <STRONG>rarely help functional consultants in their manual tests</STRONG>. Instead, those tools get pushed into the narrow niche of automation testers. Functional consultants treat them like mythical dragons: complicated, dangerous, and likely to drag them away from their real work into procedural swamps.</P><P>The result? 
Slow testing cycles, dependency on integration specialists, and endless waiting for external partners to provide messages or confirmations.</P><H2 id="toc-hId-1565361632"><STRONG>Changing the story with simulation agents</STRONG></H2><P>The better path is not to force functional consultants into scripting or automation frameworks, but to give them <STRONG>simulation agents</STRONG> that mimic the system environment.</P><P>Instead of saying: <EM>“learn a test framework and run automated scripts,”&nbsp;</EM>we can say: <EM>“here are agents that simulate your missing EDI partner, your unavailable 3rd party/legacy system and you can test with them right now.”</EM></P><P>This changes the game:</P><UL><LI>No competition with automation teams.</LI><LI>No learning curve with complex frameworks and procedural delays.</LI><LI>Consultants get something they instantly understand: <STRONG>realistic test conditions on demand, using actual historical production data.</STRONG></LI></UL><H2 id="toc-hId-1368848127"><STRONG>How Int4 Suite Agents Work</STRONG></H2><P>With Int4 Suite, simulation agents provide a simple interface: the consultant performs the transaction in SAP, the agent feeds in authentic historical test data, and then automatically checks whether the newly generated EDI or non-EDI message matches what was sent in production.</P><P>Below are examples of key agents and how they fit into typical integrated OTC and P2P processes</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Agents.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/322568iEB56C9B840B5AB9A/image-size/large?v=v2&amp;px=999" role="button" title="Agents.png" alt="Agents.png" /></span></P><P>Figure - meet the Int4 Suite Agents&nbsp;</P><H3 id="toc-hId-1301417341"><STRONG>EDI Partner Agent (based on historical data)</STRONG></H3><P><STRONG>Role:</STRONG> Replays authentic production EDI messages from trading partners (ORDERS, 
DESADV, INVOIC).</P><P><STRONG>How it works:</STRONG></P><UL><LI>Consultant performs the transaction in SAP (e.g., creates delivery, sends invoice).</LI><LI>Agent provides historical test data from previously exchanged documents.</LI><LI>Agent automatically compares the newly generated EDI message with the production one for a similar case.</LI></UL><P><STRONG>OTC examples:</STRONG></P><UL><LI>Consultant in the OTC team replays historical <STRONG>ORDERS</STRONG> from the largest customer and verifies whether, after pricing condition changes, the system still calculates correctly.</LI><LI>Consultant tests goods receipt with historical <STRONG>DESADV</STRONG> data; agent compares the new EDI message against the production one.</LI><LI>Consultant issues a sales invoice (<STRONG>INVOIC</STRONG>) and agent validates it against the original production invoice, checking VAT rules.</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Select_Message_with_MatNR.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/322531i577899AB75CC39FB/image-size/large?v=v2&amp;px=999" role="button" title="Select_Message_with_MatNR.png" alt="Select_Message_with_MatNR.png" /></span></P><P>Figure - Select Historical EDI messages from Production system which need to be rerun on Test System&nbsp;</P><P><STRONG>P2P examples:</STRONG></P><UL><LI>Consultant creates a purchase order; agent provides a historical <STRONG>ORDRSP</STRONG> where the supplier delivered partially, then compares the new outbound message.</LI><LI>Agent simulates a supplier <STRONG>INVOIC</STRONG> and verifies whether workflows behave the same after configuration changes.</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="change_data_for_Each_message.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/322532iA3CB0B92921AF310/image-size/large?v=v2&amp;px=999" 
role="button" title="change_data_for_Each_message.png" alt="change_data_for_Each_message.png" /></span></P><P>Figure - manipulate the historical/production landscape EDI message data before sending that to the test environment</P><H3 id="toc-hId-1104903836"><STRONG>Unavailable System Agent (Non-SAP)</STRONG></H3><P><STRONG>Role:</STRONG> Simulates external systems (banks, customs, WMS/TMS, tax portals) with historical production communications.</P><P><STRONG>How it works:</STRONG></P><UL><LI>Consultant runs the business process in SAP.</LI><LI>Agent injects historical test data from the external system.</LI><LI>Agent compares the new outbound message with the original production one.</LI></UL><P><STRONG>OTC examples:</STRONG></P><UL><LI>Consultant tests e-invoicing using a historically rejected invoice; agent checks whether the new output matches the original and if the new rules handle it.</LI><LI>Consultant tests shipment confirmations with historical WMS responses.</LI></UL><P><STRONG>P2P examples:</STRONG></P><UL><LI>Consultant tests bank payments; agent supplies historical payment files and checks the new output structure.</LI><LI>Consultant tests tax submissions; agent provides historical records and compares new vs. 
old messages.</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Display_the_EDI_Data_for_test.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/322533i24936AE9F9E8CEDC/image-size/large?v=v2&amp;px=999" role="button" title="Display_the_EDI_Data_for_test.png" alt="Display_the_EDI_Data_for_test.png" /></span></P><P>Figure - Display the historical EDI data used on production landscape before rerunning that on the test environment&nbsp;</P><H3 id="toc-hId-908390331"><STRONG>Historical Data Agent</STRONG></H3><P><STRONG>Role:</STRONG> The production message librarian, replays large volumes or special cases directly from production.</P><P><STRONG>How it works:</STRONG></P><UL><LI>Consultant triggers transactions in SAP (bulk orders, invoices, returns).</LI><LI>Agent provides the historical payloads.</LI><LI>Agent verifies test messages against the production equivalents.</LI></UL><P><STRONG>OTC examples:</STRONG></P><UL><LI>Consultant replays a “Black Friday” scenario with 10,000 <STRONG>ORDERS</STRONG>; agent validates each new EDI message against its production twin.</LI><LI>Consultant tests credit memo flows from historical complaint cases.</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="run_all_test_cases.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/322534i9D036A8DB1F8EC62/image-size/large?v=v2&amp;px=999" role="button" title="run_all_test_cases.png" alt="run_all_test_cases.png" /></span></P><P>Figure – run many historical messages on the test environment for bulk testing purposes&nbsp;</P><P><STRONG>P2P examples:</STRONG></P><UL><LI>Consultant tests bulk supplier invoices; agent validates the outputs against production.</LI><LI>Consultant tests blocked spare-parts orders with historical references.</LI></UL><H3 id="toc-hId-711876826"><STRONG>Integration Consultant 
Agent</STRONG></H3><P><STRONG>Role:</STRONG> A technical assistant that retrieves and compares messages from middleware layers (PI/PO, CPI).</P><P><STRONG>How it works:</STRONG></P><UL><LI>Consultant executes the business process in SAP.</LI><LI>Agent fetches the historical integration payload.</LI><LI>Agent highlights differences between new and historical messages.</LI></UL><P><STRONG>OTC examples:</STRONG></P><UL><LI>Consultant creates a sales order; agent compares the IDoc/XML message with the historical SAP Integration Suite payload.</LI><LI>Agent highlights mapping differences at field level after configuration changes.</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="validate_EDI.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/322535i531DC430CE0A96BA/image-size/large?v=v2&amp;px=999" role="button" title="validate_EDI.png" alt="validate_EDI.png" /></span></P><P>Figure – fetch an EDI payload produced by the integration platform (SAP Integration Suite, etc.) from a newly created business document and compare with the historical one from the production landscape without asking SAP Integration Consultant for help&nbsp;</P><P><STRONG>P2P examples:</STRONG></P><UL><LI>Consultant enters a supplier invoice; agent pulls historical Ariba-CPI payload and checks consistency.</LI><LI>Agent validates purchase order messages across production and test runs.</LI></UL><H2 id="toc-hId-386280602"><STRONG>Why it matters for SAP S/4HANA projects</STRONG></H2><P>In S/4HANA transformations, the external world doesn’t care about your internal redesigns. Customers, suppliers, and banks still expect exactly the same messages they used to get. 
Outbound and inbound interfaces are fragile bridges that must remain stable.</P><P>By equipping functional consultants with Int4 Suite agents:</P><UL><LI>Test cycles shorten dramatically.</LI><LI>Reliance on external partners and scarce integration resources drops.</LI><LI>Confidence in end-to-end quality rises.</LI></UL><P>This isn’t about replacing automation experts or integration teams. It’s about enabling functional consultants to independently confirm that what leaves SAP (or comes into it) is still what the outside world expects.</P><P>It’s the missing puzzle piece for smooth, low-friction testing of integrated business processes in SAP transformations.</P><H2 id="toc-hId-189767097">More information:&nbsp;</H2><DIV class=""><A class="" href="https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/agentic-ai-testing-for-greenfield-s-4hana-outbound-interfaces-part-1/ba-p/14232427" target="_blank">Agentic AI Testing for Greenfield S/4HANA Outbound Interfaces - Part 1</A></DIV><DIV class=""><A class="" href="https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/int4-suite-your-sap-joule-testbed-and-skills-builder/ba-p/14229790" target="_blank">Int4 Suite — your SAP Joule testbed and skills builder</A></DIV><DIV class=""><A class="" href="https://community.sap.com/t5/technology-blog-posts-by-members/process-aware-agentic-testing-of-sap-with-int4-suite/ba-p/14196856" target="_blank">Process-Aware Agentic Testing of SAP with Int4 Suite</A></DIV><DIV class=""><A class="" href="https://community.sap.com/t5/technology-blog-posts-by-members/agentic-testing-and-simulation-with-int4-suite-s-sap-business-knowledge/ba-p/14076453" target="_blank">Agentic Testing and Simulation with Int4 Suite's SAP Business Knowledge Graph</A></DIV><DIV class="">&nbsp;</DIV><P>&nbsp;</P> 2025-10-03T11:41:59.491000+02:00 
https://community.sap.com/t5/technology-blog-posts-by-members/integration-between-sap-cpi-and-sap-datasphere-jdbc-connection/ba-p/14256236 Integration Between SAP CPI and SAP DataSphere (JDBC Connection) 2025-10-31T08:17:01.679000+01:00 MUGILAN_KANAGARAJ https://community.sap.com/t5/user/viewprofilepage/user-id/2190179 <P><STRONG>Integration Between SAP CPI and SAP DataSphere (JDBC Connection)</STRONG> <SPAN><BR /><BR /></SPAN>JDBC – Java Database Connectivity<SPAN><BR /><BR /></SPAN>Why JDBC Is Recommended Over the OData API:<SPAN><BR /></SPAN>JDBC is recommended over OData when consuming large-scale records (e.g., 100,000+) because JDBC streams data directly from the database with better performance and less overhead, while OData is optimized for lightweight, paginated, service-based access.<SPAN><BR /><BR /></SPAN>Problem statement: <SPAN><A href="https://userapps.support.sap.com/sap/support/knowledge/en/3337495" target="_blank" rel="noopener noreferrer">3337495 - OData API returns less records than expected due paging<BR /></A></SPAN>Pagination limits in OData and Ariba APIs can be handled in SAP CPI using a looping process call. I’ll cover this with a clear explanation in an upcoming post.<SPAN><BR /><BR /></SPAN><STRONG>Goal:</STRONG> Connect CPI to a database used by DataSphere (JDBC) and run a simple read from an Analytical Model / Table / View. 
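The looping process call mentioned above boils down to requesting pages until the source returns an empty page. A minimal Python sketch of that loop, where `fetch_page` is a stand-in for the actual OData call with `$skip`/`$top` query options, and the 250-record server cap and 1000-record dataset are assumed values for illustration only:

```python
# Sketch of the "looping process call" idea: keep requesting pages until the
# source returns an empty page. `fetch_page` is a stub for an HTTP call using
# $skip/$top; PAGE_CAP mimics a server-side paging limit (assumed values).

PAGE_CAP = 250
DATASET = list(range(1000))  # pretend backend table with 1000 records

def fetch_page(skip: int, top: int) -> list[int]:
    """Stub for GET ...?$skip={skip}&$top={top}; the server caps $top at PAGE_CAP."""
    return DATASET[skip:skip + min(top, PAGE_CAP)]

def fetch_all(page_size: int = 500) -> list[int]:
    """Drain the source. Looping until the page is *empty* (not merely shorter
    than requested) stays correct even when the server silently caps the page
    size, which is the truncation pitfall behind the KBA linked above."""
    records: list[int] = []
    skip = 0
    while True:
        page = fetch_page(skip, page_size)
        if not page:
            break
        records.extend(page)
        skip += len(page)
    return records

print(len(fetch_all()))  # all 1000 records despite the 250-record server cap
```

In CPI, the same pattern maps to a Looping Process Call whose break condition mirrors the empty-page test above.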
<SPAN><BR /><BR /></SPAN>For the Write / Delete / Update operations, the SAP Help Portal link in the References section provides the syntax.<SPAN><BR /><BR /></SPAN><STRONG>Prerequisites:</STRONG></P><OL><LI><STRONG>SAP DataSphere</STRONG> – Subscribed account (⚠ Trial has limited features, JDBC not supported)</LI><LI><STRONG>SAP Integration Suite</STRONG> – Subscribed or Trial (JDBC actions supported)</LI></OL><P>&nbsp;</P><P><STRONG>SAP DataSphere Step-by-Step Guide:</STRONG><SPAN><BR /><BR /></SPAN></P><TABLE><TBODY><TR><TD width="301"><P><STRONG><SPAN>Step</SPAN></STRONG></P></TD><TD width="301"><P><STRONG><SPAN>Action / Notes</SPAN></STRONG></P></TD></TR><TR><TD width="301"><P><SPAN>1. Create a Space</SPAN></P></TD><TD width="301"><P><SPAN>DataSphere → Space Management → <EM>New Space</EM> → Name it → Create.</SPAN></P></TD></TR><TR><TD width="301"><P><SPAN>2. Create Table / Analytical Model</SPAN></P></TD><TD width="301"><P><SPAN>Data Builder → In your Space → <EM>New</EM> → Table or Analytical Model → define fields &amp; data types → Save &amp; Publish.<BR />*Verify that the Table/Model deployed successfully.*</SPAN></P></TD></TR><TR><TD width="301"><P><SPAN>3. Prepare / Load Data</SPAN></P></TD><TD width="301"><P><SPAN>Load data manually for test cases, or import a CSV into the table via Data Builder / Data Integration.</SPAN></P></TD></TR><TR><TD width="301"><P><SPAN>4. Note Schema &amp; Object Names</SPAN></P></TD><TD width="301"><P><SPAN>Record the schema name, table name, and view names for JDBC SQL use.<BR />* The Space name you created is the SCHEMA name; also note the Table / Model name *</SPAN></P></TD></TR><TR><TD width="301"><P><SPAN>5. Decide Where to Create DB User</SPAN></P></TD><TD width="301"><P><SPAN>If HANA Cloud → use HANA Cockpit/DB Explorer. If on-prem DB → use DB admin tools or contact your DB Admin.<BR />* We are using the HANA Cloud system for this practical session *</SPAN></P></TD></TR><TR><TD width="301"><P><SPAN>6. 
Create JDBC DB User</SPAN></P></TD><TD width="301"><P><SPAN>DB admin tool → Security/Users → <EM>New User</EM> → set username &amp; strong password → Save.<BR />*Check that the user's status is Active*</SPAN></P></TD></TR><TR><TD width="301"><P><SPAN>7. Grant Privileges for the DB user</SPAN></P></TD><TD width="301"><P><SPAN>Assign only the required privileges (e.g., <STRONG>SELECT</STRONG> for read; add <STRONG>INSERT/UPDATE/DELETE</STRONG> for CRUD). Best practice: create a role </SPAN>JDBC_ROLE<SPAN> and assign it.</SPAN></P></TD></TR><TR><TD width="301"><P><SPAN>8. Prepare JDBC Connection Details</SPAN></P></TD><TD width="301"><P><SPAN>Gather the JDBC URL (e.g., sample URL from DataSphere: z*********-abc.hana.prod-eu10.hanacloud.ondemand.com)<BR />Format for the CPI JDBC Material:<BR /></SPAN>jdbc:sap://&lt;host&gt;:&lt;port&gt;/?encrypt=true&amp;validateCertificate=true</P></TD></TR></TBODY></TABLE><P><SPAN><BR /></SPAN><STRONG>SAP Integration Suite Step-by-Step Guide:</STRONG><SPAN><BR /><BR /></SPAN><STRONG>Create a Package &amp; Artifact</STRONG></P><UL><LI>In CPI → <EM>Design</EM> → Create a new package → Add an integration flow artifact.</LI><LI><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Make sure your CPI user has the required roles to create and access design-time artifacts.<SPAN><BR /><BR /></SPAN></LI></UL><P><STRONG>Go to Monitoring → JDBC Material</STRONG></P><UL><LI>In CPI → <EM>Monitor</EM> → Integrations and APIs → Manage Security → JDBC Material<span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MUGILAN_KANAGARAJ_9-1761743104479.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/333936i6D0241B617F00B77/image-size/medium?v=v2&amp;px=400" role="button" title="MUGILAN_KANAGARAJ_9-1761743104479.png" alt="MUGILAN_KANAGARAJ_9-1761743104479.png" /></span><P>&nbsp;</P></LI><LI>→ Add <EM>JDBC Data Source. 
</EM>→ Select HANA cloud <SPAN><BR /></SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MUGILAN_KANAGARAJ_10-1761743104489.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/333937iE85BFA171AC5354A/image-size/medium?v=v2&amp;px=400" role="button" title="MUGILAN_KANAGARAJ_10-1761743104489.png" alt="MUGILAN_KANAGARAJ_10-1761743104489.png" /></span><P>&nbsp;</P><SPAN><BR /><BR /></SPAN></LI><LI>Provide JDBC URL in the correct format (e.g., jdbc:sap://&lt;hana-host&gt;:443/?encrypt=true&amp;validateCertificate=true).<SPAN><BR /></SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MUGILAN_KANAGARAJ_11-1761743104501.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/333938i78679B85543ED506/image-size/medium?v=v2&amp;px=400" role="button" title="MUGILAN_KANAGARAJ_11-1761743104501.png" alt="MUGILAN_KANAGARAJ_11-1761743104501.png" /></span><P>&nbsp;</P></LI><LI>Enter DB username and password (use the dedicated JDBC user created earlier in DataSphere).<SPAN><BR /></SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MUGILAN_KANAGARAJ_12-1761743104508.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/333940iFF24BD979E951623/image-size/medium?v=v2&amp;px=400" role="button" title="MUGILAN_KANAGARAJ_12-1761743104508.png" alt="MUGILAN_KANAGARAJ_12-1761743104508.png" /></span><P>&nbsp;</P></LI><LI>Save and deploy the JDBC material.</LI></UL><P><STRONG>Apply JDBC Material in iFlow</STRONG></P><UL><LI>In your integration flow, configure the JDBC receiver adapter → select the JDBC data source created.</LI><LI>Use SQL queries (SELECT) in the <EM>Processing tab</EM> or provide XML query body. 
<SPAN>Here, I’m using SQL </SPAN>SELECT * to<SPAN> fetch all records from the table.</SPAN></LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MUGILAN_KANAGARAJ_13-1761743104513.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/333939i0C4A23EA2DD7435A/image-size/medium?v=v2&amp;px=400" role="button" title="MUGILAN_KANAGARAJ_13-1761743104513.png" alt="MUGILAN_KANAGARAJ_13-1761743104513.png" /></span></P><P>&nbsp;</P><P><SPAN><BR /><STRONG>Step 1: Timer Start </STRONG><BR />&nbsp;In this iFlow, the Start Timer is configured with a Simple Schedule → None → On Deployment, which means the integration flow automatically triggers immediately after deployment.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MUGILAN_KANAGARAJ_14-1761743104520.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/333941iB3F7F36AA2A7DE16/image-size/medium?v=v2&amp;px=400" role="button" title="MUGILAN_KANAGARAJ_14-1761743104520.png" alt="MUGILAN_KANAGARAJ_14-1761743104520.png" /></span></P><P>&nbsp;</P><P><SPAN><BR /><STRONG>Step 2: Content Modifier</STRONG><BR />Use this SQL query to fetch all records with the body operation.<BR />&nbsp;SELECT * FROM "&lt;Schema&gt;"."&lt;Model/TableName&gt;"<BR /></SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MUGILAN_KANAGARAJ_15-1761743104527.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/333943i58EEBA8EA1D3F197/image-size/medium?v=v2&amp;px=400" role="button" title="MUGILAN_KANAGARAJ_15-1761743104527.png" alt="MUGILAN_KANAGARAJ_15-1761743104527.png" /></span></P><P>&nbsp;</P><P><SPAN><BR /><BR /><STRONG>Step 3: Request Reply &amp; JDBC Receiver Adapter</STRONG><BR />&nbsp;→ Use the deployed JDBC Data Source alias in the JDBC Material in the previous step and set Max records count based on your 
requirement.<BR />→ JDBC Maximum Records per call:&nbsp; 2,147,483,647<BR /></SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MUGILAN_KANAGARAJ_16-1761743104534.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/333944i6290D296884CAE5A/image-size/medium?v=v2&amp;px=400" role="button" title="MUGILAN_KANAGARAJ_16-1761743104534.png" alt="MUGILAN_KANAGARAJ_16-1761743104534.png" /></span></P><P>&nbsp;</P><P><SPAN><BR />Sample data Response from JDBC Connection:<BR /></SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MUGILAN_KANAGARAJ_17-1761743104538.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/333942iC7431EDC092521C8/image-size/medium?v=v2&amp;px=400" role="button" title="MUGILAN_KANAGARAJ_17-1761743104538.png" alt="MUGILAN_KANAGARAJ_17-1761743104538.png" /></span></P><P>References :<SPAN><BR />same blog by me for clear picture quality:&nbsp;<A href="https://community.sap.com/t5/technology-q-a/integration-of-sap-cpi-and-sap-datasphere-using-jdbc/qaq-p/14256172" target="_blank">Integration of SAP CPI and SAP DataSphere using JD... 
- SAP Community</A><BR /></SPAN>CPI JDBC – XML Query in Body for CRUD Operations (Syntax Guide)<SPAN><BR /></SPAN>&nbsp;link:<SPAN><BR /><A href="https://help.sap.com/docs/cloud-integration/sap-cloud-integration/payload-and-operation" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/cloud-integration/sap-cloud-integration/payload-and-operation</A></SPAN></P> 2025-10-31T08:17:01.679000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/accelerating-sap-btp-integrations-with-gpt-5-introducing-the-sap/ba-p/14258240 Accelerating SAP BTP Integrations with GPT-5: Introducing the SAP Integration AI Assistant 2025-11-01T05:32:22.246000+01:00 RameshK_Varanganti https://community.sap.com/t5/user/viewprofilepage/user-id/51927 <P><STRONG>Introduction</STRONG></P><P>AI is transforming how we work. Over the last few years, tools like ChatGPT have evolved from simple chat interfaces to intelligent partners that can understand complex technical problems.</P><P>As an SAP Integration professional, I started thinking — what if we could bring that same level of intelligence into the world of SAP BTP Integration Suite?</P><P>Having worked with SAP CPI, API Management, and Event Mesh, I’ve often seen teams spending countless hours writing Groovy scripts, debugging mappings, and testing iFlows. These tasks are essential but repetitive — and that’s exactly where AI can help.</P><P>This thought led me to create something new: the <A title="SAP Integration AI Assistant " href="https://chatgpt.com/g/g-6905731791688191be5626a497a3637f-sap-integration-ai-assistant" target="_self" rel="nofollow noopener noreferrer"><STRONG>SAP Integration AI Assistant</STRONG> </A>— an intelligent support tool built using GPT-5, designed specifically for SAP Integration developers and consultants.</P><P>&nbsp;</P><P><STRONG>Challenges in SAP Integrations</STRONG></P><P>Working with <STRONG>SAP Cloud Integration (CPI)</STRONG> can be both rewarding and demanding. 
Integration developers often face common challenges such as:</P><UL><LI><P>Debugging and formatting Groovy scripts</P></LI><LI><P>Managing complex mappings and error handling</P></LI><LI><P>Designing reusable iFlows that align with SAP standards</P></LI><LI><P>Searching through documentation for the right solution</P></LI></UL><P>While <STRONG>SAP provides excellent documentation</STRONG>, finding exactly what you need under project pressure can take time. AI can make this process faster and smarter — assisting developers with real-time insights.</P><P>Meet the <A href="https://chatgpt.com/g/g-6905731791688191be5626a497a3637f-sap-integration-ai-assistant" target="_self" rel="nofollow noopener noreferrer"><STRONG>SAP Integration AI Assistant</STRONG></A></P><P>The SAP Integration AI Assistant is built on GPT-5 and trained with integration-specific knowledge. It helps developers solve real project problems efficiently.</P><P>Here’s what it can do:</P><UL><LI>Code Support: Generate, clean, and format Groovy/XSLT/JavaScript scripts with consistent structure.</LI><LI>Design Guidance: Suggest iFlow layouts, adapter configurations, and error-handling approaches aligned with best practices.</LI><LI>Troubleshooting: Analyze error messages or mapping issues and suggest probable fixes.</LI><LI>Learning Tool: Act as a companion for those new to SAP Integration Suite, explaining concepts and configurations clearly.</LI></UL><P>It’s like having a virtual SAP Integration mentor — available anytime you need it.</P><P><STRONG>Built from Real Project Experience</STRONG></P><P>This idea came from years of working on integration projects across global landscapes — building, optimizing, and reviewing iFlows in SAP BTP.</P><P>The assistant is designed to share this accumulated experience. 
It transforms real-world learnings into actionable insights that any developer can use directly in their daily work.</P><P><STRONG>Getting Started</STRONG></P><P>You can begin experimenting with the SAP Integration AI Assistant right away.</P><P>Use it to:</P><UL><LI>Generate or optimize Groovy/XSLT code</LI><LI>Validate iFlow designs and configurations</LI><LI>Troubleshoot mapping or adapter errors</LI><LI>Explore integration scenarios with API Management and Event Mesh</LI></UL><P>&nbsp;Try it here:&nbsp; <A href="https://chatgpt.com/g/g-6905731791688191be5626a497a3637f-sap-integration-ai-assistant" target="_self" rel="nofollow noopener noreferrer">SAP Integration AI Assistant</A>&nbsp;&nbsp;&nbsp;</P><P>&nbsp;</P><P>Sample Examples</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2025-10-31 233848.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/334866i655BEE29E5CEC46B/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Screenshot 2025-10-31 233848.png" alt="Screenshot 2025-10-31 233848.png" /></span><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2025-11-01 004112.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/334867iDC307C0070383E53/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Screenshot 2025-11-01 004112.png" alt="Screenshot 2025-11-01 004112.png" /></span><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2025-11-01 004415.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/334868i865583BD080C84D3/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2025-11-01 004415.png" alt="Screenshot 2025-11-01 004415.png" /></span></P><P>&nbsp;</P><P><STRONG>Conclusion</STRONG></P><P>AI is no longer a future trend — it’s an everyday tool 
for developers.<BR />By combining SAP Integration expertise with AI capabilities, we can build integrations faster, reduce manual effort, and improve quality.</P><P>&nbsp;</P><P>&nbsp;</P><P>#SAPBTP,#IntegrationSuite,#CPI,#SAPIntegration,#AI,#Automation,#GPT5,#CloudIntegration,#Groovy,#XSLT,#SAPDevelopers, <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/51927">@RameshK_Varanganti</a></P><P>&nbsp;</P> 2025-11-01T05:32:22.246000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/integrations-as-part-of-the-sap-leanix-enterprise-architecture/ba-p/14270517 Integrations as part of the SAP LeanIX (Enterprise) Architecture? 2025-11-16T17:39:36.351000+01:00 stevang https://community.sap.com/t5/user/viewprofilepage/user-id/7643 <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="GLUE _v3.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/341702i5FD045789E9DA9A6/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="GLUE _v3.png" alt="GLUE _v3.png" /></span></P><P>We may have our state-of-the-art <STRONG>Applications</STRONG>. But how do we make them work together? We may have all our <STRONG>Data Objects</STRONG> properly designed and governed.
But how do we unlock them for the Users and Agents?</P><P>Integration – the key is in the<STRONG> Integration Services</STRONG>!</P><P>We have to integrate Applications, Users and Agents – in order to unlock our Data and make our Business Processes run seamlessly and efficiently, across the whole Enterprise.</P><P>I’ve talked about the importance of recognizing the work on integrations in the Enterprise Architecture domain (my article in SAP EA Community: <A href="https://community.sap.com/t5/enterprise-architecture-discussions/enterprise-architecture-for-integration/td-p/13937459" target="_blank">Enterprise Architecture for integration?</A>), but now let’s go through a few practical examples…</P><H2 id="toc-hId-1765453939">Integrations @ EAM</H2><P>If we use <a href="https://community.sap.com/t5/c-khhcw49343/SAP+LeanIX+solutions/pd-p/73554900100700003401" class="lia-product-mention" data-product="1290-1">SAP LeanIX solutions</a>[1] as an EAM (Enterprise Architecture Management) tool, and we want to model our Enterprise Architecture – how do we manage it, if we do not “connect” the dots?</P><P>Well, we cannot!</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 1. Integration Services" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/341704i3992DE60163DCC7D/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 1. Integration Services.jpg" alt="Figure 1. Integration Services" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 1. Integration Services</span></span></P><P>Of course, integrations are far more complex than just “connecting” dots. In this example, we replicate Business Partner Customers to two order-taking channels – a fairly common scenario in Omnichannel Sales.</P><UL><LI>The Source is <EM>Master Data System</EM> (e.g.
SAP MDG) – this is clearly marked as an <STRONG>Application</STRONG> in SAP LeanIX (of course, we may have multiple <STRONG>Applications</STRONG> within one “box”, but overall object modeling is a different story, not covered by this article…).</LI><LI>Target Systems are <EM>B2B Channel</EM> and <EM>Field Sales</EM> – again I am color-coding those as <STRONG>Applications</STRONG>.</LI><LI>From Source to Target Systems, we are replicating <EM>Customer</EM> – this is the <STRONG>Data Object</STRONG>.</LI><LI>Finally, we have an <STRONG>Interface</STRONG> (if we use SAP LeanIX notation), or <STRONG>Integration Service</STRONG> (in a broader sense). We may be “happy” seeing “only” one line, representing <EM>BP Customer Replication</EM>, connecting Source and Target Systems… This is probably sufficient for high-level C4[2] diagramming and modeling.</LI><LI>However, if we want more details on <STRONG>Interfaces</STRONG> (or the dots “connecting” <STRONG>Applications</STRONG>) – i.e. how it actually works, what the underlying technologies and capabilities are, which patterns were used, etc. – then we must go a few levels “below” in the integration hierarchy. Each “parent” <STRONG>Interface</STRONG> (or <STRONG>Integration Service</STRONG>) has one or more “child” <STRONG>Interface Components</STRONG> (or <STRONG>Integration Service Components</STRONG>) – e.g. APIs, iFlows, Queues, etc.; and those “child” <STRONG>Interface Components</STRONG> “use” some <STRONG>IT Components</STRONG> – e.g. SAP CPI, API-M, Azure Service Bus, etc.</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 2. SAP LeanIX Interface Fact Sheet" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/341705iBB59A5B73D01AF6D/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 2. LeanIX Interface Fact Sheet.jpg" alt="Figure 2.
SAP LeanIX Interface Fact Sheet" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 2. SAP LeanIX Interface Fact Sheet</span></span></P><P>In SAP LeanIX each object is modeled with its own <STRONG>Fact Sheet</STRONG>, thus each <STRONG>Interface</STRONG>, whether “parent” or “child”, has its <STRONG>Fact Sheet</STRONG>. There are some powerful features when defining <STRONG>Fact Sheets</STRONG>:</P><UL><LI>We can define descriptions and attributes, and we can also add links to external resources – e.g. we may have external repositories with functional and technical details in Azure DevOps Wikis or GitHub.</LI><LI>We can link an Interface with its “parent” or “child”.</LI><LI>We can link Interfaces among themselves into a “left/right” flow.</LI><LI>We can link with the appropriate <STRONG>Application(s)</STRONG>, <STRONG>Data Object(s)</STRONG> and <STRONG>IT Component(s)</STRONG>.</LI></UL><P>It is interesting to note that the general recommendation in SAP LeanIX is to use <STRONG>Applications </STRONG>for Business Applications, while Technical Applications (like SAP Integration Suite) that enable specific <STRONG>Interfaces</STRONG> should be modeled as <STRONG>IT Components</STRONG>. Although there is no strict limitation preventing us from setting, e.g., SAP Integration Suite as an <STRONG>Application</STRONG> (this will of course create different pictures in C4 diagrams), the important thing is to choose the approach “once” and stay consistent.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 3. Integration model" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/341706iDEBD9B0D6E4584A0/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 3. Integration model.jpg" alt="Figure 3.
Integration model</span></span></P><P>Object models with connected <STRONG>Interfaces</STRONG> can easily become very complex. But do we really have a choice – not to model full integrations? Is it sufficient to have only a high-level Enterprise Architecture in an EAM tool?</P><P>I guess it all depends on preferences, but I would say – no, it is not sufficient.</P><P>And it’s not about – “hey SAP LeanIX, give us the possibility to model <STRONG>Interfaces</STRONG> in more granularity” – this is about the overall management of the Enterprise Architecture. How can we manage something we do not see? And yes, <STRONG>integration is really a key, or a backbone, of the daily operations</STRONG> of any Enterprise:</P><UL><LI>The Architect View(point) must include a sufficient level of detail on all Architectural components – otherwise we are “blind” to the AS-IS state.</LI><LI>Building new or reusing existing integrations cannot be done without appropriate visibility into the underlying layers of the <STRONG>Interface(s)</STRONG> or <STRONG>Integration Service(s)</STRONG> – otherwise we cannot make an educated decision for the TO-BE state.</LI><LI>If we do not have an adequate view of AS-IS and TO-BE, we cannot properly understand the Gap and build the roadmap, planning and strategy.</LI></UL><H2 id="toc-hId-1568940434">Integrations @ EA Frameworks</H2><P>Oops – it looks like there is no Integration Architecture domain in TOGAF©[2]?</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 4. Where is Integration Architecture" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/341712i7A2F7465AAA13876/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 4. Where is Integration Architecture_v2.jpg" alt="Figure 4.
Where is Integration Architecture</span></span></P><P>Where is the Integration Architecture domain?</P><UL><LI>Is it part of B. Business Architecture? Probably not, but aren’t we saying that integrations are enabling Integrated Business Processes…</LI><LI>Maybe it’s part of C. Information System Architecture, also (usually) split into Application Architecture and Data Architecture? It looks reasonable to be “somewhere” here, but still, it’s not “formally” listed here…</LI><LI>Or is it D. Technology Architecture? Integrations are often observed as a “technology” domain, but this is not an entirely correct perception…</LI></UL><P>Hold on, nothing is “carved in stone”…</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 5. LeanIX Application and Data Architecture" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/341711i446A68918434021F/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 5. LeanIX Application and Data Architecture_v2.jpg" alt="Figure 5. LeanIX Application and Data Architecture" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 5. LeanIX Application and Data Architecture</span></span></P><P><STRONG>Interfaces </STRONG>(or <STRONG>Integration Services</STRONG>) should be observed on the same level as <STRONG>Applications</STRONG> and <STRONG>Data Objects</STRONG> – and in the SAP LeanIX metamodel[3], we do indeed see that <STRONG>Interface(s)</STRONG> are part of the Application and Data Architecture.</P><UL><LI>An <STRONG>Application</STRONG> “provides” and/or “consumes” an <STRONG>Interface</STRONG>.</LI><LI>An <STRONG>Interface</STRONG> “transfers” a <STRONG>Data Object.</STRONG></LI></UL><P>Integrations are not just dots or lines – they are much more, no matter whether we observe some underlying iPaaS or Event Broker as an <STRONG>IT Component</STRONG> or an <STRONG>Application</STRONG>.
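The metamodel relationships just listed (an Application “provides”/“consumes” an Interface, an Interface “transfers” a Data Object, and “parent” Interfaces with “child” Interface Components that use IT Components) can be sketched as plain data structures. This is a minimal illustration in Python only; the class and field names are my own and are not the SAP LeanIX API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the relationships described in the text. The class
# names are hypothetical; they do NOT correspond to the SAP LeanIX API.

@dataclass
class ITComponent:
    name: str  # e.g. SAP CPI, API-M, Azure Service Bus

@dataclass
class InterfaceComponent:
    name: str                                  # "child" level: an API, iFlow, queue, ...
    uses: list = field(default_factory=list)   # -> ITComponent

@dataclass
class Interface:
    name: str                                       # "parent" level: the Integration Service
    transfers: str                                  # the Data Object, e.g. "Customer"
    provided_by: str                                # the providing Application
    consumed_by: list = field(default_factory=list) # consuming Applications
    children: list = field(default_factory=list)    # -> InterfaceComponent

# The Business Partner replication example from the text:
cpi = ITComponent("SAP CPI")
replication = Interface(
    name="BP Customer Replication",
    transfers="Customer",
    provided_by="Master Data System",
    consumed_by=["B2B Channel", "Field Sales"],
    children=[InterfaceComponent("BP Customer Replication iFlow", uses=[cpi])],
)

# Drilling down from the single "line" in the diagram to the technology below it:
for child in replication.children:
    print(child.name, "->", [c.name for c in child.uses])
```

The point of the sketch: the one line drawn between Source and Target in a C4-style diagram is really a small hierarchy, and each level carries information an architect needs.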
And if we accept these guidelines, then we have to “pay” attention to modeling integrations correctly and fully.</P><P>Simple and clear…</P><H2 id="toc-hId-1372426929">Integrations @ AI use-cases?</H2><P>The first “reply” would be – yes, of course. But let’s think about where we are heading with AI…</P><P>MCP Servers can be built on tables – accessing Data Objects directly. It can be done, yes, but do we want this, or would we prefer some sort of governance by exposing Data Objects via OData with API Management in between? That is a question yet to be answered… But I know my answer.&nbsp;Of course, in the "world" of Microsoft Dynamics 365, MCP Servers are actually based on Dataverse objects or tables (not APIs), but this is different, I guess…</P><P>Now, what about A2A (Agent-to-Agent)? This is also some sort of integration… Well, if it is integration, it is an integration – “things” are not being integrated on their own...&nbsp;</P><P>Why am I bringing up AI? AI is now “mixing and matching” with everything. While AI Agents and Agentic AI will most certainly impact integration technology, the need to precisely “craft” <STRONG>Integration Services</STRONG> will not disappear. It may change, but it will not disappear!</P><P>Putting this in the EAM perspective – yes, of course we want to manage all our AI investments, ideally in one EAM tool, and SAP LeanIX now comes with the AI agent hub[4] – equipped to discover, manage and govern all AI agents. Of course, A2A and MCP are somehow an inevitable part of it.
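The governance question above can be made concrete with a small sketch: the difference between an agent reading tables directly and one going through an OData/API-Management-style layer that checks authorization and filters fields. Everything here (the table, the scope name, the filtering rule) is hypothetical and purely for illustration, not any real SAP or Microsoft API.

```python
# Hedged sketch: direct table access vs. access mediated by a governance
# layer. All names and rules are made up for illustration.

CUSTOMER_TABLE = [
    {"id": "C001", "name": "ACME", "credit_limit": 50000},
]

def read_table_directly():
    # An MCP server built straight on tables: no checks, full rows exposed.
    return CUSTOMER_TABLE

def governed_read(client_scopes):
    # What an OData service behind API Management can add: an authorization
    # check plus field-level filtering before any data leaves the system.
    if "customer.read" not in client_scopes:
        raise PermissionError("scope 'customer.read' required")
    # Expose only the fields the Data Object contract allows.
    return [{"id": r["id"], "name": r["name"]} for r in CUSTOMER_TABLE]

print(governed_read({"customer.read"}))  # credit_limit never leaves the system
```

The design choice is the same one the text argues for: put the policy in one governed layer, instead of trusting every consumer of the raw tables.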
Another reason to model integrations correctly and fully...&nbsp;</P><H2 id="toc-hId-1175913424">Integration is the key!</H2><P>Integration, whether it is fully Decoupled Integration based on Event-Driven Architecture principles supporting Composable Business (Architecture)[5], or some more legacy SOA[6] – in all cases, it is the <STRONG>Integration Services</STRONG> which make things “happen”.</P><P>So, it’s a glue connecting Application Blocks, right?</P><P>No, no&nbsp;– <STRONG>it’s not “just” a glue</STRONG>! It used to be a “real” glue – well, at least in legacy solutions, connecting monolithic “boxes”. But now we are Agile, we are building Systems to be “Composable”. If Applications are Blocks (one can imagine LEGO©[7] or an equivalent), we don’t want to glue Blocks, because then we cannot “recompose” our System of Blocks into something different, we cannot change Blocks, or try new Blocks…</P><P>The “glue” we use is invisible by design – <STRONG>it’s the Blocks themselves – the way we build them to “fit”</STRONG> – no matter their particular shape, size or color, they can always fit. They <STRONG>can fit “tight” but can be “disassembled”</STRONG> at any time we need to “recompose” the System of Blocks.</P><P>Sounds simple, but it’s not.</P><P>If we want that flexibility, if we want to <STRONG>avoid “fixing” with glue</STRONG> – we need to <STRONG>design and build with precision</STRONG>. And this is <STRONG>more than just the technology stack</STRONG> – this is <STRONG>also about overall strategy</STRONG> – what kind of Blocks we need to connect now, or in the future. How do we “craft” integrations so everything can fit – not only now, but in the foreseeable future as well?</P><H2 id="toc-hId-979399919">Who are we?</H2><P>We are Developers – yes (this is our usual background)! We are Engineers – yes (most certainly)!
But we are seasoned Architects – yes as well, very seasoned (if I may add)!</P><P>We are versatile Architects who need to know, or at least understand, so many different “things” – not only integration technologies, but Data Models, Applications, even Business Processes. After all, we are not integrating Applications and Systems, we are integrating Business (Processes).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 6. Work of EA for Integration" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/341709i8D2382C98625EB8A/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 6. Work of EA for Integration.jpg" alt="Figure 6. Work of EA for Integration" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 6. Work of EA for Integration</span></span></P><P>We can be Solution Architects or Enterprise Architects, just as in any domain – but we must be able to understand much more than “just” integration technology.</P><P>Is our work perceived that way? Well, this is a different story…</P><H2 id="toc-hId-782886414">And again – Integration Services…</H2><P>I am being subjective – yes. But haven’t I proved how important integration is?</P><P>Is integration work seen and valued like any other Enterprise Architecture work? Very rarely, as integration is usually seen as a very technical discipline, part of solutioning, not strategy… There are exceptions (obviously)…&nbsp;</P><P>Maybe we need to raise awareness and change the mindset – proper integration modeling and design is the key to success in many initiatives&nbsp;– and proper modeling has to come as a result of the <STRONG>Integration Strategy</STRONG>, not just "technical" solutioning on a case-by-case basis.&nbsp;&nbsp;</P><P>Am I giving too much “credit” to integrations?
Yes, but with a reason… Just think, how many projects failed or were delayed because integrations were not properly designed or implemented? And this (usually) has nothing to do with “bad” integration developers and integration engineers…&nbsp;</P><P>Does this resonate?</P><H2 id="toc-hId-586372909">Acknowledgment</H2><P>*) Intro image generated by AI.</P><H2 id="toc-hId-389859404">References</H2><P>[1] SAP LeanIX: <A href="https://www.leanix.net/en/" target="_blank" rel="noopener nofollow noreferrer">https://www.leanix.net/en/</A></P><P>[2] C4 model: <A href="https://c4model.com/" target="_blank" rel="noopener nofollow noreferrer">https://c4model.com/</A></P><P>[3] SAP LeanIX Interface Modeling Guidelines: <A href="https://help.sap.com/docs/leanix/ea/interface-modeling-guidelines" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/leanix/ea/interface-modeling-guidelines</A></P><P>[4] SAP LeanIX AI agent hub: <A href="https://www.leanix.net/en/ai-agent-hub-in-sap-leanix" target="_blank" rel="noopener nofollow noreferrer">https://www.leanix.net/en/ai-agent-hub-in-sap-leanix</A></P><P>[5] Composable Architecture: <A href="https://community.sap.com/t5/technology-blog-posts-by-members/what-is-composable-architecture/ba-p/13889670" target="_blank">https://community.sap.com/t5/technology-blog-posts-by-members/what-is-composable-architecture/ba-p/13889670</A></P><P>[6] SOA: <A href="https://community.sap.com/t5/enterprise-architecture-blog-posts/agile-ea-from-soa-to-interoperability/ba-p/225234" target="_blank">https://community.sap.com/t5/enterprise-architecture-blog-posts/agile-ea-from-soa-to-interoperability/ba-p/225234</A></P><P>[7] LEGO©: <A href="https://www.lego.com/" target="_blank" rel="noopener nofollow noreferrer">https://www.lego.com/</A></P> 2025-11-16T17:39:36.351000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/partnering-for-progress-accelerate-your-move-to-sap-integration-suite/ba-p/14271374 Partnering for Progress:
Accelerate Your Move to SAP Integration Suite 2025-11-17T21:08:53.578000+01:00 AutumnM https://community.sap.com/t5/user/viewprofilepage/user-id/44465 <P>When I first started working with SAP’s <STRONG>Migration Factory program</STRONG>, I quickly realized just how big of a shift many organizations were facing.<BR />For years, SAP Process Integration (PI) and SAP Process Orchestration (PO) have been the backbone of countless businesses’ integration landscapes. But with mainstream maintenance ending in 2027, the question everyone’s asking is: <EM>“What’s next?”</EM></P><P>And the answer is clear — <STRONG>SAP Integration Suite</STRONG> on <STRONG>SAP Business Technology Platform (BTP)</STRONG>.<BR />It’s modern, scalable, and built to meet the needs of today’s connected, cloud-first world. But let’s be honest — migration isn’t just about technology. It’s about people, collaboration, and having the right partners by your side.</P><P>That’s where <STRONG>Migration Factory 2.0</STRONG> comes in.</P><HR /><H3 id="toc-hId-1894564710"><STRONG>Building a Stronger Ecosystem — Together</STRONG></H3><P>What excites me most about Migration Factory 2.0 is that it’s not just a framework — it’s a <EM>movement</EM>.<BR />We’re bringing together SAP and our partners under one unified goal: to help customers modernize with <STRONG>speed, confidence, and shared expertise</STRONG>.</P><P>Through this program, partners get the enablement and resources they need to deliver migrations effectively, and customers get access to a trusted ecosystem that’s been trained, certified, and supported directly by SAP.</P><P>It’s truly a “win-win-win” — for SAP, our partners, and most importantly, our customers.</P><HR /><H3 id="toc-hId-1698051205"><STRONG>Behind the Program: Enablement, Execution, and Engagement</STRONG></H3><P>At its core, Migration Factory 2.0 is built on three simple but powerful pillars:</P><UL><LI><P><STRONG>Enablement:</STRONG> We equip partners with the latest training, 
certifications, and migration tooling — including our Integration Suite Black Belt 2.0 program.</P></LI><LI><P><STRONG>Execution:</STRONG> We support them with proven methodologies, automation frameworks, and best practices developed alongside SAP experts.</P></LI><LI><P><STRONG>Engagement:</STRONG> We celebrate their success through co-marketing, events, and storytelling opportunities that amplify their impact across the SAP ecosystem.</P></LI></UL><P>Every one of these pillars is about empowering collaboration — because success in integration isn’t achieved in isolation.</P><HR /><H3 id="toc-hId-1501537700"><STRONG>Why Partners Are the Game Changers</STRONG></H3><P>Working closely with our partners, I’ve seen firsthand the dedication, innovation, and creativity they bring to every customer engagement.<BR />Migration Factory 2.0 gives them more than just visibility — it gives them a <EM>platform to shine</EM>.</P><P>Through the program, partners can:</P><UL><LI><P>Earn SAP’s <STRONG>Partner Badge for SAP PO/SAP Integration Suite Modernization</STRONG></P></LI><LI><P>Be featured on our <STRONG>Migration Factory Partner Listing</STRONG> on SAP.com</P></LI><LI><P>Participate in <STRONG>co-marketing campaigns, webinars, and video series</STRONG> like <EM>Integration Situation</EM></P></LI><LI><P>Receive <STRONG>qualified migration assessments</STRONG> from SAP</P></LI><LI><P>Connect with other partners and SAP experts through <STRONG>enablement sessions and community collaboration spaces</STRONG></P></LI></UL><P>It’s inspiring to see how quickly our partner community has embraced this journey — not just as a business opportunity, but as a shared mission to help customers move forward.</P><HR /><H3 id="toc-hId-1305024195"><STRONG>Helping Customers Choose the Right Partner</STRONG></H3><P>For customers, we’ve made it easy to find the right expertise through the <STRONG>Migration Factory Partner Listing</STRONG> on SAP.com.<BR />Each partner listed has been enabled 
through SAP’s framework and offers migration assessments, readiness checks, or implementation services tailored to customer needs.</P><P>Whether you’re in Europe, Asia, or North America — you can now connect directly with a partner who understands your regional and industry-specific challenges.</P><HR /><H3 id="toc-hId-1108510690"><STRONG>My Favorite Part: The Collaboration</STRONG></H3><P>One of my favorite things about being part of this initiative is watching new partnerships form — partners collaborating with each other, learning from SAP experts, and even co-creating new migration approaches together.<BR />It’s a reminder that technology may power transformation, but <STRONG>people make it happen</STRONG>.</P><P>Every week, I see stories from partners who have taken what they learned from Migration Factory enablement sessions and turned it into real customer impact. That’s what keeps me passionate about this program — it’s not theory, it’s <EM>transformation in action</EM>.</P><HR /><H3 id="toc-hId-911997185"><STRONG>Let’s Move Forward — Together</STRONG></H3><P>If you’re a <STRONG>partner</STRONG>, now’s the time to get involved.<BR />Join the Migration Factory 2.0 program, earn your certifications, and showcase your success stories to the global SAP community.</P><P>If you’re a <STRONG>customer</STRONG>, explore the Migration Factory Partner Listing on SAP.com to connect with certified experts who can help you plan your move from SAP PI/PO to SAP Integration Suite.</P><P>The future of integration is here — and it’s collaborative, connected, and cloud-driven.<BR />Together, we’re <STRONG>accelerating integration modernization — one connection at a time.</STRONG></P> 2025-11-17T21:08:53.578000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/maximising-ai-potential-a-blueprint-for-business-success/ba-p/14281058 Maximising AI Potential: A Blueprint for Business Success 2025-12-01T13:11:19.101000+01:00 MIKE210 
https://community.sap.com/t5/user/viewprofilepage/user-id/1952764 <P class="lia-align-justify" style="text-align : justify;"><U><STRONG>Introduction</STRONG></U></P><P class="lia-align-justify" style="text-align : justify;">In an era defined by rapid technological advancement, businesses face both significant challenges and valuable opportunities as they pursue digital transformation and integrate AI-driven solutions. As organisations work to maintain a competitive edge, AI’s impact on business strategy, customer experience, and operational efficiency has become increasingly pivotal.</P><P class="lia-align-justify" style="text-align : justify;">As part of my recent doctoral studies in Digitalisation, specialising in Technology Adoption and AI Integration at IAE Nice, Graduate School of Management, Université Côte d’Azur, I explored this evolving landscape in depth through research focused on SAP customers. I am happy to share the key findings through a series of insightful and practical articles, each offering guidance for both SAP customer leadership and SAP executives navigating the complexities of AI adoption and technology transformation.</P><OL><LI>Maximising AI Potential: A Blueprint for Business Success <STRONG>(THIS ARTICLE)</STRONG></LI><LI><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/roadblocks-to-ai-adoption/ba-p/14281074#M186776" target="_self">Roadblocks to AI Adoption</A></LI><LI><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/is-sap-business-data-cloud-the-answer-to-your-ai-ambitions/ba-p/14268363" target="_blank">Is SAP Business Data Cloud the Answer to Your AI Ambitions?</A></LI></OL><P class="lia-align-justify" style="text-align : justify;">In an era marked by rapid technological advancements, businesses face the dual challenge and opportunity of integrating AI-driven solutions.
As organisations strive for competitive advantage, the role of AI in transforming business strategies, enhancing customer experiences, and driving operational efficiency cannot be overstated. This blog explores effective strategies for helping customers navigate the fast-changing AI world, drawing insights from my doctoral research, which aims to develop a robust theoretical model and managerial blueprint for executive technology transformation and change management.</P><P class="lia-align-justify" style="text-align : justify;"><U><STRONG>The Role of AI in Business</STRONG></U></P><P class="lia-align-justify" style="text-align : justify;">Over recent decades, AI has revolutionised how businesses operate, driving personalised marketing, strategic planning, and data-driven insights. Companies now rely on AI-driven chatbots for customer interaction, automated processes for operational efficiency, and data analytics for strategic decisions. The increasing digitisation leads to the generation of vast amounts of data, making advanced digital technology indispensable for extracting value from this data. However, this shift also presents challenges, as many businesses struggle to adopt new technologies swiftly enough to remain competitive. Furthermore, technology vendors face the challenge of marketing their innovative solutions effectively and bridging the gap between innovation and adoption.</P><P class="lia-align-justify" style="text-align : justify;"><U><STRONG>Key Factors for Successful AI Adoption</STRONG></U></P><P class="lia-align-justify" style="text-align : justify;">Key findings highlight the importance of strategic alignment, collaborative change management, cost considerations, customer experience, and vendor support in the successful adoption of AI technologies. Businesses must ensure AI solutions are in line with their strategic goals to add value and sustainability.
Handling change effectively is crucial, involving stakeholders early and prioritising leadership support to cultivate an innovative culture. Financial implications are essential in decision-making, making it vital for organisations to perform detailed cost-benefit analyses to assure profitable returns on AI investments. Enhancing customer experience through AI-driven solutions with user-friendly and personalised interfaces can significantly boost business success. Finally, robust vendor support and training are necessary to facilitate a smooth transition and maximise the advantages of AI technologies, aligning with organisational needs and challenges.</P><P class="lia-align-justify" style="text-align : justify;"><U><STRONG>Bridging the Gap: The Fast-Track Model</STRONG></U></P><P class="lia-align-justify" style="text-align : justify;">The Robust Technology Adoption and Contract Retention Model (Fast-Track) is a theoretical framework developed as part of my doctoral studies to provide insights into the multifaceted aspects of technology adoption and retention from both technology vendors' and users' perspectives. The model emphasises understanding and addressing barriers and enablers in three key categories: organisational, personal, and external.</P><P class="lia-align-justify" style="text-align : justify;">Drawing on established theories like the Technology Acceptance Model (TAM), Diffusion of Innovations Theory, Task-Technology Fit, and the Unified Theory of Acceptance and Use of Technology (UTAUT), the model examines how organisational vision, planning, and stakeholder management influence successful technology adoption. Additionally, it considers strategic management theories such as Kaplan &amp; Norton's Balanced Scorecard framework, which aligns technology strategies with business objectives. 
The Fast-Track model comprises two primary lifecycle outlines: the Technology Utilisation Lifecycle (for customers) and the Technology Deployment Lifecycle (for SAP/vendors). The former progresses through strategic vision setting, planning, implementation, continuous improvement, and renewal, ensuring technology aligns with evolving business needs. The latter involves market research, sales, implementation, and continuous support, highlighting the importance of understanding customer needs and maintaining engagement.</P><P class="lia-align-justify" style="text-align : justify;">Furthermore, the model highlights the significance of external factors like industry standards, adoption costs, and vendor support in shaping technology deployment strategies. By addressing these factors, both vendors and users can enhance satisfaction, competitive advantage, and the lifecycle value of technology. Personal attitudes, user training, and change management are additional aspects considered vital in mitigating resistance and enhancing technology adoption. Integrating concepts from technology acceptance models, the framework details how perceived usefulness, ease of use, and associated personal and organisational barriers impact technology uptake and retention. In essence, the Fast-Track model provides a comprehensive strategic pathway for navigating technology adoption and contract retention complexities. 
It encourages continuous engagement, adaptation, and alignment of technology with organisational goals, thus facilitating successful integration and sustained competitive advantage.</P><DIV class="">&nbsp;</DIV><P class="lia-align-justify" style="text-align : justify;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Fast Track Model.jpg" style="width: 850px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/347019i1B6B3F76FBDD3999/image-size/large?v=v2&amp;px=999" role="button" title="Fast Track Model.jpg" alt="Fast Track Model.jpg" /></span></P><P class="lia-align-justify" style="text-align : justify;"><U><STRONG>Key Findings and Analyses:</STRONG></U></P><P class="lia-align-justify" style="text-align : justify;">My doctorate research delves into the multifaceted dynamics of technology adoption and contract retention among SAP and various user industries by studying 29 SAP Customers and their Partners across diverse sectors. Utilizing Lexical analysis with NVivo software, the research categorizes and quantifies essential terms, phrases, and themes from interview transcripts, providing a structured and numerical understanding of the content with an emphasis on explicit aspects such as words and visual elements. Key findings include the pivotal role of technology perception in adoption, where strategic alignment with company goals and operational enhancement is crucial, influenced by both internal and external factors like market competition and regulatory requirements. Effective technology adoption requires strategic planning, cost management, and user acceptance, while organizational and personal barriers such as resistance to change and fear of job loss need addressing through robust change management and leadership support. Externally, factors like industry norms and regulatory compliance shape adoption strategies. 
The cost of technology emerges as a critical determinant, with companies needing to consider both direct and ancillary expenses versus strategic benefits. Positive end-user experiences significantly influence IT contract renewals, with organizations valuing user satisfaction highly. Evaluating alternative vendors involves a detailed analysis of various metrics including solution capability and cost-effectiveness. The study proposes a Fast-TRACK model emphasizing strategic vision, comprehensive planning, and continuous improvement to navigate technology adoption and retention challenges effectively. Practical implications suggest that companies should adopt a holistic approach centered on strategic alignment and robust change management, with SAP urged to demonstrate the strategic value of their solutions and engage customer stakeholders early in the adoption phase to ensure technology implementations meet long-term goals and achieve higher satisfaction and retention rates.</P><DIV class=""><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Paichart.jpg" style="width: 850px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/347028i0C1DF2C446D2B859/image-size/large?v=v2&amp;px=999" role="button" title="Paichart.jpg" alt="Paichart.jpg" /></span></DIV><P class="lia-align-justify" style="text-align : justify;"><STRONG>Conclusion</STRONG></P><P class="lia-align-justify" style="text-align : justify;">This study explores technology adoption and contract retention using semi-structured interviews and NVivo's Lexical analysis, highlighting the importance of aligning technology with business needs, effective change management, cost considerations, and strategic implementation. Key themes include the influence of perception, organizational alignment, cost, complexity, ease of use, effective change management, and vendor support on technology adoption and retention. 
The research, set within a leading software corporation, provides valuable insights and validates findings with additional data. It introduces the Fast-Track model, emphasizing strategic considerations and practical implementations for leveraging technology for competitive advantage. The study also underscores the importance of understanding external barriers and enablers, such as cost, industrial norms, competition, and vendor support. The contributions of the research include extending the Balanced Scorecard approach to incorporate digital transformation, offering managerial recommendations, and providing a comprehensive framework (Fast-Track model) for navigating technology adoption and retention. Managerial recommendations include adopting a structured approach to technology integration, fostering a culture of continuous learning, and investing in robust knowledge management systems. Future research topics suggested include the impact of customer confusion on technology adoption, testing the FUTARE model across different regions and company sizes, and its effectiveness on upselling or expanding service contracts. The study provides valuable insights and practical recommendations for organizations aiming to enhance their technology adoption strategies and retain contracts effectively.</P><P class="lia-align-justify" style="text-align : justify;">The Robust Technology Adoption and Contract Retention Model (Fast-Track) highlights the need for strategic alignment between technology vendors and users within organizational, personal, and external domains for successful technology adoption and retention. Vendors should deeply understand client structures to tailor technologies that integrate seamlessly into client workflows, enhancing operational effectiveness while maintaining compliance with industry regulations. Prospective clients must ensure these technologies align with strategic goals and existing systems. 
Proper training for users and insightful adjustment to external factors such as competitive trends are paramount. Broadly, managing the dynamics at the vision-to-planning phase involving pre-sales to sales within both organizational users and technology vendors is critical. This structured approach will help navigate through barriers and enablers, setting a solid base for adopting new technologies and retaining contracts effectively.</P><DIV class="">&nbsp;</DIV><P class="lia-align-justify" style="text-align : justify;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Digital Transformation.jpg" style="width: 850px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/347021i60813795090983EE/image-size/large?v=v2&amp;px=999" role="button" title="Digital Transformation.jpg" alt="Digital Transformation.jpg" /></span></P><P class="lia-align-justify" style="text-align : justify;">If you'd like to explore further, my full research article is<SPAN>&nbsp;</SPAN><A class="" title="available here" href="https://sap-my.sharepoint.com/:b:/p/mike_popal/EWOViZXA_oNNlQ5_Pg1UgpkBw5wG65O0nkKOcKH4NCnfdg?e=WXW90w" target="_blank" rel="noopener nofollow noreferrer">available here</A>. 
Please don't hesitate to contact me if you wish to discuss the research findings.</P><P class="lia-align-justify" style="text-align : justify;">Find my LinkedIn profile:&nbsp;<A href="https://www.linkedin.com/in/mi4po/" target="_blank" rel="noopener nofollow noreferrer">https://www.linkedin.com/in/mi4po/</A></P> 2025-12-01T13:11:19.101000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/high-volume-data-handling-in-sap-integration-suite-our-journey-to-10/ba-p/14282170 High-Volume Data Handling in SAP Integration Suite - Our Journey to 10 Million Records 2025-12-02T15:49:20.601000+01:00 naveen4796 https://community.sap.com/t5/user/viewprofilepage/user-id/13527 <P><FONT face="arial,helvetica,sans-serif"><SPAN><EM>“In the middle of difficulty lies opportunity.”</EM> – Albert Einstein</SPAN></FONT></P><P><FONT face="arial,helvetica,sans-serif"><SPAN>Handling </SPAN><STRONG>large-volume data integration</STRONG><SPAN> is something every integration specialist meets sooner or later. Recently, our team faced one of those “opportunities”—a requirement to transfer </SPAN><STRONG>around 10 million records</STRONG><SPAN> from a source system to a target application using </SPAN><SPAN>SAP <STRONG>Integration Suite (IS)</STRONG>, formerly known as SAP Cloud Platform Integration (CPI)</SPAN><SPAN>.</SPAN></FONT></P><P><FONT face="arial,helvetica,sans-serif"><SPAN>At first glance, the mapping requirements were straightforward. 
But as always, the devil hides in the details: <STRONG>the </STRONG></SPAN><STRONG>pagination behavior</STRONG><SPAN> of the OData V4 adapter, the Integration Suite processing limits, and unforeseen behavioral quirks pushed us through several attempts before landing on a robust solution.</SPAN></FONT></P><P><FONT face="arial,helvetica,sans-serif">This blog walks through:</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>What approaches we tried</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Why they failed</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>What finally worked</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Key learnings for anyone building high-volume integrations on SAP IS</SPAN></FONT></LI></UL><P><FONT face="arial black,avant garde"><STRONG>Context: The Scenario</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><SPAN>The source system exposed data through an </SPAN><SPAN>OData V4 service</SPAN><SPAN>. 
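As background, OData V4 uses server-driven paging: the service returns at most its own page size per response, together with an "@odata.nextLink" that the client must follow until it is absent. The sketch below is a minimal, self-contained Python simulation of that contract (toy data and a made-up page size; no real OData service or SAP adapter is involved):

```python
# Illustrative simulation of OData V4 server-driven pagination.
# The server decides the page size; the client can only follow
# @odata.nextLink until the server stops sending it.

PAGE_SIZE = 3           # chosen by the "server", not the client
DATA = list(range(10))  # pretend these are 10 records

def server_get(skip=0):
    """Fake OData endpoint: one server-sized page plus an optional nextLink."""
    page = DATA[skip:skip + PAGE_SIZE]
    next_skip = skip + PAGE_SIZE
    body = {"value": page}
    if next_skip < len(DATA):
        body["@odata.nextLink"] = next_skip  # real services return a URL here
    return body

def fetch_all():
    """Client loop: keep following @odata.nextLink until it disappears."""
    records, skip = [], 0
    while True:
        body = server_get(skip)
        records.extend(body["value"])
        if "@odata.nextLink" not in body:
            return records
        skip = body["@odata.nextLink"]

assert fetch_all() == DATA
```

This server-controlled contract is why, as described below, the chunk size cannot be tuned from the iFlow side when the standard adapter is used.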
A key limitation we discovered early on:</SPAN></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Pagination is fully server-controlled</SPAN><SPAN> in the OData V4 adapter</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>The page size cannot be customized in SAP Integration Suite (e.g., you cannot set it to 1,000 or 10,000) when using the standard OData V4 adapter; it&nbsp;</SPAN><SPAN>always respects the server-side limit</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Example: If the server page size is </SPAN><SPAN>50,000</SPAN><SPAN>, SAP IS will always retrieve chunks of </SPAN><SPAN>50,000 -&nbsp;</SPAN><SPAN>nothing less</SPAN></FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif">This limitation became the center of the problem.</FONT></P><P><FONT face="arial black,avant garde"><STRONG>Approach 1 — Standard OData V4 Adapter + Looping Process Call</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><SPAN>(Expected to work… reality had other plans)</SPAN></FONT></P><P><FONT face="comic sans ms,sans-serif"><STRONG>How it worked:</STRONG></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Used the OData V4 adapter as-is with server-side pagination</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Used </SPAN><SPAN>Looping Process Call</SPAN><SPAN> to iterate through pages</SPAN></FONT></LI></UL><P><FONT face="comic sans ms,sans-serif"><STRONG>What happened:</STRONG></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>If the total data was below 50K</SPAN><SPAN>, everything worked smoothly</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Above 50K</SPAN><SPAN>, the iFlow got stuck in </SPAN><SPAN>Processing</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>No errors</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>No 
termination</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Messages remained frozen for days</SPAN></FONT></LI></UL><P><FONT face="comic sans ms,sans-serif"><STRONG>Example:</STRONG></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Server page size → 50K</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Data set = 40K → Success</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Data set = 150K → iFlow stalled indefinitely</SPAN></FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif"><SPAN>Lesson: Looping + uncontrolled server pagination = unpredictable behavior.</SPAN></FONT></P><P><FONT face="arial black,avant garde"><STRONG>Approach 2 — OData V4 Adapter + Custom Pagination Logic</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><SPAN>(Better… but still not scalable enough)</SPAN></FONT></P><P><FONT face="arial,helvetica,sans-serif">This time we added:</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Custom counter checks</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Pagination logic in iFlow</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Looping until full completion</SPAN></FONT></LI></UL><P><FONT face="comic sans ms,sans-serif"><STRONG>Result:</STRONG></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Worked better than Approach 1</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Successfully processed volumes </SPAN><SPAN>under ~1 million</SPAN><SPAN> records</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Beyond that threshold → </SPAN><SPAN>iFlow stuck again</SPAN><SPAN>, still showing </SPAN><SPAN>Processing</SPAN><SPAN> but not progressing</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>No runtime error, no bottleneck in MPL — simply a silent stall</SPAN></FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif"><SPAN>Lesson: The bottleneck 
was structural, not logical.</SPAN></FONT></P><P><FONT face="arial black,avant garde"><STRONG>Approach 3 — Final Working Solution (and the hero of the story)</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><SPAN>“Simplicity is the ultimate sophistication.” — Leonardo da Vinci</SPAN></FONT></P><P><FONT face="comic sans ms,sans-serif"><STRONG>Key idea:</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><SPAN>Avoid the OData V4 adapter entirely.</SPAN></FONT></P><P><FONT face="arial,helvetica,sans-serif"><SPAN>Use the Classic HTTP Adapter with explicit control over pagination (via $top &amp; $skip).</SPAN></FONT></P><P><FONT face="arial,helvetica,sans-serif">This allowed us to fully control the data fetch size and execution flow.</FONT></P><P><FONT face="comic sans ms,sans-serif"><STRONG>Our Winning Architecture</STRONG></FONT></P><P><FONT face="comic sans ms,sans-serif"><STRONG>Flow 1: Master Controller iFlow</STRONG></FONT></P><OL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Call the source system </SPAN><SPAN>via HTTP adapter</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Retrieve </SPAN><SPAN>total record count</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Calculate the required number of batches</SPAN></FONT></LI><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Batch size = server-side max (e.g., 50K)</SPAN></FONT></LI></UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>For each batch, create a message containing only the </SPAN><SPAN>skip value</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Store skip values in </SPAN><SPAN>Data Store</SPAN></FONT></LI></OL><P><FONT face="comic sans ms,sans-serif"><STRONG>Example:</STRONG></FONT></P><pre class="lia-code-sample language-abap"><code>Total records = 1,000,000
Batch size = 50,000
Total batches = 20
Stored skip values = 0, 50000, 100000, ...</code></pre><P><BR /><FONT face="comic sans ms,sans-serif"><STRONG>Flow 2: Worker 
iFlow</STRONG></FONT></P><OL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Poll Data Store</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Read skip value</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Call API using HTTP adapter with parameters:<BR /></SPAN><SPAN>$top=50000<BR />$skip=&lt;value from datastore&gt;</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Process the dataset</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Deliver to target system</SPAN></FONT></LI></OL><P><STRONG><FONT face="arial,helvetica,sans-serif">T<FONT face="comic sans ms,sans-serif">he Big Win</FONT></FONT></STRONG></P><P><FONT face="arial,helvetica,sans-serif">This mechanism:</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Gave us </SPAN><SPAN>full control over pagination</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Completely bypassed OData V4 limitations</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Handled </SPAN><SPAN>10+ MILLION</SPAN><SPAN> records without stalls</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Achieved stable throughput</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>No permanent IFlow “Processing” states</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>No invisible bottlenecks</SPAN></FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif"><SPAN>Lesson: If the adapter limits you, take control with HTTP.</SPAN></FONT></P><P><FONT face="comic sans ms,sans-serif"><STRONG>Key Learnings</STRONG></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><SPAN>SAP Integration Suite is powerful and evolving, but </SPAN><SPAN>very high-volume scenarios still expose adapter-level limitations</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>OData V4 Adapter may not be suitable for massive pagination-driven extractions in all 
cases</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>HTTP adapter with custom pagination is a reliable fallback</SPAN></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><SPAN>Sometimes the “classic” tool is the best tool</SPAN></FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif"><SPAN>And yes—raising an SAP incident might eventually clarify the root cause, but business timelines forced us to engineer a workaround.</SPAN></FONT></P><P><FONT face="comic sans ms,sans-serif"><STRONG>Conclusion</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><SPAN>High-volume data integration is never trivial. The key is to experiment, observe system behavior, and design around limitations. Our final approach was not just functional—it was </SPAN><SPAN>scalable, predictable, and production-proof</SPAN><SPAN>.</SPAN></FONT></P><P><FONT face="arial,helvetica,sans-serif">If you’re dealing with similar large dataset transfers in SAP IS, this solution pattern might save you hours of troubleshooting.</FONT></P><P><FONT face="comic sans ms,sans-serif"><STRONG>Feedback Welcome</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">I’d love to hear your thoughts, improvements, or other patterns you’ve successfully used for large-volume integration scenarios in SAP Integration Suite.</FONT></P><P><FONT face="arial,helvetica,sans-serif">Thank you for reading!</FONT></P><P><FONT face="simsun,hei"><STRONG>I2Integration Solutions | A Startup</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">Connect in LinkedIn: <A href="https://www.linkedin.com/company/i2integrate-solutions" target="_self" rel="nofollow noopener noreferrer">I2Integrate</A></FONT></P> 2025-12-02T15:49:20.601000+01:00 https://community.sap.com/t5/artificial-intelligence-blogs-posts/sap-neo-retires-integration-developers-must-know-before-migrating-from-neo/ba-p/14280755 SAP Neo Retires – Integration Developers Must Know Before Migrating from Neo to CF 
2025-12-03T12:17:49.941000+01:00 SivaS https://community.sap.com/t5/user/viewprofilepage/user-id/38581 <P class="lia-align-justify" style="text-align : justify;">SAP has announced the <STRONG>end of maintenance for the SAP Cloud Integration Neo environment by December 31, 2028</STRONG>. This impacts customers running Cloud Integration on Neo and planning to move to the <STRONG>Cloud Foundry–based SAP Integration Suite</STRONG>.</P><P>Migrating integration flows from Neo to Cloud Foundry is <STRONG>not a simple export/import of iflows</STRONG>.<BR />Both environments differ significantly in:</P><UL><LI>Infrastructure</LI><LI>Adapter deployment</LI><LI>Logging &amp; monitoring</LI><LI>Environment variables</LI><LI>APIs</LI><LI>Security materials</LI><LI>Transport &amp; lifecycle management</LI></UL><P class="lia-align-justify" style="text-align : justify;">This blog summarizes the <STRONG>key Neo vs. Cloud Foundry differences</STRONG> you must consider before starting your CPI migration.</P><HR /><H2 id="toc-hId-1766379504"><STRONG>1. Cloud Integration Availability: Neo vs. Cloud Foundry</STRONG></H2><H3 id="toc-hId-1698948718"><STRONG>Neo</STRONG></H3><UL><LI>CPI is available as a <STRONG>standalone product</STRONG></LI><LI>Runs in SAP-managed data centers</LI><LI>Uses Neo cockpit &amp; proprietary deployment mechanisms</LI></UL><H3 id="toc-hId-1502435213"><STRONG>Cloud Foundry</STRONG></H3><UL><LI>CPI is part of the <STRONG>SAP Integration Suite capability</STRONG></LI><LI>Runs on <STRONG>hyperscalers</STRONG> (AWS, Azure, GCP)</LI><LI>Uses standard CF principles for scaling, service bindings, logging</LI></UL><P><span class="lia-unicode-emoji" title=":pushpin:">📌</span><EM>Important:</EM> Some Neo features may not be available or behave differently in CF (refer to Note <A href="https://me.sap.com/notes/2903776" target="_blank" rel="noopener noreferrer">2903776</A>).</P><H2 id="toc-hId-1176838989"><STRONG>2. 
Adapter Deployment Differences</STRONG></H2><H3 id="toc-hId-1109408203"><STRONG>Neo</STRONG></H3><UL><LI>Custom adapters deployed using <STRONG>Eclipse plugin</STRONG></LI><LI>Adapter artifacts managed in Neo cockpit</LI></UL><H3 id="toc-hId-912894698"><STRONG>Cloud Foundry</STRONG></H3><UL><LI>Custom adapters are deployed as <STRONG>Integration Adapter artifacts</STRONG></LI><LI>Managed via Cloud Integration <STRONG>Design &amp; Monitor</STRONG> applications</LI><LI>Deployment uses <STRONG>CF-based lifecycle mechanisms</STRONG></LI></UL><P><STRONG>Impact:</STRONG><BR />If you have custom adapters built for Neo, they must be rebuilt or repackaged for Cloud Foundry. Here is the help document on creating custom adapters:&nbsp;<A href="https://help.sap.com/docs/cloud-integration/sap-cloud-integration/developing-custom-adapters" target="_self" rel="noopener noreferrer">Developing Custom Adapters</A></P><H2 id="toc-hId-587298474"><STRONG>3. Logging Differences - Audit Logs &amp; Access Logs</STRONG></H2><H3 id="toc-hId-519867688"><STRONG>Neo</STRONG></H3><UL><LI>Audit logs retrieved via <STRONG>Neo Audit Log API</STRONG></LI><LI>Access logs available under “Access Logs” section</LI></UL><H3 id="toc-hId-323354183"><STRONG>Cloud Foundry</STRONG></H3><UL><LI>Audit logs available through <STRONG>SAP Audit Log Service</STRONG></LI><LI>Access logs integrated with <STRONG>Application Logs</STRONG></LI><LI>Monitoring done via Cloud Integration Monitor</LI></UL><P><STRONG>Impact:</STRONG><BR />Integrations relying on Neo-specific APIs must be redesigned for CF logging services. Here is the help document on audit logging in the CF environment:&nbsp;<A href="https://help.sap.com/docs/btp/sap-business-technology-platform/audit-logging-in-cloud-foundry-environment" target="_self" rel="noopener noreferrer">Audit Logging in CF</A>&nbsp;</P><H2 id="toc-hId--2242041"><STRONG>4. 
Environment Variables - Critical Migration Topic</STRONG></H2><P class="lia-align-justify" style="text-align : justify;"><SPAN>We can use environment variables in integration flows to address technical details such as the region or port where&nbsp;</SPAN><SPAN class="">Cloud Integration</SPAN><SPAN>&nbsp;is deployed.</SPAN></P><P class="">The following table provides the mapping of environment variables in Neo and Cloud Foundry.</P><P class="">&nbsp;</P><DIV class=""><DIV class=""><DIV class=""><TABLE border="1" width="100%"><TBODY><TR><TD width="33.333333333333336%" height="30px"><FONT color="#666699"><STRONG>Variable Name - Neo</STRONG></FONT></TD><TD width="33.333333333333336%" height="30px"><FONT color="#666699"><STRONG>Variable Name - CF</STRONG></FONT></TD><TD width="33.333333333333336%" height="30px"><FONT color="#666699"><STRONG>Description</STRONG></FONT></TD></TR><TR><TD width="33.333333333333336%"><P class="">HC_APPLICATION</P><P class="">Example value:<SPAN>&nbsp;</SPAN>abcd01iflmap</P></TD><TD width="33.333333333333336%"><P class="">TENANT_NAME</P><P class="">Example value:<SPAN>&nbsp;</SPAN>xyz001</P></TD><TD width="33.333333333333336%"><P class="">Sub domain of worker application (associated with application identifier for worker node)</P></TD></TR><TR><TD width="33.333333333333336%"><P class="">HC_APPLICATION_URL</P><P class="">Example value:<SPAN>&nbsp;</SPAN>abcd01iflmap.uvwxy.eu1.hana.ondemand.com</P></TD><TD width="33.333333333333336%"><P class="">TENANT_NAME + IT_SYSTEM_ID + IT_TENANT_UX_DOMAIN</P><P class="">Example value:<SPAN>&nbsp;</SPAN>xyz001.it-cpi001.cfapps.eu10.hana.ondemand.com</P></TD><TD width="33.333333333333336%"><P class="">URL of the worker application sub domain</P></TD></TR><TR><TD width="33.333333333333336%"><P class="">HC_HOST</P><P class="">Example value:<SPAN>&nbsp;</SPAN>hana.ondemand.com</P></TD><TD width="33.333333333333336%"><P class="">IT_TENANT_UX_DOMAIN</P><P class="">Example 
value:<SPAN>&nbsp;</SPAN>cfapps.eu10.hana.ondemand.com</P></TD><TD width="33.333333333333336%"><P class="">Base URL of the SAP BTP region host where the application is deployed</P></TD></TR><TR><TD width="33.333333333333336%"><P class="">HC_LOCAL_HTTP_PORT</P><P class="">Example value:<SPAN>&nbsp;</SPAN>9001</P></TD><TD width="33.333333333333336%"><P class="">PORT</P><P class="">Example value:<SPAN>&nbsp;</SPAN>8080</P></TD><TD width="33.333333333333336%"><P class="">HTTP port of the application bound to localhost</P></TD></TR><TR><TD width="33.333333333333336%"><P class="">HC_OP_HTTP_PROXY_HOST</P></TD><TD width="33.333333333333336%"><P class="">VCAP_SERVICES</P></TD><TD width="33.333333333333336%"><P class="">Host of the HTTP Proxy for on-premise connectivity</P></TD></TR><TR><TD width="33.333333333333336%"><P class="">HC_OP_HTTP_PROXY_PORT</P><P class="">Example value:<SPAN>&nbsp;</SPAN>20003</P></TD><TD width="33.333333333333336%"><P class="">VCAP_SERVICES</P><P class="">Example value:<SPAN>&nbsp;</SPAN>20003</P></TD><TD width="33.333333333333336%"><P class="">Port of the HTTP Proxy for on-premise connectivity</P></TD></TR><TR><TD width="33.333333333333336%"><P class="">HC_REGION</P><P class="">Example value:<SPAN>&nbsp;</SPAN>EU_1</P></TD><TD width="33.333333333333336%"><P class="">IT_TENANT_UX_DOMAIN</P><P class="">Example value:<SPAN>&nbsp;</SPAN>cfapps.eu10.hana.ondemand.com</P></TD><TD width="33.333333333333336%"><P class="">Region where the application is deployed</P></TD></TR></TBODY></TABLE><P>Here is the nice blog on how to call/get environment variable in CF Integration flows -&nbsp;<A href="https://community.sap.com/t5/technology-blog-posts-by-members/identify-sap-cloud-integration-tenant-stage-at-runtime/ba-p/13573667" target="_self">identify-sap-cloud-integration-tenant-stage-at-runtime</A>&nbsp;</P><H2 id="toc-hId-148498811"><STRONG>5. 
OData API Access Considerations</STRONG></H2><H3 id="toc-hId--341417701"><STRONG>Neo</STRONG></H3><UL><LI>Uses Neo-based OAuth clients</LI><LI>API base URLs differ</LI><LI>Some OAuth grant types are Neo-specific</LI></UL><H3 id="toc-hId--537931206"><STRONG>Cloud Foundry</STRONG></H3><UL><LI>Requires enabling API clients via <STRONG>XSUAA</STRONG></LI><LI>Uses Integration Suite base URLs</LI><LI>Authentication flows must be updated</LI></UL><P><STRONG>For example:</STRONG><BR />If you use CPI OData APIs for CI/CD, monitoring, or automation, the endpoints and tokens must be regenerated. Here is the help document with more details on inbound communication setup -&nbsp;<A href="https://help.sap.com/docs/cloud-integration/sap-cloud-integration/connection-setup-for-inbound-communication-for-api-clients" target="_self" rel="noopener noreferrer">connection setup for inbound communication for api clients</A>&nbsp;</P><H2 id="toc-hId--441041704"><STRONG>6. Transport Management &amp; Lifecycle Changes</STRONG></H2><P class="">For the transport of integration content across different tenants, different options are available:</P><UL class="lia-align-justify" style="text-align : justify;"><LI>Manual export and import</LI><LI>Usage of CTS+</LI><LI>Usage of the cloud-based Transport Management</LI></UL><P class="">These options are identical regardless of the environment (Cloud Foundry or Neo). However, setting up transport management using the cloud-based Transport Management service differs between the two environments.</P><H2 id="toc-hId--637555209"><STRONG>7. 
Security Materials (Certificates, Keys, OAuth)</STRONG></H2><H3 id="toc-hId--1127471721"><STRONG>Neo</STRONG></H3><UL><LI>Keystore stored and managed directly inside Neo cockpit</LI><LI>OAuth clients created using Neo administration tools</LI></UL><H3 id="toc-hId--1323985226"><STRONG>Cloud Foundry</STRONG></H3><UL><LI>Uses <STRONG>Service Instances</STRONG> for trust stores</LI><LI>Certificates and keys must be re-imported</LI><LI>OAuth clients created via <STRONG>Integration Suite</STRONG></LI></UL><P><STRONG>Impact:</STRONG><BR />Rebuild all security configurations - especially if you use SFTP, AS2, OData, or SOAP with certificates.</P><H2 id="toc-hId--1227095724"><STRONG>8. Custom Scripting Adjustments (Groovy/JavaScript)</STRONG></H2><P>Some scripts refer to Neo-specific:</P><UL><LI>Endpoints</LI><LI>Log files</LI><LI>Environment variables</LI><LI>Adapter classes</LI><LI>Tenant metadata</LI></UL><P>These must be refactored to support CF equivalents.</P><H2 id="toc-hId--1423609229"><STRONG>9. Monitoring Experience Differences</STRONG></H2><H3 id="toc-hId--1913525741"><STRONG>Neo</STRONG></H3><UL><LI>Heavy reliance on Neo cockpit</LI><LI>Simplified logging UI</LI><LI>Access logs separate from application logs</LI></UL><H3 id="toc-hId--1941855555"><STRONG>CF</STRONG></H3><UL><LI>CPI Monitor in Integration Suite</LI><LI>Centralized message processing logs</LI><LI>Audit logging via SAP Audit Log Service</LI><LI>Better integrations with external monitoring tools</LI></UL><P><STRONG>Impact:</STRONG><BR />Monitoring dashboards and alerting need reconfiguration.</P><HR /><H2 id="toc-hId--1844966053"><STRONG>Conclusion</STRONG></H2><P>Migrating SAP Cloud Integration from Neo to Cloud Foundry is a strategic, time-sensitive initiative as Neo retires in 2028. 
However, the migration is <EM>not</EM> a technical copy-paste.<BR />Neo and Cloud Foundry differ significantly in:</P><UL><LI>Adapter deployment</LI><LI>Logging &amp; audit mechanisms</LI><LI>Environment variables</LI><LI>OAuth &amp; API access</LI><LI>Transport landscape</LI><LI>Runtime services</LI><LI>Security configuration</LI></UL><P>By understanding these environment-specific differences early - and adjusting integration flows, scripts, endpoints, and credentials accordingly - you can avoid migration failures, reduce manual fixes, and ensure a smooth transition to the SAP Integration Suite on Cloud Foundry.</P><P>If planned well, the Cloud Foundry environment offers:</P><UL><LI>Better scalability</LI><LI>CI/CD readiness</LI><LI>Modern logging and monitoring</LI><LI>Hyperscaler flexibility</LI><LI>Improved integration lifecycle governance</LI></UL><P>The best time to start preparing is now.</P></DIV></DIV></DIV> 2025-12-03T12:17:49.941000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/espresso-bites-clean-core-integration-based-on-an-odata-example-for-sap-aif/ba-p/14287148 Espresso bites – Clean core Integration based on an OData example for SAP AIF monitoring 2025-12-10T08:21:30.976000+01:00 RobertSchilling https://community.sap.com/t5/user/viewprofilepage/user-id/121877 <P><SPAN>By Robert Schilling &amp; Bertrand Henkel.</SPAN></P><P><SPAN>As Enterprise Architects for SAP RISE customers, we guide organizations through business transformation and cloud adoption. 
The clean core integration principle, developed by SAP for customers with large and complex SAP landscapes, enables hybrid architecture scenarios that integrate essential elements and support a phased transition to the cloud.</SPAN></P><P><SPAN>For an initial introduction to clean core principles, we recommend the following learning resource: <A href="https://learning.sap.com/learning-journeys/managing-clean-core-for-sap-s-4hana-cloud" target="_blank" rel="noopener noreferrer"><STRONG>Managing Clean Core for SAP S/4HANA Cloud</STRONG></A>. Here, we demonstrate how to evaluate and apply clean core principles to ERP systems to maximize business process agility, reduce adaptation efforts, and accelerate innovation. The course is designed for beginners, takes two hours, and is free of charge.</SPAN></P><P><SPAN>In general, clean core integration focusses on four pillars – please follow up with the <A href="https://dam.sap.com/mac/u/a/cph8WsF?rc=10&amp;doi=SAP1219405" target="_blank" rel="noopener noreferrer">official documentation</A> or other blog entries like <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/future-proofing-enterprise-agility-with-sap-integration-suite-and-clean/ba-p/14272078" target="_blank">Future-Proofing Enterprise Agility with SAP Integration Suite and Clean Core Principles</A></SPAN></P><P><SPAN>Technically speaking, the clean core integration principle focusses on these main components:</SPAN></P><UL><LI><SPAN>Establishing an API-led strategy and event-driven architecture</SPAN></LI><LI><A href="https://www.sap.com/products/technology-platform/integration-suite/capabilities.html" target="_blank" rel="noopener noreferrer"><STRONG><SPAN>SAP Integration Suite</SPAN></STRONG></A><SPAN> on SAP BTP,</SPAN></LI><LI><SPAN>Enabling operational excellence for end-to-end monitoring using </SPAN></LI><UL><LI><SPAN>the </SPAN><A href="https://support.sap.com/en/alm/sap-cloud-alm/operations/expert-portal/integration-monitoring.html" target="_blank" 
rel="noopener noreferrer"><STRONG><SPAN>Integration &amp; Exception Monitoring capability</SPAN></STRONG></A><SPAN> of SAP Cloud Application Lifecycle Management (SAP Cloud ALM).</SPAN></LI><LI><SPAN>The monitoring capability of </SPAN><A href="https://www.sap.com/products/technology-platform/application-interface-mgmt.html" target="_blank" rel="noopener noreferrer"><STRONG><SPAN>SAP Application Interface Framework</SPAN></STRONG></A><SPAN> (SAP AIF) on SAP S/4HANA,</SPAN></LI><LI><SPAN>The monitoring capabilities of SAP Integration Suite on SAP BTP</SPAN></LI></UL></UL><P><SPAN>Thanks to SAP's progress, many integration scenarios are already available within this solution architecture. In our daily business as Enterprise Architects, we are often asked: “How do I activate these SAP standard scenarios?” and “What should I consider for my custom-built interfaces?”</SPAN></P><P><SPAN>Therefore, this blog focuses on the reference architecture for the monitoring capabilities for OData Services of <STRONG>SAP Application Interface Framework</STRONG> following the clean core integration principle.</SPAN></P><P><STRONG><SPAN>&nbsp;</SPAN></STRONG><STRONG><SPAN>How do I activate SAP AIF Monitoring &amp; Error Handling for SAP standard interfaces for SAP CALM integration?</SPAN></STRONG></P><P><SPAN>To do this, the corresponding SAP AIF scenario must be activated. 
In the SAP documentation (<A href="https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/91af7f8d3acd47da90d33aaacfcd0d59/43c43f584eff2160e10000000a44147b.html?locale=en-US&amp;state=TEST&amp;version=2025.000&amp;q=AIF+Purchase+Order" target="_blank" rel="noopener noreferrer">see example: Purchase Requisition – OData V2</A>) for each interface, the available AIF scenario is listed, which can then be activated using the transaction "AIF Content Transport – Deploy /AIF/CONTENT_EXTRACT".</SPAN></P><P><SPAN>If SAP standard interfaces are not listed as a scenario, SAP asks you to submit a corresponding <A href="https://influence.sap.com/sap/ino/#/campaign/2282" target="_blank" rel="noopener noreferrer">SAP Customer</A> Influence Request for the relevant interface, like other clean core topics. In general, you can also check the table /AIF/ICD_DATA, which underlies the transaction /AIF/CONTENT_EXTRACT.</SPAN></P><P><STRONG><SPAN>In the meantime - How to handle SAP Standard Interfaces, where no SAP AIF Scenario is available?</SPAN></STRONG></P><P><SPAN>At present, not all SAP Standard OData Interfaces—such as those from the <A href="https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE/9a02a02d849d4b38a7320d94a71d2a22/13d40bd35fc74d289e81fc284a928448.html?locale=en-US&amp;q=API+Maintenance+Management&amp;version=LATEST" target="_blank" rel="noopener noreferrer">Maintenance Management area</A>—come with available SAP AIF Scenarios. So, what’s the approach for enabling SAP Cloud ALM Integration and Exception Monitoring in these cases?</SPAN></P><P><SPAN>These specific SAP Standard Interfaces are handled <A href="https://help.sap.com/docs/ABAP_PLATFORM_NEW/68bf513362174d54b58cddec28794093/c4f6ff5082d2793ee10000000a423f68.html?locale=en-US&amp;q=API+Maintenance+Management&amp;version=LATEST" target="_blank" rel="noopener noreferrer">through SAP Gateway functionalities</A>. 
You can monitor them directly in SAP S/4HANA using the SAP Gateway Performance Monitor (/IWFND/MONITOR) and review issues in the <A href="https://help.sap.com/docs/ABAP_PLATFORM_NEW/68bf513362174d54b58cddec28794093/0ff5ff5082d2793ee10000000a423f68.html?locale=en-US&amp;q=API+Maintenance+Management&amp;version=LATEST" target="_blank" rel="noopener noreferrer">SAP Gateway Error Logs (/IWFND/ERROR_LOG)</A>. Currently, with SAP Cloud ALM, only data from the Error Log can be transferred to SAP Cloud ALM Integration &amp; Exception Monitoring. For more details, please refer to the following section: <A href="https://support.sap.com/en/alm/sap-cloud-alm/operations/expert-portal/integration-monitoring/calm-s4-privcloud.html?anchorId=section_copy" target="_blank" rel="noopener noreferrer">Enable SAP ABAP Gateway Errors for SAP Cloud ALM</A>.</SPAN></P><P><STRONG><SPAN>License requirement for using SAP AIF with custom-developed interfaces:</SPAN></STRONG></P><P><SPAN>As described in <A href="https://me.sap.com/notes/2293938" target="_blank" rel="noopener noreferrer">SAP Note 2293938 – License Check for SAP Application Interface Framework</A> and in the SAP blog "<A href="https://community.sap.com/t5/technology-blog-posts-by-sap/sap-application-interface-framework-licensing/ba-p/13399073" target="_blank">SAP Application Interface Framework – Licensing</A>", a separate SAP AIF license is required to manage custom-developed interfaces via SAP AIF. After purchasing the SAP AIF license, the AIF component AIF_GEN can be installed in the system and used for developing custom interfaces under SAP AIF. 
The necessary configuration steps for custom interfaces can be found in the relevant <A href="https://help.sap.com/docs/ABAP_PLATFORM_NEW/4db1676c3f114f119b500bd80ccd944d/a6ece0f9b54b4a2788ccd4bb97085486.html?version=LATEST&amp;locale=de-DE" target="_blank" rel="noopener noreferrer">SAP help for SAP AIF interface development</A>.</SPAN></P><P><SPAN>Further information:</SPAN></P><UL><LI><SPAN><A href="https://learning.sap.com/learning-journeys/managing-sap-application-interface-framework" target="_blank" rel="noopener noreferrer">Managing SAP Application Interface Framework</A></SPAN></LI><LI><SPAN><A href="https://learning.sap.com/learning-journeys/modernizing-integration-with-sap-integration-suite" target="_blank" rel="noopener noreferrer">Modernizing Integration with SAP Integration Suite</A></SPAN></LI><LI><SPAN><A href="https://learning.sap.com/learning-journeys/operating-with-sap-cloud-alm" target="_blank" rel="noopener noreferrer">Operating with SAP Cloud ALM</A></SPAN></LI></UL> 2025-12-10T08:21:30.976000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/idoc-with-integration-suite-advanced-event-mesh-process-integration-meets/ba-p/14290088 IDoc with Integration Suite, Advanced Event Mesh – Process Integration meets Eventing 2025-12-15T04:43:00.463000+01:00 FlorianOkos https://community.sap.com/t5/user/viewprofilepage/user-id/5536 <H2 id="toc-hId-1767296394">Introduction: Why IDoc-based Eventing matters</H2><P>Event-driven architectures have become a cornerstone for building responsive, loosely coupled SAP landscapes. With SAP Event Mesh and SAP Advanced Event Mesh (AEM), organizations can now distribute business events in near real time across SAP and non-SAP systems.</P><P>However, despite the growing availability of native business events in SAP S/4HANA, <STRONG>IDocs remain one of the most widely used integration mechanisms</STRONG> in productive SAP landscapes today. 
This is especially true in hybrid and transition scenarios such as ECC to S/4HANA conversions, RISE with SAP journeys, or coexistence setups.</P><P>This leads to a pragmatic but important question for many architects and integration teams:&nbsp;<EM>How can we enable event-driven integration when IDocs are the primary source of truth?</EM></P><P>&nbsp;</P><H2 id="toc-hId-1570782889">Motivation: Bridging Legacy Integration with Modern Eventing</H2><P>Many customers want to adopt event-driven integration patterns <STRONG>without breaking existing interfaces</STRONG> or redesigning core business processes. Replacing IDocs with native domain events is often not feasible in the short term due to:</P><UL><LI><P>Large numbers of productive IDoc interfaces</P></LI><LI><P>Business-critical dependencies on established ALE/EDI processes</P></LI><LI><P>Limited capacity to redesign integration contracts during S/4HANA or RISE projects</P></LI></UL><P>At the same time, downstream consumers increasingly expect <STRONG>event-based communication</STRONG>, lightweight payloads, and scalable publish/subscribe models instead of point-to-point messaging.</P><P>&nbsp;</P><H2 id="toc-hId-1374269384">What This Blog Covers</H2><P>In this blog, we will compare <STRONG>four technical approaches</STRONG> for enabling SAP Eventing with IDocs as a source:</P><OL><LI><P><STRONG>SAP RAP</STRONG> – modeling domain-oriented events in the backend</P></LI><LI><P><STRONG>SAP Application Interface Framework (AIF)</STRONG> – governed IDoc processing and transformation</P></LI><LI><P><STRONG>ASAPIO Event Add-on</STRONG> – purpose-built IDoc-to-event enablement</P></LI><LI><P><STRONG>SAP Integration Suite (PI-style Cloud Integration) to SAP AEM</STRONG> – middleware-driven event publication</P></LI></OL><P>The comparison focuses on <STRONG>technical, operational, and commercial KPIs</STRONG>, helping architects and integration leads decide which approach best fits their current landscape — and their long-term 
eventing strategy.</P><P>Furthermore, we plan to release a setup blog for each approach in 2026.</P><P>&nbsp;</P><TABLE width="1664"><TBODY><TR><TD width="168"><STRONG>KPI Area</STRONG></TD><TD width="267"><STRONG>Sub-KPI</STRONG></TD><TD width="445"><STRONG>SAP RAP</STRONG></TD><TD width="204"><STRONG>SAP AIF</STRONG></TD><TD width="219"><STRONG>ASAPIO Event Add-on</STRONG></TD><TD width="361"><STRONG>IDoc with Integration Suite, Process Integration &nbsp;→ SAP AEM</STRONG></TD></TR><TR><TD width="168"><STRONG>Technical</STRONG></TD><TD width="267"><STRONG>Architecture fit for eventing</STRONG></TD><TD width="445">RAP is a development model for OData/REST apps; events must be explicitly modeled and outbound connectors added.</TD><TD width="204">AIF is built to process, validate and route message-based interfaces, including IDocs, before forwarding.</TD><TD width="219">Purpose-built for IDoc capture → event broker integration; minimal coding needed.</TD><TD width="361">Uses IDoc adapters in Integration Suite (Cloud Integration) to pick up IDocs → transform → push to AEM; classic PI-style patterns. Good for hub-and-spoke landscapes.</TD></TR><TR><TD width="168">&nbsp;</TD><TD width="267"><STRONG>Payload granularity &amp; semantic model</STRONG></TD><TD width="445">High flexibility; you design domain events — can be fine-grained but requires modeling effort.</TD><TD width="204">Strong transformation/mapping capabilities; suitable for shaping IDoc content into event payloads.</TD><TD width="219">Prebuilt mapping designer enables consistent, semantically enriched payloads quickly.</TD><TD width="361">Very flexible message mapping in Cloud Integration; can output structured JSON events or IDoc-like structures.</TD></TR><TR><TD width="168">&nbsp;</TD><TD width="267"><STRONG>Payload type (Original IDoc vs. 
IDoc-like)</STRONG></TD><TD width="445">IDoc-like payload&nbsp;— RAP does&nbsp;not&nbsp;emit IDocs; outputs structured domain events that can mimic IDoc segments if designed accordingly.</TD><TD width="204">Original IDoc or IDoc-like&nbsp;— AIF can pass through raw IDocs or transform into custom IDoc-like payload structures.</TD><TD width="219">IDoc-derived payload&nbsp;— captures the IDoc, then emits a normalized JSON event retaining IDoc semantics (not the raw IDoc).</TD><TD width="361">Setup would use the standard distribution model (BD64) based on the original IDoc.</TD></TR><TR><TD width="168">&nbsp;</TD><TD width="267"><STRONG>Latency &amp; throughput</STRONG></TD><TD width="445">Not optimized for high-volume IDoc capture; requires event plumbing; suitable for domain events rather than mass IDoc streaming.</TD><TD width="204">Strong runtime for IDoc processing; can scale with tuning.</TD><TD width="219">Optimized for real-time, high-volume IDoc eventing with direct connectors.</TD><TD width="361">Medium–High depending on Cloud Integration worker capacity; good for steady volumes, less ideal for IDoc bursts.</TD></TR><TR><TD width="168">&nbsp;</TD><TD width="267"><STRONG>Development effort &amp; maintainability</STRONG></TD><TD width="445">Medium–High: event modeling + outbound logic need to be built.</TD><TD width="204">Medium: mostly configuration-driven, reduced custom coding.</TD><TD width="219">Low–Medium: mostly no-code/low-code; install, configure, map, emit.</TD><TD width="361">Low–Medium: requires integration flow development, mapping, testing, monitoring artifacts.</TD></TR><TR><TD width="168"><STRONG>Operations</STRONG></TD><TD width="267"><STRONG>Monitoring &amp; observability</STRONG></TD><TD width="445">Standard Event Monitor as part of Event Enablement Framework</TD><TD width="204">Very strong: built-in interface monitoring, message lists, business error handling.</TD><TD width="219">Built-in IDoc/event diagnostics + integration with event broker 
telemetry.</TD><TD width="361">Cloud Integration monitoring via Message Monitor + separate backend IDoc monitoring → two monitoring planes; could be added to AIF or SAP Cloud ALM.</TD></TR><TR><TD width="168">&nbsp;</TD><TD width="267"><STRONG>Error handling &amp; reprocessing</STRONG></TD><TD width="445">Requires custom retry logic; no native interface reprocessing workflow.</TD><TD width="204">Excellent: assisted corrections, reprocessing, business-user-friendly UI.</TD><TD width="219">Strong support for IDoc re-send, retries, scheduling, and alerting.</TD><TD width="361">Reprocessing via Cloud Integration retry or resending IDoc from backend; not as business-friendly as AIF.</TD></TR><TR><TD width="168"><STRONG>Commercial</STRONG></TD><TD width="267"><STRONG>Licensing / procurement</STRONG></TD><TD width="445">No new licenses; uses ABAP/RAP; development cost is the main factor.</TD><TD width="204">Included/licensed with SAP backend; configuration effort applies.</TD><TD width="219">Partner product with subscription/maintenance; reduces dev time.</TD><TD width="361">Requires Integration Suite licenses (message-based or connection-based) + AEM consumption.</TD></TR><TR><TD width="168">&nbsp;</TD><TD width="267"><STRONG>TCO &amp; time-to-value</STRONG></TD><TD width="445">Longer ramp-up due to event modeling and development effort.</TD><TD width="204">Medium TTV; faster than custom coding thanks to AIF tooling.</TD><TD width="219">Fast TTV; immediate eventing from IDocs with minimal dev.</TD><TD width="361">Fast TTV: Integration flows must be developed, deployed, and maintained; good reuse for multi-system connectivity.</TD></TR><TR><TD width="168"><STRONG>Setup Guide</STRONG></TD><TD width="267"><STRONG>How to implement it?</STRONG></TD><TD width="445">Blog will follow&nbsp;</TD><TD width="204"><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/how-to-send-out-idocs-to-advanced-event-mesh-using-sap-application/ba-p/13972680" target="_self">Setup 
Blog</A></TD><TD width="219"><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/idoc-with-integration-suite-advanced-event-mesh-using-the-event-add-on/ba-p/14290095" target="_self">Setup blog</A></TD><TD width="361"><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/idoc-with-integration-suite-advanced-event-mesh-process-integration-meets/ba-p/14290088" target="_self">Setup Blog</A></TD></TR></TBODY></TABLE><P>There is no one-size-fits-all approach to IDoc-based eventing in SAP landscapes — each option comes with distinct trade-offs across architecture, operations, and cost. While purpose-built add-ons and middleware enable fast time-to-value, backend-driven approaches provide stronger long-term alignment with domain-driven and native event models. Ultimately, the right choice depends on whether your priority is <STRONG>speed, governance, or future-proof event semantics</STRONG> on your journey toward an event-driven SAP architecture.</P><P>Stay tuned and Happy Eventing</P> 2025-12-15T04:43:00.463000+01:00 https://community.sap.com/t5/integration-blog-posts/from-pi-po-to-sap-edge-integration-cell-initial-sizing-approach-and/ba-p/14290592 From PI/PO to SAP Edge Integration Cell: Initial Sizing Approach and Planning for HA/Non-HA Setup 2025-12-16T13:34:19.679000+01:00 Muhammet_Tenbih https://community.sap.com/t5/user/viewprofilepage/user-id/1402365 <H1 id="toc-hId-1638218505">SAP Edge Integration Cell- Sizing Approach&nbsp;</H1><P>Planning the migration from SAP PI/PO to the SAP Integration Suite requires accurate infrastructure sizing for the Edge Integration Cell.</P><P>In this post, I demonstrate a practical sizing approach by applying the official <STRONG><A href="https://help.sap.com/docs/integration-suite/sap-integration-suite/sizing-guidelines" target="_blank" rel="noopener noreferrer">Sizing Guide</A>&nbsp;</STRONG>and SAP Note&nbsp;<SPAN><A href="https://me.sap.com/notes/3247839" target="_self" rel="noopener 
noreferrer">3247839</A>&nbsp;</SPAN> (Prerequisites) to a real-world scenario.</P><P>By analyzing PI/PO Message Performance, this blog post helps you determine the necessary hardware resources to ensure a stable foundation for your migration to SAP Edge Integration Cell. &nbsp;Furthermore, I will outline the key differences between High Availability (HA) and Non-HA setups, highlighting the critical factors you need to consider when choosing between these two setups.&nbsp;</P><P>&nbsp;</P><H1 id="toc-hId-1441705000">&nbsp;&nbsp;&nbsp;</H1><H2 id="toc-hId-1374274214"><U>1. Input for Sizing – Business Throughput</U></H2><P>Business Throughput&nbsp;is the number of business transactions or messages an SAP system processes within a given time, usually specified as messages or transactions per hour or second. It is a key metric in sizing because it determines the necessary CPU, memory, and other resources to handle the expected workload effectively.</P><P>Sizing with Business Throughput involves converting hourly throughput to messages per second to account for load distribution and peak times. SAP uses benchmarks like SAPS (SAP Application Performance Standard) to translate business throughput into hardware requirements.</P><P>In summary: Business Throughput quantifies the expected transactional load and is the foundation for planning the infrastructure capacity to ensure optimal SAP system performance and stability.</P><P>&nbsp;</P><P>In our case, we calculate the&nbsp;SAP PI/PO Business Throughput, which measures the volume of messages or transactions processed by the Process Integration and Process Orchestration system within a defined timeframe. This calculation is essential for sizing the infrastructure to support the expected load and ensure optimal system performance<SPAN>.</SPAN></P><H3 id="toc-hId-1306843428"><U>1.1. 
Scope</U></H3><P>The following components of the EIC will be considered for sizing:</P><UL><LI>Worker: Executes integration flows within the Integration Suite runtime, built on Apache Camel. Responsible for both synchronous and asynchronous message processing.</LI><LI>Policy Engine: Envoy-based runtime with SAP-specific extensions, enforces policies such as traffic management and security for API proxies.</LI><LI>PostgreSQL Database: Relational database system for storing structured metadata and logging data generated during integration scenarios.</LI><LI>Redis: In-memory data store used primarily for caching and storing operational data, including support for API traffic management policies.</LI><LI>Message Service: Implements asynchronous integration patterns using the JMS protocol and manages local event processing.</LI></UL><P>&nbsp;</P><H2 id="toc-hId-981247204"><U>2. Required input factors&nbsp;</U></H2><P>The following factors are required:</P><UL><LI>Number of messages per second</LI><LI>Size of average payload</LI><LI>Complexity of Integration flow</LI></UL><P>&nbsp;</P><P><STRONG><U>Number of messages per second</U></STRONG></P><P>When determining the number of messages per second for sizing purposes, it is important to clarify and standardize the assumptions:</P><UL><LI>The “number of messages per second” should refer to the maximum throughput during peak periods, not an average across the year.</LI><LI>Identify the peak time range (for example, analyze which month has the highest message volume). 
Subsequently, break the peak month down into the number of messages per second and consider the average payload size for that period.</LI><LI>When calculating, consider the following questions:</LI><UL><LI>Should weekends be included in the calculation, or is message traffic significantly lower?</LI><LI>Should sizing be based on 24-hour operation, or does your existing PI/PO system operate during defined business hours only (e.g., 8 hours per day)?</LI></UL></UL><P>This approach ensures that capacity is planned according to a realistic worst-case load, which is recommended to guarantee stable system performance even during short-lasting throughput peaks.</P><H2 id="toc-hId-784733699"><U>3. Delivered Messages Statistics – Performance Monitoring</U></H2><P><U><STRONG>Crucial Assumption – Peak Period:</STRONG></U></P><P>For the analysis and sizing calculation, message statistics tables were directly extracted from <EM>SAP PI/PO Performance Monitoring</EM>. The available week of message data was chosen as the reference peak period. 
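To make the peak-load derivation concrete, here is a minimal sketch (the function name and the message count are illustrative assumptions of ours, not figures from this system) of converting a peak-period message count into messages per second while respecting the operating-hours question raised above:

```python
def peak_messages_per_second(messages_in_peak_period: int,
                             days: int,
                             operating_hours_per_day: float) -> float:
    """Convert a peak-period message count into a messages-per-second rate,
    counting only the hours in which the interfaces actually run."""
    seconds = days * operating_hours_per_day * 3600
    return messages_in_peak_period / seconds

# Hypothetical example: ~338,700 messages observed in one week of 24-hour operation
print(round(peak_messages_per_second(338_700, days=7, operating_hours_per_day=24), 2))  # 0.56

# The same weekly volume restricted to 8 business hours on weekdays
print(round(peak_messages_per_second(338_700, days=5, operating_hours_per_day=8), 2))  # 2.35
```

The sketch makes explicit why the weekend and business-hours questions matter: the same weekly volume yields a roughly four-times-higher sizing input when the system only runs eight hours on weekdays.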
Because only one week of message statistics was available at the time of analysis, all sizing calculations are based on this week—used here as the reference "peak period."</P><P>We take this approach because data on possibly higher peaks is not yet accessible.</P><P>In this way, the system is planned to handle at least the highest observed workload, safeguarding against typical peak scenarios.</P><P>To make the evaluation as clear and practical as possible, the message data was systematically split into three tables:</P><OL><LI>An&nbsp;<STRONG><SPAN>All Messages </SPAN></STRONG><STRONG>Statistics Table</STRONG> capturing all messages from the week, combining both synchronous (real-time) and asynchronous (delayed/queued) communications.</LI><LI>A&nbsp;<STRONG>Synchronous Messages Statistics Table</STRONG>&nbsp;including only real-time message exchanges, where instant system-to-system response is required.</LI><LI>An&nbsp;<STRONG>Asynchronous Message Statistics Table</STRONG>&nbsp;gathering messages processed without immediate answers, reflecting queued or scheduled processes.</LI></OL><P>For each of these datasets and every reviewed time interval, two main metrics are calculated:</P><P><STRONG>Average payload size:</STRONG>&nbsp;Indicates the typical data volume per message, usually in kilobytes or megabytes.</P><P><STRONG>Average number of messages per second:</STRONG>&nbsp;Shows the sustained rate at which the system processes messages.</P><P>By distinguishing between message types and quantifying peak load figures, the resource requirements for both synchronous and asynchronous scenarios can be more accurately determined. The ultimate goal is to ensure that system capacity meets actual business needs even during the busiest periods.</P><H3 id="toc-hId-717302913"><U>3.1.&nbsp;All Messages Statistics Table including Sync and Async-Complex</U></H3><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_0-1765548243621.png" 
style="width: 919px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/351506i5479C10D872F7BA4/image-dimensions/919x299?v=v2" width="919" height="299" role="button" title="Muhammet_Tenbih_0-1765548243621.png" alt="Muhammet_Tenbih_0-1765548243621.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId-649872127"><U>a.All Messages Statistics Table -Average payload size</U></H4><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_0-1765807376895.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352141i0560F3B558F30D9D/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_0-1765807376895.png" alt="Muhammet_Tenbih_0-1765807376895.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId-453358622"><U>b.&nbsp;&nbsp;All Messages Statistics Table - Message per second</U></H4><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_1-1765807405922.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352142i975724D9C535634C/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_1-1765807405922.png" alt="Muhammet_Tenbih_1-1765807405922.png" /></span></P><P>&nbsp;</P><H3 id="toc-hId-127762398"><U>3.2 Synchronous Statistics Messages Table</U></H3><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_2-1765807484858.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352145i604881DBD958BAFD/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_2-1765807484858.png" alt="Muhammet_Tenbih_2-1765807484858.png" /></span></P><H4 id="toc-hId--437385483"><U>a.&nbsp;Synchronous Statistics Message Table - Average payload size</U></H4><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_4-1765807524690.png" style="width: 999px;"><img 
src="https://community.sap.com/t5/image/serverpage/image-id/352147iBB60D40BF78941AA/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_4-1765807524690.png" alt="Muhammet_Tenbih_4-1765807524690.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId--633898988"><U>b.&nbsp;Synchronous Statistics Message Table - Message per second</U></H4><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_5-1765807560175.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352148iC4AD9ADD7B76E5FF/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_5-1765807560175.png" alt="Muhammet_Tenbih_5-1765807560175.png" /></span></P><P>&nbsp;</P><H3 id="toc-hId--537009486"><U>3.3 Asynchronous Statistics Messages Table</U></H3><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_6-1765807580210.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352149i37B44B4D0C7BA475/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_6-1765807580210.png" alt="Muhammet_Tenbih_6-1765807580210.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId--1026925998"><U>a.&nbsp;Asynchronous Message Table - Average payload size</U></H4><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_7-1765807598143.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352150i89118531207019A8/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_7-1765807598143.png" alt="Muhammet_Tenbih_7-1765807598143.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId--1223439503"><U>b.&nbsp;Asynchronous Message Table - Message per second</U></H4><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_8-1765807611539.png" style="width: 999px;"><img 
src="https://community.sap.com/t5/image/serverpage/image-id/352151i42C8B9CAB9028E04/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_8-1765807611539.png" alt="Muhammet_Tenbih_8-1765807611539.png" /></span></P><P>&nbsp;</P><P>&nbsp;</P><H2 id="toc-hId--833146994"><U>4. Sizing Approach for Worker (Complex Integration Flow)</U></H2><P>For sizing the Worker component, all calculations are carried out using<STRONG> complex integration flows</STRONG>. Separating simple from complex flows in real-world systems is often very time-consuming and difficult because this distinction is usually not visible in the message logs.</P><P>&nbsp;</P><P>Furthermore, experience and SAP guidelines show that the actual difference in sizing results between simple and complex flows is not significant. For simplicity and to avoid unnecessary complexity, we base all calculations on complex flows. This guarantees that the system is sufficiently dimensioned for any type of integration workload.</P><P>&nbsp;</P><P><STRONG><U><SPAN>SAP Application Performance Standard (SAPS)</SPAN></U></STRONG></P><P><SPAN>The actual sizing is performed using the SAP Application Performance Standard (SAPS). 
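The two conversion rules used in the worker sizing, linear payload scaling against a reference table row and the SAPS-to-core translation, can be sketched as follows (a sketch with helper names of our own; the 100 KB reference row is an assumption matching the worked synchronous example in this post):

```python
import math

def scaled_throughput(avg_payload_kb: float,
                      throughput_msgs_per_sec: float,
                      reference_payload_kb: float) -> float:
    """Linear ratio: scale the measured message rate to the nearest
    reference payload size from the sizing table."""
    return avg_payload_kb / reference_payload_kb * throughput_msgs_per_sec

def saps_to_cores(saps: float, saps_per_core: float = 1400) -> int:
    """Translate a SAPS figure into CPU cores, rounding up (1 core ~ 1400 SAPS)."""
    return math.ceil(saps / saps_per_core)

# Synchronous worked example: 12.5 KB payload at 0.0077 msgs/s,
# scaled against a 100 KB reference row of the sizing table.
print(scaled_throughput(12.5, 0.0077, 100))  # ~0.00096 msgs/s
print(saps_to_cores(2800))                   # 2 CPU cores
```

The same helper reproduces the asynchronous case (668.67 KB at 0.56 msgs/s against a 1 MB row gives roughly 0.36 scaled messages per second).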
</SPAN></P><P><SPAN>For all worker-related calculations in this guide, we apply the rule that <STRONG>1 CPU core equals&nbsp;roughly 1400 SAPS</STRONG>.</SPAN></P><H3 id="toc-hId--1323063506"><U>4.1.&nbsp;</U><SPAN><U>Synchronous – Complex Integration Flows</U> </SPAN></H3><H4 id="toc-hId--1812980018"><U>Input</U></H4><OL><LI>Average payload size for Synchronous – Complex Integration Flow:<STRONG> 12,50KB</STRONG></LI><LI>Throughput = 0,0077 messages per second</LI></OL><P><U><SPAN>Applying linear ratio for payload size</SPAN></U></P><P><SPAN>If your actual average payload size or message throughput differs from the values in the predefined table below—which lists CPU and memory requirements for specific throughput and payload sizes—you should use linear scaling to estimate the required resources.</SPAN></P><P><SPAN>Steps to apply linear scaling:</SPAN></P><OL><LI><SPAN>Identify the payload size in the reference table&nbsp;that is nearest to your actual average payload size.</SPAN></LI><UL><LI><SPAN>This closest size is called the&nbsp;reference payload size.<BR /><BR /></SPAN></LI></UL><LI><SPAN><SPAN>Calculate the scaled message throughput&nbsp;using:<BR /><BR /></SPAN></SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_9-1765807707866.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352153iC50B6486CC769188/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_9-1765807707866.png" alt="Muhammet_Tenbih_9-1765807707866.png" /></span><P>&nbsp;</P></LI><LI><SPAN>Round the scaled message throughput number to the nearest whole number.<BR /><BR /></SPAN></LI><LI><SPAN>Use the CPU, memory, and instance recommendations&nbsp;from the table for the reference payload size and this scaled message rate to size your system.</SPAN></LI></OL><P><SPAN>&nbsp;</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" 
image-alt="Muhammet_Tenbih_10-1765807832513.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352155i3A054E063858E366/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_10-1765807832513.png" alt="Muhammet_Tenbih_10-1765807832513.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId--2009493523"><SPAN><U>Determined results</U> </SPAN></H4><P><SPAN>12,5KB/100KB * 0,0077 = 0,00096 scaled messages per second</SPAN></P><P><SPAN>&nbsp;</SPAN></P><P><SPAN>This corresponds to the table below:</SPAN></P><TABLE><TBODY><TR><TD width="349"><P><SPAN>CPU</SPAN></P></TD><TD width="349"><P><SPAN>2800 SAPS = 2 CPU &nbsp;</SPAN></P><P><SPAN>// (reminder: 1400 SAPS = 1 CPU)</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Memory</SPAN></P></TD><TD width="349"><P><SPAN>8GB</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Number of instances </SPAN></P></TD><TD width="349"><P><SPAN>1</SPAN></P></TD></TR></TBODY></TABLE><P><SPAN>&nbsp;</SPAN></P><H3 id="toc-hId--1912604021"><U>4.2.&nbsp;&nbsp;Asynchronous – Complex Integration Flows (Variant 2)</U></H3><P><SPAN>&nbsp;</SPAN></P><H4 id="toc-hId-2060630454"><U>Input</U></H4><OL><LI>Average payload size for Asynchronous – Complex Integration Flow (Variant 2): <STRONG>668,67KB</STRONG></LI><LI>Throughput = 0,56 messages per second<BR /><BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_11-1765807907228.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352156i33090A4B8A4C357B/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_11-1765807907228.png" alt="Muhammet_Tenbih_11-1765807907228.png" /></span><P>&nbsp;</P></LI></OL><H4 id="toc-hId-1864116949"><SPAN><U>Determined results</U> </SPAN></H4><P><SPAN>668,67KB/1024KB * 0,56 = 0,36 scaled messages per second for 1MB payload size</SPAN></P><P><SPAN>This corresponds to the table above:</SPAN></P><TABLE><TBODY><TR><TD 
width="349"><P><SPAN>CPU</SPAN></P></TD><TD width="349"><P><SPAN>2800 SAPS = 2 CPU</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Memory</SPAN></P></TD><TD width="349"><P><SPAN>8 GB</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Number of instances </SPAN></P></TD><TD width="349"><P><SPAN>1</SPAN></P></TD></TR></TBODY></TABLE><P><SPAN>&nbsp;</SPAN></P><H2 id="toc-hId--2040557838"><U>5.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Message Service Tier for Asynchronous Messages</U></H2><P>For Message Service Tier selection, message throughput usually isn't the deciding factor; the number of <STRONG>required JMS queues and general storage are far more critical</STRONG>. Review your scenarios carefully to determine how many queues you need, taking into account the asynchronous interface communication and the pipeline concepts for Integration Suite.</P><P>&nbsp;</P><P>Once you've selected a tier, test and validate that your actual throughput matches the selected message service tier.</P><P>&nbsp;</P><H4 id="toc-hId-1471089939"><U>Input</U></H4><OL><LI>Average payload size for the asynchronous complex integration flow: <STRONG>668.67 KB</STRONG></LI><LI>Throughput = 0.56 messages per second</LI></OL><P>&nbsp;</P><P><SPAN>668.67 KB / 1024 KB * 0.56 = 0.36 scaled messages per second for the 1 MB payload size</SPAN></P><P><SPAN>&nbsp;</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_12-1765807962157.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352157i3748A8E8F6CE71DE/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_12-1765807962157.png" alt="Muhammet_Tenbih_12-1765807962157.png" /></span></P><P>&nbsp;</P><P><STRONG><EM>For production setups, it is recommended to consider&nbsp;Message Service Tier 250 or higher</EM></STRONG>.&nbsp;</P><H4 id="toc-hId-1274576434"><SPAN><U>Determined results</U> </SPAN></H4><P><SPAN>This corresponds to the table above:</SPAN></P><TABLE><TBODY><TR><TD 
width="349"><P><SPAN>CPU</SPAN></P></TD><TD width="349"><P><SPAN>2800 SAPS = 2 CPU</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Memory</SPAN></P></TD><TD width="349"><P><SPAN>6.5 GiB</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Persistent Volume</SPAN></P></TD><TD width="349"><P><SPAN>100 GiB</SPAN></P></TD></TR></TBODY></TABLE><P><SPAN>&nbsp;</SPAN></P><H2 id="toc-hId-1664868943"><U>6.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Policy Engine</U></H2><H4 id="toc-hId-881549424"><U>Input</U></H4><OL><LI><STRONG>0.56 messages per second </STRONG>for asynchronous messages</LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_13-1765808015719.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352158i4DC39273DDEE5FB2/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_13-1765808015719.png" alt="Muhammet_Tenbih_13-1765808015719.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId-685035919"><SPAN><U>Determined results</U> </SPAN></H4><P><SPAN>This corresponds to the table above:</SPAN></P><TABLE><TBODY><TR><TD width="349"><P><SPAN>CPU</SPAN></P></TD><TD width="349"><P><SPAN>1400 SAPS = 1 CPU</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Persistent Volume</SPAN></P></TD><TD width="349"><P><SPAN>6.5 GB</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Number of instances</SPAN></P></TD><TD width="349"><P><SPAN>1 </SPAN></P></TD></TR></TBODY></TABLE><H2 id="toc-hId-1075328428"><U>7.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Database Sizing (PostgreSQL)</U></H2><H3 id="toc-hId-585411916"><U>7.1. Storage</U></H3><H4 id="toc-hId-263679095"><U>Input</U></H4><UL><LI>Total messages for 1 week: 345,295 messages</LI><LI>Retention period: 30 days. 
Therefore, the number of messages per month is calculated as 345,295 * 4 = 1,381,180 messages.</LI></UL><UL><LI>Approximate disk space: 13.81 GB, rounded up to 15 GB.</LI></UL><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_14-1765808061522.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352159i257F20C5602BB4A0/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_14-1765808061522.png" alt="Muhammet_Tenbih_14-1765808061522.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId-67165590"><SPAN><U>Determined results</U> </SPAN></H4><P><SPAN>&nbsp;</SPAN></P><P><SPAN>This calculation assumes message logging at the INFO level. TRACE-level logging or additional datastore usage could increase storage needs by ~50%.</SPAN></P><P><SPAN>Additional factors like MPL attachments, trace data, general overhead, datastore entries, and message store entries also consume storage. </SPAN></P><P><SPAN>&nbsp;</SPAN></P><P><SPAN>For reliability, size for at least 32 GB, ideally 64 GB; the extra headroom is a low-cost safeguard.</SPAN></P><P>&nbsp;</P><TABLE><TBODY><TR><TD width="349"><P><SPAN>Disk Space (Volume)</SPAN></P></TD><TD width="349"><P><SPAN>64 GB</SPAN></P></TD></TR></TBODY></TABLE><P><SPAN>&nbsp;</SPAN></P><H3 id="toc-hId-164055092"><U>7.2. 
Compute Unit</U></H3><P>&nbsp;</P><H4 id="toc-hId--325861420"><U>Input</U></H4><OL><LI><STRONG>0.57 messages per second </STRONG>for all messages</LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_15-1765808120060.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352160i1D5C9B92F3CAFBD3/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_15-1765808120060.png" alt="Muhammet_Tenbih_15-1765808120060.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId--522374925"><SPAN><U>Determined results</U> </SPAN></H4><P><SPAN>This corresponds to the table above:</SPAN></P><TABLE><TBODY><TR><TD width="349"><P><SPAN>CPU</SPAN></P></TD><TD width="349"><P><SPAN>1400 SAPS = 1 CPU</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Memory</SPAN></P></TD><TD width="349"><P><SPAN>2 GB</SPAN></P></TD></TR></TBODY></TABLE><P>&nbsp;</P><H2 id="toc-hId--132082416"><U>8.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Sizing for Redis</U></H2><P>&nbsp;</P><H4 id="toc-hId--915401935"><U>Input</U></H4><UL><LI>Number of API artifacts</LI><LI>Hence, the minimum setup will be chosen</LI></UL><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_0-1765808524926.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352164iCB1C74CBF810D541/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_0-1765808524926.png" alt="Muhammet_Tenbih_0-1765808524926.png" /></span></P><P>&nbsp;</P><H4 id="toc-hId--1111915440"><SPAN><U>Determined results</U> </SPAN></H4><P>Minimum CPU / Memory requirements:&nbsp;1 CPU / 1 GiB</P><TABLE><TBODY><TR><TD width="349"><P><SPAN>CPU</SPAN></P></TD><TD width="349"><P><SPAN>350 SAPS, rounded up = 1 CPU</SPAN></P></TD></TR><TR><TD width="349"><P><SPAN>Memory</SPAN></P></TD><TD width="349"><P><SPAN>0.75 GB, rounded up = 1 GiB</SPAN></P></TD></TR></TBODY></TABLE><H2 
id="toc-hId--721622931"><U>9.&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Sizing Output for EIC</U></H2><P><STRONG>Why do we start with the Non-HA Baseline?</STRONG> We start our calculation with the smallest standard configuration: the <STRONG>Non-High Availability (Non-HA)</STRONG> setup.</P><P>In our specific example, the existing PI/PO system has very low message traffic. Due to this low workload, we anticipate that the SAP Edge Integration Cell will not require extensive hardware resources.</P><P>At this step, we have not yet decided whether the final SAP EIC will be an HA or Non-HA setup. Therefore, we use this minimum baseline as a starting point. Then, we add our specific calculated workload (the "Delta") to it.</P><P>Finally, after seeing the total requirements, we compare the results against the HA and Non-HA setups to reach a final conclusion.</P><P>&nbsp;</P><H3 id="toc-hId--1211539443"><U>9.1 Minimum Sizing Requirements for non-HA Setup</U></H3><UL><LI>CPU/Memory: total 10 CPU / 32 GiB</LI><LI>Persistent volumes: 101 GiB</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_0-1765549774831.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/351541iB8A03355B5CEA9AE/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_0-1765549774831.png" alt="Muhammet_Tenbih_0-1765549774831.png" /></span></P><H3 id="toc-hId--1239869257"><U>9.2 Intermediate Calculation of Sizing Requirements from PI/PO System</U></H3><TABLE><TBODY><TR><TD width="141"><P><U><STRONG><FONT face="arial black,avant garde" color="#000000">Component</FONT></STRONG></U></P></TD><TD width="164"><P><U><STRONG><FONT face="arial black,avant garde" color="#000000">CPU</FONT></STRONG></U></P></TD><TD width="150"><P><U><STRONG><FONT face="arial black,avant garde" color="#000000">Memory</FONT></STRONG></U></P></TD><TD width="128"><P><U><STRONG><FONT face="arial black,avant garde" color="#000000">Persist 
Volume</FONT></STRONG></U></P></TD><TD width="114"><P><U><STRONG><FONT face="arial black,avant garde" color="#000000">Instance</FONT></STRONG></U></P></TD></TR><TR><TD width="141"><P>Worker</P></TD><TD width="164"><P>1.Synchron: 2 CPU</P><P>2. Asynchron:2CPU</P><P><STRONG>&nbsp;</STRONG></P><P><STRONG>Total:4 CPU</STRONG></P></TD><TD width="150"><P>1.Synchron: 8 GiB</P><P>2. Asynchron: 8 GiB</P><P>&nbsp;</P><P><STRONG>Total= 16 GiB</STRONG></P><P><STRONG>&nbsp;</STRONG></P></TD><TD width="128"><P>&nbsp;</P></TD><TD width="114"><P>2-&gt; Worker Instance/pods</P></TD></TR><TR><TD width="141"><P>Message Service</P></TD><TD width="164"><P>2 CPU</P></TD><TD width="150"><P>6.5 GB</P></TD><TD width="128"><P><STRONG>100 GB</STRONG></P></TD><TD width="114"><P>&nbsp;</P></TD></TR><TR><TD width="141"><P>Image Replication Service</P></TD><TD width="164"><P>&nbsp;</P></TD><TD width="150"><P>&nbsp;</P></TD><TD width="128"><P><STRONG>1GB</STRONG></P></TD><TD width="114"><P>&nbsp;</P></TD></TR><TR><TD width="141"><P>Policy Engine</P></TD><TD width="164"><P>1 CPU</P></TD><TD width="150"><P><SPAN>1GB</SPAN></P></TD><TD width="128"><P>&nbsp;</P></TD><TD width="114"><P>1</P></TD></TR><TR><TD width="141"><P>Monitoring (Optional)</P></TD><TD width="164"><P>&nbsp;</P></TD><TD width="150"><P><SPAN>&nbsp;</SPAN></P></TD><TD width="128"><P><SPAN>20GB</SPAN></P></TD><TD width="114"><P>&nbsp;</P></TD></TR><TR><TD width="141"><P><U>Shared Storage:</U> Java Heap Dumps + SNC library (SAPCRYPTOLIB) for RFC Adapter: (Optional)</P></TD><TD width="164"><P>&nbsp;</P></TD><TD width="150"><P><SPAN>&nbsp;</SPAN></P></TD><TD width="128"><P><SPAN>&nbsp;</SPAN></P><P><SPAN>50GB </SPAN></P><P><SPAN>+ 1GB =</SPAN></P><P><SPAN>&nbsp;51 GB</SPAN></P></TD><TD width="114"><P>&nbsp;</P></TD></TR><TR><TD width="141"><P>External Database/-store</P></TD><TD width="164"><P>&nbsp;</P></TD><TD width="150"><P><SPAN>&nbsp;</SPAN></P></TD><TD width="128"><P><SPAN>&nbsp;</SPAN></P></TD><TD 
width="114"><P>&nbsp;</P></TD></TR><TR><TD width="141"><P>PostgreSQL</P></TD><TD width="164"><P>1CPU</P></TD><TD width="150"><P><SPAN>2 GB</SPAN></P></TD><TD width="128"><P><SPAN>64 GB</SPAN></P></TD><TD width="114"><P>&nbsp;</P></TD></TR><TR><TD width="141"><P>Redis</P></TD><TD width="164"><P>1 CPU</P></TD><TD width="150"><P><SPAN>1 GB</SPAN></P></TD><TD width="128"><P><SPAN>&nbsp;</SPAN></P></TD><TD width="114"><P>&nbsp;</P></TD></TR></TBODY></TABLE><H3 id="toc-hId--1436382762"><U>9.3 Calculate the Delta and the Final Sizing Requirement&nbsp;</U></H3><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_1-1765879545612.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352608iE9C280B36F33A313/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_1-1765879545612.png" alt="Muhammet_Tenbih_1-1765879545612.png" /></span></P><P>&nbsp;</P><TABLE><TBODY><TR><TD width="141px" height="79px"><P><U><FONT face="arial black,avant garde" color="#000000"><STRONG>Component</STRONG></FONT></U></P></TD><TD width="164px" height="79px"><P><U><FONT face="arial black,avant garde" color="#000000"><STRONG>CPU</STRONG></FONT></U></P></TD><TD width="150px" height="79px"><P><U><FONT face="arial black,avant garde" color="#000000"><STRONG>Memory</STRONG></FONT></U></P></TD><TD width="128px" height="79px"><P><U><FONT face="arial black,avant garde" color="#000000"><STRONG>Persist Volume</STRONG></FONT></U></P></TD><TD width="114px" height="79px"><P><U><FONT face="arial black,avant garde" color="#000000"><STRONG>Instance</STRONG></FONT></U></P></TD></TR><TR><TD width="141px" height="77px"><P>Rest of EIC Components</P></TD><TD width="164px" height="77px"><P>6CPU</P></TD><TD width="150px" height="77px"><P>27 GiB</P></TD><TD width="128px" height="77px"><P>&nbsp;</P></TD><TD width="114px" height="77px"><P>&nbsp;</P></TD></TR><TR><TD width="141px" height="192px"><P>Worker</P></TD><TD width="164px" 
height="192px"><P>1. Synchronous: 2 CPU</P><P>2. Asynchronous: 2 CPU</P><P><STRONG>Total: 4 CPU</STRONG></P></TD><TD width="150px" height="192px"><P>1. Synchronous: 8 GiB</P><P>2. Asynchronous: 8 GiB</P><P><STRONG>Total: 16 GiB</STRONG></P><P>&nbsp;</P></TD><TD width="128px" height="192px"><P>&nbsp;</P></TD><TD width="114px" height="192px"><P>2</P></TD></TR><TR><TD width="141px" height="50px"><P>Message Service</P></TD><TD width="164px" height="50px"><P>2 CPU</P></TD><TD width="150px" height="50px"><P>6.5 GiB</P></TD><TD width="128px" height="50px"><P><STRONG>100 GB</STRONG></P></TD><TD width="114px" height="50px"><P>&nbsp;</P></TD></TR><TR><TD width="141px" height="77px"><P>Image Replication Service</P></TD><TD width="164px" height="77px"><P>&nbsp;</P></TD><TD width="150px" height="77px"><P>&nbsp;</P></TD><TD width="128px" height="77px"><P><STRONG>1 GB</STRONG></P></TD><TD width="114px" height="77px"><P>&nbsp;</P></TD></TR><TR><TD width="141px" height="50px"><P>Policy Engine</P></TD><TD width="164px" height="50px"><P>1 CPU</P></TD><TD width="150px" height="50px"><P><SPAN>1 GiB</SPAN></P></TD><TD width="128px" height="50px"><P>&nbsp;</P></TD><TD width="114px" height="50px"><P>1</P></TD></TR><TR><TD width="141px" height="77px"><P>Monitoring (Optional)</P></TD><TD width="164px" height="77px"><P>&nbsp;</P></TD><TD width="150px" height="77px"><P><SPAN>&nbsp;</SPAN></P></TD><TD width="128px" height="77px"><P><SPAN>20 GB</SPAN></P></TD><TD width="114px" height="77px"><P>&nbsp;</P></TD></TR><TR><TD width="141px" height="192px"><P><U>Shared Storage:</U> Java Heap Dumps + SNC library (SAPCRYPTOLIB) for RFC Adapter: (Optional)</P></TD><TD width="164px" height="192px"><P>&nbsp;</P></TD><TD width="150px" height="192px"><P><SPAN>&nbsp;</SPAN></P></TD><TD width="128px" height="192px"><P><SPAN>&nbsp;</SPAN></P><P><SPAN>50 GB </SPAN></P><P><SPAN>+ 1 GB =</SPAN></P><P><SPAN>&nbsp;51 GB</SPAN></P></TD><TD width="114px" height="192px"><P>&nbsp;</P></TD></TR><TR><TD width="141px" 
height="163px"><P><EM><FONT face="arial black,avant garde"><U>Total Cluster Sizing for EIC Components </U></FONT></EM></P></TD><TD width="164px" height="163px"><P><EM><FONT face="arial black,avant garde"><STRONG>13 CPU</STRONG></FONT></EM></P></TD><TD width="150px" height="163px"><P><EM><FONT face="arial black,avant garde"><STRONG>50.5 GiB</STRONG></FONT></EM></P></TD><TD width="128px" height="163px"><P><EM><FONT face="arial black,avant garde"><STRONG>Mandatory = 101 GB</STRONG></FONT></EM></P><P><EM><FONT face="arial black,avant garde"><STRONG>Optional = 71 GB</STRONG></FONT></EM></P></TD><TD width="114px" height="163px"><P><EM><FONT face="arial black,avant garde"><STRONG>See instance setting for each component</STRONG></FONT></EM></P></TD></TR><TR><TD width="141px" height="50px"><P>&nbsp;</P></TD><TD width="164px" height="50px"><P>&nbsp;</P></TD><TD width="150px" height="50px"><P><SPAN>&nbsp;</SPAN></P></TD><TD width="128px" height="50px"><P><SPAN>&nbsp;</SPAN></P></TD><TD width="114px" height="50px"><P>&nbsp;</P></TD></TR><TR><TD width="141px" height="135px"><P><U><FONT face="arial black,avant garde" color="#000000"><STRONG><EM>External Database/-store Requirement</EM></STRONG></FONT></U></P></TD><TD width="164px" height="135px"><P>&nbsp;</P></TD><TD width="150px" height="135px"><P><FONT face="arial black,avant garde" color="#000000"><STRONG><EM>&nbsp;</EM></STRONG></FONT></P></TD><TD width="128px" height="135px"><P><FONT face="arial black,avant garde" color="#000000"><STRONG><EM>&nbsp;</EM></STRONG></FONT></P></TD><TD width="114px" height="135px"><P>&nbsp;</P></TD></TR><TR><TD width="141px" height="50px"><P>PostgreSQL</P></TD><TD width="164px" height="50px"><P>1 CPU</P></TD><TD width="150px" height="50px"><P><SPAN>2 GB</SPAN></P></TD><TD width="128px" height="50px"><P><SPAN>64 GB</SPAN></P></TD><TD width="114px" height="50px"><P>&nbsp;</P></TD></TR><TR><TD width="141px" height="50px"><P>Redis</P></TD><TD width="164px" height="50px"><P>1 CPU</P></TD><TD 
width="150px" height="50px"><P><SPAN>1 GB</SPAN></P></TD><TD width="128px" height="50px"><P><SPAN>&nbsp;</SPAN></P></TD><TD width="114px" height="50px"><P>&nbsp;</P></TD></TR></TBODY></TABLE><H3 id="toc-hId--1632896267"><U>9.4.&nbsp; &nbsp; &nbsp; Summary and Conclusion&nbsp;</U></H3><P>The summary of the sizing results for SAP Edge Integration Cell is based on the final table of the document. The recommended sizing for the non-HA setup amounts to 13 CPU and 50.5 GiB of memory for all EIC components, with a mandatory storage requirement of at least 101 GB and optionally up to 71 GB more. For the external database/store components, PostgreSQL is specified with 1 CPU, 2 GB RAM, and 64 GB storage, and Redis with 1 CPU and 1 GB RAM. These total values are derived from summing the individual components and form the basis for technical infrastructure planning.</P><P>The calculation approach was based on real peak load data from one observed week, separating synchronous and asynchronous message types to assess their maximum resource requirements independently. By using these peak values, the sizing ensures the system capacity is designed to handle actual business throughput during the busiest periods, following SAP best practices. The result shows that the calculated values are clearly above the minimum requirement for a non-HA system but do not reach the requirements for an HA setup (which would require at least 20 CPU, 64 GiB RAM, and 204 GiB storage).<BR /><BR /><STRONG><U>Difference between HA and non-HA Setup&nbsp;</U></STRONG></P><P><SPAN>The non-HA sizing results for SAP Edge Integration Cell show that with 13 CPUs, 50.5 GiB RAM, and at least 101 GB of storage, the setup exceeds the minimum required for non-HA, but it is still below the requirements for a high-availability (HA) setup, which needs 20 CPUs, 64 GiB RAM, and 204 GB of storage. 
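The arithmetic behind these figures can be condensed into a short, self-contained sketch. All values are taken from the worked examples above; the only assumption is a ~10 KB average storage footprint per message, inferred from the stated 1,381,180 messages ≈ 13.81 GB.

```python
# Illustrative sketch of the sizing arithmetic used in this post.
# Assumption (not stated explicitly in the post): ~10 KB stored per
# message, inferred from 1,381,180 messages ~= 13.81 GB in section 7.1.

def scaled_throughput(payload_kb, reference_payload_kb, msgs_per_sec):
    """Linear payload scaling: map actual traffic onto a reference table row."""
    return payload_kb / reference_payload_kb * msgs_per_sec

# Section 4.1: synchronous complex flows against the 100 KB reference row
sync_rate = scaled_throughput(12.5, 100, 0.0077)    # ~0.00096 msg/s

# Sections 4.2 and 5: asynchronous complex flows against the 1 MB (1024 KB) row
async_rate = scaled_throughput(668.67, 1024, 0.56)  # ~0.36 msg/s

# Section 7.1: PostgreSQL storage over the 30-day retention period
monthly_messages = 345_295 * 4                      # 1,381,180 messages
approx_disk_gb = monthly_messages * 10 / 1e6        # ~13.81 GB before safety margin

# Section 9.3: summing the per-component requirements for the non-HA cluster
cpu = {"rest_of_eic": 6, "worker": 4, "message_service": 2, "policy_engine": 1}
mem_gib = {"rest_of_eic": 27, "worker": 16, "message_service": 6.5, "policy_engine": 1}
total_cpu = sum(cpu.values())                       # 13 CPU
total_mem = sum(mem_gib.values())                   # 50.5 GiB

# The totals exceed the non-HA minimum (10 CPU / 32 GiB) but stay
# below the HA entry threshold (20 CPU / 64 GiB) quoted in the text.
fits_non_ha = 10 <= total_cpu < 20 and 32 <= total_mem < 64
```

This is only a transcription of the manual calculation, not an official sizing tool; for a real project the reference tables and tier thresholds must still come from the SAP sizing documentation.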
<STRONG><EM>External datastores and optional components are not included in these figures.</EM></STRONG></SPAN></P><P><SPAN>A non-HA setup typically operates around 40 pods on one or two worker nodes. An HA setup runs approximately 80 to 90 pods and is designed to provide higher availability and failover capabilities, especially for the Message Service, which is primarily used for asynchronous scenarios and JMS queues. <STRONG>In non-HA, the Message Service runs as a single</STRONG> <STRONG>instance;</STRONG>&nbsp;<STRONG>in HA, it operates with two replicas and an additional monitoring pod</STRONG> so that if one instance fails, the other can take over, significantly reducing interruptions in JMS-based processing.</SPAN></P><P><SPAN>The Message Service is a critical component, as it manages asynchronous messages and internal system events. In a non-HA environment, this service exists as a single instance; if it fails, all integration flows using JMS queues fail during the downtime, while other integration flows without JMS are not affected. During this period, deployments and undeployments of artifacts on the EIC also fail, and new Message Processing Logs (MPLs) are no longer written to the database.</SPAN></P><P><SPAN>Once the Message Service in a non-HA environment is available again, all previously persisted messages remain and will be reprocessed, as long as the underlying persistence volume has not been deleted or replaced. However, if the EIC – including the Message Service – is uninstalled and reinstalled, all queues and messages within them will be lost, since the components are rebuilt from scratch.</SPAN></P><P><SPAN>An HA deployment runs at least two replicas of the Message Service along with a monitoring pod. This setup enables immediate failover: if one replica fails, the second takes over with minimal interruption, allowing JMS-based integration flows to continue with little or no downtime. 
Upscale activities on the Message Service will always result in a brief downtime, but in an HA setup this disruption is much shorter than when operating with a single instance in non-HA mode.</SPAN></P><P><U><STRONG>Key Aspects of the Message Service:</STRONG></U></P><UL><LI><EM>What happens to messages if the Message Service fails in a non-HA setup?</EM></LI></UL><P>For the duration of the downtime, the following happens: all integration flows that use JMS queues fail, while other iFlows are not affected; all (un)deployments of artifacts onto the EIC fail; and no MPLs are written to the database.</P><P>&nbsp;</P><UL><LI><EM>Are messages in the queue lost, and can they be processed again when the Message Service is restarted?</EM></LI></UL><P>Once the Message Service is up again, all previously stored messages are retained and will be processed again, as long as the underlying persistent volume has not been wiped. If the EIC has been uninstalled and reinstalled, the Message Service is also deleted completely, so all queues and messages are lost.</P><P>&nbsp;</P><UL><LI><EM>How much downtime do we risk if we need to scale up the Message Service with only one instance in non-HA?</EM></LI></UL><P>The upscale activity always causes a downtime (usually very short, less than a minute or a few minutes), even in HA mode.</P><UL><LI>Additional Consideration:<UL><LI>The storage class must be configured as Block Storage or iSCSI to ensure stable processing of the critical Message Service.</LI></UL></LI></UL><P><STRONG><U>HA setup for production&nbsp;</U></STRONG></P><P>Based on the sizing analysis conducted, I recommend deploying a High Availability (HA) setup with 3 worker nodes&nbsp;for both DEV/TEST and PROD.</P><P>Although the HA setup requires higher resources (20 CPUs and 64 GiB RAM) compared to the calculated non-HA requirements (13 CPUs and 50.5 GiB RAM), the delta of 7 CPUs is negligible compared to the outage risks of a non-redundant 
infrastructure.&nbsp;Crucially, it is not possible to switch from Non-HA to HA without completely redeploying the Edge Integration Cell.</P><P>Furthermore, a consistent HA setup across DEV/TEST and PROD eliminates transport errors and guarantees system consistency across all landscapes.</P><P>Recommended HA Configuration:</P><UL><LI>3 Worker Nodes&nbsp;with a total of 20 CPUs and 64 GiB RAM</LI><LI>Message Service with 2 Replicas&nbsp;+ Monitoring Pod for JMS High Availability</LI><LI>Block Storage or iSCSI&nbsp;for Persistent Volumes (204 GB)</LI><LI>PostgreSQL or HANA&nbsp;as an external database with HA replication</LI><LI>Starting directly with HA&nbsp;prevents future migrations and data loss</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Muhammet_Tenbih_0-1765878895642.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352605i02749397ADFC6970/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_0-1765878895642.png" alt="Muhammet_Tenbih_0-1765878895642.png" /></span></P><P>&nbsp;</P> 2025-12-16T13:34:19.679000+01:00 https://community.sap.com/t5/integration-blog-posts/master-blog-sap-edge-integration-cell-the-hybrid-integration-journey/ba-p/14291633 Master Blog: SAP Edge Integration Cell – The Hybrid Integration Journey 2025-12-16T14:09:27.490000+01:00 Muhammet_Tenbih https://community.sap.com/t5/user/viewprofilepage/user-id/1402365 <H1 id="toc-hId-1638249072">Overview of SAP Edge Integration Cell Master Blog – The Hybrid Integration Journey</H1><P>Migrating to the SAP Integration Suite doesn't mean moving all your data to the public cloud. For sensitive "Ground-to-Ground" scenarios, you need a solution that keeps traffic strictly within your private landscape. 
This is where the <STRONG>SAP Edge Integration Cell (EIC)</STRONG> comes into play.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Muhammet_Tenbih_0-1765893705855.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352755i857CED69465D4941/image-size/large?v=v2&amp;px=999" role="button" title="Muhammet_Tenbih_0-1765893705855.png" alt="Muhammet_Tenbih_0-1765893705855.png" /></span></P><P>&nbsp;</P><P>&nbsp;</P><P>This <STRONG>Master Blog</STRONG> serves as your central guide for the SAP Edge Integration Cell journey—covering everything from initial planning and architecture to ongoing operations. Whether you are transitioning from SAP PI/PO or starting fresh, this series will guide you through every step.</P><P>Below you will find the collection of deep-dive articles, which will be constantly updated as we progress through the journey.</P><P>&nbsp;</P><TABLE border="1" width="100%"><TBODY><TR><TD width="12.450663796196629%"><U><STRONG>Series</STRONG></U></TD><TD width="28.489415141729463%"><U><STRONG>Topic</STRONG></U></TD><TD width="42.39325439540725%"><U><STRONG>Description</STRONG></U></TD><TD width="16.666666666666668%"><U><STRONG>Link</STRONG></U></TD></TR><TR><TD width="12.450663796196629%">01</TD><TD width="28.489415141729463%">Initial Sizing &amp; HA/non-HA Planning</TD><TD width="42.39325439540725%">We start by laying the foundation. This post uses real-world PI/PO performance data to calculate the necessary infrastructure for the EIC. 
It also clarifies the critical differences and requirements between <STRONG>High Availability (HA)</STRONG> and <STRONG>Non-HA</STRONG> setups.</TD><TD width="16.666666666666668%"><A href="https://community.sap.com/t5/integration-blog-posts/from-pi-po-to-sap-edge-integration-cell-initial-sizing-approach-and/ba-p/14290592" target="_self">SAP Edge Integration Cell- Initial Sizing Approach and Planning for HA/Non-HA Setup</A>&nbsp;</TD></TR><TR><TD>02</TD><TD>Coming soon...</TD><TD>&nbsp;</TD><TD>&nbsp;</TD></TR></TBODY></TABLE><H3 id="toc-hId-1699901005">Stay Tuned</H3><P>This series is designed to be a "living document." Make sure to <STRONG>bookmark this page</STRONG> or subscribe to the blog to get notified when the next parts of the series are published.</P><P>Do you have specific questions about the Edge Integration Cell that you would like to see covered in upcoming posts? Let me know in the comments!</P> 2025-12-16T14:09:27.490000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/the-curious-case-of-sap-community-moderation-a-study-in-arbitrary/ba-p/14303785 The Curious Case of SAP Community Moderation: A Study in Arbitrary Excellence 2026-01-08T14:39:45.568000+01:00 vinaymittal https://community.sap.com/t5/user/viewprofilepage/user-id/187725 <P class=""><STRONG>The Curious Case of SAP Community Moderation: A Study in Arbitrary Excellence</STRONG></P><P class="">There's something almost admirable about a system so perfectly calibrated to frustrate its most dedicated contributors. SAP Community's moderation team has achieved what few organisations dare to attempt: the complete democratisation of confusion.</P><P class=""><STRONG>The Algorithm Knows Best</STRONG></P><P class="">Picture this: you've spent hours crafting a technical blog post. You hit publish. 
Within moments, the machine learning system — trained, presumably, by a committee of caffeinated squirrels — determines your carefully researched content is indistinguishable from Nigerian prince correspondence. Welcome to spam jail.</P><P class="">But fear not! The moderators check the filter "regularly." Your content will resurface within 24 business hours. Or minutes. Or perhaps never. The timeline, much like the moderation logic itself, remains charmingly mysterious.</P><P class=""><STRONG>A Tale of Two Blogs</STRONG></P><P class="">Here's where it gets genuinely fascinating.</P><P class="" data-unlink="true">A blog post about shell command execution in SAP CPI&nbsp; —</P><P class="" data-unlink="true">https://community.sap.com/t5/integration-blog-posts/getting-unconventional-with-groovy-part-1-executing-shell-commands-reading/ba-p/14294407#M2049</P><P class="" data-unlink="true">&nbsp;</P><P class="" data-unlink="true">one that, by the author's own assessment, "doesn't reveal 10% of what already existing articles reveal" — gets nuked from orbit.</P><P class="" data-unlink="true">Meanwhile, this gem about terminal access to CPI runtime&nbsp;</P><P class="" data-unlink="true"><A href="https://community.sap.com/t5/technology-blog-posts-by-members/terminal-access-to-cpi-runtime-execution-of-shell-commands-on-cpi-runtime/ba-p/13460198" target="_blank">https://community.sap.com/t5/technology-blog-posts-by-members/terminal-access-to-cpi-runtime-execution-of-shell-commands-on-cpi-runtime/ba-p/13460198 </A><BR /><BR />has been sunbathing peacefully on the platform for five years. Five. 
Years.</P><P class="" data-unlink="true">And what about this post detailing PGP secret keyring handling&nbsp;<BR /><BR /><A href="https://community.sap.com/t5/technology-blog-posts-by-members/pgp-secret-keyring-in-cpi-the-lost-passphrase-recovery/ba-p/13491284" target="_blank">https://community.sap.com/t5/technology-blog-posts-by-members/pgp-secret-keyring-in-cpi-the-lost-passphrase-recovery/ba-p/13491284</A><BR /><BR />? If we're genuinely concerned about security implications, one might think exposing cryptographic key management deserves more scrutiny than reading a file via shell script. But no. That one's fine. Perfectly fine.</P><P class="">The logic? There isn't any. Or if there is, it's locked in a vault somewhere, guarded by the same people who decided "Moderator 972001" was an appropriately human identifier.</P><P class=""><STRONG>The Silence Says Everything</STRONG></P><P class="">When these inconsistencies are raised — politely at first, then with understandable frustration — the response is instructive in its absence.</P><P class="">No explanation. No acknowledgment. No "we'll look into it."</P><P class="">Just the digital equivalent of being shown the door while someone pretends to be very busy with paperwork.</P><P class="">As one contributor put it: <EM>"Is there no accountability left?"</EM></P><P class="">The answer, delivered through elegant silence, appears to be: correct.</P><P class=""><STRONG>Transparency? Never Heard of Her</STRONG></P><P class="">The moderation team operates with the transparency of a brick wall painted black and placed in a basement with no stairs. Decisions materialise from the void. Appeals dissolve into it.</P><P class="">"Moderator 972001" — a designation that radiates all the warmth and humanity of a tax audit — can apparently remove two blogs in three days from the same contributor without so much as a form letter explaining why.</P><P class="">One might ask: is this moderator a real person? A bot? 
A random number generator with delete privileges?</P><P class="">The community may never know. And that, apparently, is by design.</P><P class=""><STRONG>The God Complex, Quantified</STRONG></P><P class="">There's a specific institutional pathology at play here. It's the quiet arrogance of systems that punish engagement, reward silence, and treat questions about their own behaviour as an inconvenience.</P><P class="">When contributors ask <EM>"Are the moderators operating in good faith?"</EM> — a reasonable question given the evidence — the absence of any response tells you everything.</P><P class=""><STRONG>In Conclusion</STRONG></P><P class="">The SAP Community moderation team has built something remarkable: a platform where the most knowledgeable contributors are rewarded with suspicion, inconsistency, and bureaucratic indifference.</P><P class="">One needn't resort to profanity to describe them. Their work speaks eloquently enough.</P> 2026-01-08T14:39:45.568000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/a-z-performance-testing-for-integrations-and-why/ba-p/14304822 A-Z Performance Testing for integrations... And why? 
2026-01-09T22:39:41.453000+01:00 stevang https://community.sap.com/t5/user/viewprofilepage/user-id/7643 <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="adi-goldstein-EUsVwEOsblE-unsplash.jpg" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359979i953C648BBC59761E/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="adi-goldstein-EUsVwEOsblE-unsplash.jpg" alt="adi-goldstein-EUsVwEOsblE-unsplash.jpg" /></span></P><P><EM>A system is as strong as its weakest link</EM> – this is a well-known fact.</P><P>In the integration domain we connect applications and data into an integrated system capable of seamlessly performing business processes across the connected applications, without interruption and without human-in-the-loop action (popular wording in the age of AI). Of course, this is not an official definition – and there are many of those, more or less formal.</P><P>The important thing is to understand that an integrated system will connect many applications and various data flows, usually through various middleware IT components.</P><H2 id="toc-hId-1787740516">Why Performance Testing for integrations?</H2><P>We do run <STRONG>Unit Testing</STRONG>, <STRONG>Functional Testing</STRONG> or <STRONG>System Integration Testing</STRONG> on our applications or our connected applications – but is this enough?</P><P>For a new application, it is not enough to test <EM>only</EM> whether it <EM>fulfils functional requirements</EM> – in the same way, for an integrated system with several applications and one or more middleware IT components, it is not enough to test <EM>only</EM> whether the <EM>integration works</EM>. We need to understand whether our application or integrated system can meet specific non-functional requirements. Can it perform? 
And what are the limits it can sustain?</P><P>Yes, I am talking about <STRONG>Performance Testing</STRONG>, <STRONG>Load Testing</STRONG>, <STRONG>Stress Testing</STRONG> and more…</P><P>We do these things with applications, but are we following the same route with integrations and integrated systems?</P><P>We should…</P><P>But let me first go through a general intro – what are the different types of <STRONG>Performance Testing</STRONG>, and what is the appropriate <STRONG>Testing Methodology</STRONG> to apply – regardless of whether we are testing individual applications or integrated systems with middleware flows.</P><H3 id="toc-hId-1720309730">Types of Performance Testing</H3><P>I am not going to go through definitions of <STRONG>Unit Testing</STRONG>, <STRONG>Functional Testing</STRONG> or <STRONG>System Integration Testing</STRONG>; let me focus only on the <EM>family</EM> of <STRONG>Performance Testing</STRONG>.</P><P>While there are many definitions of how to split <STRONG>Performance Testing</STRONG> into several distinct types[1][2][3][4], I will stick to my usual habits and stay with the traditional one from IBM[5].</P><UL><LI><STRONG>Load Testing</STRONG> indicates how the system performs when operating under normal and expected loads. We take into consideration the average number of concurrent users and the average system load (operations performed) – i.e. an average number of concurrent users placing an order of average size (number of items).</LI><LI><STRONG>Scalability Testing</STRONG> is <EM>more than</EM> <STRONG>Load Testing</STRONG> and <EM>less than</EM> <STRONG>Stress Testing</STRONG>, as we want to test how the system performs when scaling to the boundary conditions, either current or expected – i.e. the peak number of concurrent users (in the peak season, peak hours) placing large orders (number of items).</LI><LI><STRONG>Spike Testing</STRONG> creates a very sudden increase in user traffic, producing sharp spikes in system activity – i.e. 
there is a sudden burst of orders coming from synchronizing offline devices; or, more commonly, the sudden restoration of a stopped service now sending a huge number of orders at once.</LI><LI><STRONG>Volume Testing</STRONG> differs from <STRONG>Spike Testing</STRONG> in that its focus is primarily on increased data volumes, and how the system will manage the increased incoming data – i.e. loading large payloads (like orders with an extensive number of items) into database tables or queues etc.</LI><LI><STRONG>Endurance (or Soak) Testing</STRONG> differs from both <STRONG>Spike Testing</STRONG> and <STRONG>Volume Testing</STRONG> in that we are not <EM>only</EM> testing an increase in traffic or data volumes, but rather how the system manages load over a longer period, and whether there is any degradation in service – i.e. a continuous load of incoming payloads over several hours or more.</LI><LI><STRONG>Stress Testing</STRONG> pushes the system beyond its operational limits, finding its breaking point and identifying the weakest link (where it will break first) – i.e. for order taking, incrementally increase the load in number of concurrent users and order size until the system “breaks”.</LI></UL><P>On top of these testing types, it is worth mentioning that, as part of <STRONG>Stress Testing</STRONG>, we also perform <STRONG>Reliability Testing</STRONG>, verifying how the system recovers from the “break” situation – i.e. 
if a specific service goes down, we do not want to lose any messages in between.</P><H3 id="toc-hId-1523796225">Methodology</H3><P>Testing methodology depends on the overall development approach, but the most common approaches are:</P><UL><LI><STRONG>Waterfall</STRONG> follows sequential testing, which occurs after full development.</LI><LI><STRONG>Agile</STRONG> practices iterative testing of small parts within sprints.</LI><LI><STRONG>V-Model</STRONG> stands for Verification &amp; Validation, where testing is linked to development phases.</LI><LI><STRONG>Spiral</STRONG> combines the <EM>best of both</EM> Waterfall and Agile, making it ideal for large and complex projects; testing is risk-driven and integrated into each iterative cycle (spiral)[6].</LI></UL><P>However, no matter which development approach we practice, the key testing goals always stay the same:</P><OL><LI>Ensure the solution meets functional requirements.</LI><LI>Make sure the solution also meets non-functional requirements like overall quality, performance, and security.</LI></OL><P>Now, I have deliberately avoided saying that only functional requirements are business requirements. In fact, non-functional requirements for performance can very often be business relevant, as business can set clear, business-driven SLAs.</P><UL><LI><STRONG>SLA</STRONG> (Service Level Agreement) is a formal, often contractual promise of service quality, defining the guaranteed level of service.</LI><LI><STRONG>KPI</STRONG> (Key Performance Indicator) is an internal measurement of how well SLA goals are met. 
These measurements are usually very operational.</LI></UL><TABLE><TBODY><TR><TD width="327"><P><STRONG>SLA examples</STRONG></P></TD><TD width="296"><P><STRONG>KPI examples</STRONG></P></TD></TR><TR><TD width="327"><P>4s average response time</P></TD><TD width="296"><P>Last month we achieved a 4.87s average response time</P></TD></TR><TR><TD width="327"><P>99% of orders must be received and processed within 8s</P></TD><TD width="296"><P>Last quarter we had 97.4% of orders processed within 8s</P></TD></TR><TR><TD width="327"><P>1 000 000 orders per day without degradation of service</P></TD><TD width="296"><P>Yesterday we processed 1 002 158 orders with all SLAs kept</P></TD></TR><TR><TD width="327"><P>…</P></TD><TD width="296"><P>…</P></TD></TR></TBODY></TABLE><P>What does this tell us?</P><P>By clearly understanding SLAs, we can define appropriate <STRONG>Performance Testing</STRONG> measurements (and scripts) – what we want to test and what level of service we need to achieve with our new application or integrated system.</P><P>Please note, the focus here is on <STRONG>Performance Testing</STRONG>, so I am not addressing other non-functional requirements, although we may apply a similar approach to them as well (but the actual testing or verification may be significantly different).</P><H2 id="toc-hId-1198200001">Doing it right…</H2><H3 id="toc-hId-1130769215">Requirements Gathering &amp; Planning:</H3><P>For <STRONG>Performance Testing</STRONG> in integration (or in general), the first step is, of course, gathering all non-functional requirements, like SLAs indicating i.e. response time, error rate etc. This is the moment to collect all</P><UL><LI>defined SLAs</LI><LI>Volumes (i.e. hourly, daily etc.) including expected growth</LI><LI>Business patterns (i.e. patterns or spikes during business hours, or during the season)</LI><LI>Systems under test (i.e. 
what are the systems, integration flows or IT components which are under test?)</LI><LI>Users or user groups (i.e. integrations are usually <EM>built</EM> using technical users, but let this be re-confirmed)</LI></UL><H3 id="toc-hId-934255710">Test Design</H3><H4 id="toc-hId-866824924"><EM>What do we test?</EM></H4><P>Let’s have one thing clear – when testing integration, we are <STRONG>testing inbound and outbound, complex multi-system flows and the respective endpoints</STRONG> of the Provider and Consumer(s). But we are not testing the actual business process within the Provider and Consumer(s) – this should be covered by relevant application testing.</P><H4 id="toc-hId-670311419"><EM>What types of testing?</EM></H4><P>Let us plan what types of <STRONG>Performance Testing</STRONG> we need to execute (<STRONG>Load Testing</STRONG>, <STRONG>Scalability Testing</STRONG>, <STRONG>Spike Testing</STRONG>, <STRONG>Volume Testing</STRONG>, <STRONG>Endurance (or Soak) Testing</STRONG> and/or <STRONG>Stress Testing</STRONG>) and create realistic workload models.</P><P>In realistic terms, for integrations, we may stick with only a few types of testing, combining the necessary testing requirements:</P><UL><LI><STRONG>Load Testing</STRONG> based on specific SLAs.</LI><LI><STRONG>Scalability Testing</STRONG> but combined with <STRONG>Spike Testing</STRONG>, <STRONG>Volume Testing</STRONG> and <STRONG>Endurance (or Soak) Testing</STRONG> (with increased user loads and data volumes, running for a reasonably longer period – i.e. 
50% more load than the maximum expected).</LI><LI><STRONG>Stress Testing</STRONG> but also covering <STRONG>Reliability Testing</STRONG> (to understand the limits and the recovery process).</LI></UL><H4 id="toc-hId-473797914"><EM>What kind of integration are we testing?</EM></H4><P>There is a difference between testing a <STRONG>Sync API</STRONG> or flow and an <STRONG>Async API</STRONG> or flow[7][8].</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 1. Sync vs. Async" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359977iEA573BF69EB88287/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 1. Sync vs. Async.jpg" alt="Figure 1. Sync vs. Async" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 1. Sync vs. Async</span></span></P><P>All clear, but how does this impact our <STRONG>Test Design</STRONG> for the <STRONG>Performance Testing</STRONG>? Let's dig deeper...&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 2. Example of Sync flow with SAP Integration Suite (CPI and API-M)" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359973i07EC0A4C6690FCE9/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 2. Example of Sync flow with CPI and API-M.jpg" alt="Figure 2. Example of Sync flow with SAP Integration Suite (CPI and API-M)" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 2. Example of Sync flow with SAP Integration Suite (CPI and API-M)</span></span></P><P><STRONG>Sync</STRONG> <STRONG>Integration Execution</STRONG> is single-threaded – only one operation will run at a time.</P><P><STRONG>Sync</STRONG> means the <STRONG>Sync Request-Reply</STRONG> pattern. 
As long as the Sender is waiting for a response from the Receiver (either directly or indirectly) to finalize specific operations, this is considered <STRONG>Sync</STRONG> processing. We may even have some queueing with retry logic <EM>in between</EM> (i.e. within an SAP Integration Suite CPI flow), but if the Sender is waiting for the <EM>final</EM> response, this is still <STRONG>Sync</STRONG> processing.</P><UL><LI>When testing a <STRONG>Sync API</STRONG> or flow, i.e. a RESTful API or OData API, no matter if we have some CPI flow or API-M in between – the testing tool (i.e. JMeter) can immediately get the appropriate response, either success (i.e. http200) or some error (i.e. http4xx, http5xx).</LI></UL><P>While we may collect additional logging from the Receiver and the middleware IT component(s), this would be more relevant from the perspective of monitoring and observability, not <STRONG>Performance Testing</STRONG> itself.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 3. Example of Async flow with SAP Advanced Event Mesh" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359970iBD8AE9B7E84D1149/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 3. Example of Async flow with AEM.jpg" alt="Figure 3. Example of Async flow with SAP Advanced Event Mesh" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 3. Example of Async flow with SAP Advanced Event Mesh</span></span></P><P><STRONG>Async Integration Execution</STRONG> is multi-threaded – multiple operations can run in parallel.</P><P><STRONG>Async</STRONG> means decoupled, and it may be the <STRONG>PubSub</STRONG> pattern or the <STRONG>Async Request-Reply</STRONG> pattern. 
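Before going further into the Async patterns, the Sync case just described is worth a quick sketch: because the Sender blocks until the final response, a testing tool can capture status and latency directly. A minimal illustration of what a tool like JMeter records per request, using only the Python standard library (the endpoint URL is a placeholder, not a real API):

```python
# Sketch: timing one Sync request-reply call; the endpoint URL is a placeholder.
import time
import urllib.error
import urllib.request


def timed_call(url: str, payload: bytes, timeout: float = 10.0) -> tuple[int, float]:
    """Send one POST request and return (http_status, elapsed_seconds)."""
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            status = resp.status          # success, i.e. http200
    except urllib.error.HTTPError as err:
        status = err.code                 # error, i.e. http4xx / http5xx
    return status, time.perf_counter() - start
```

For an Async flow, as discussed next, the same call would only confirm hand-over to the next component (broker or middleware) – the real outcome would have to be read from the Receiver system log.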
Here the situation is a bit more complex, as we need to collect and compare relevant logs.</P><P>In the <STRONG>PubSub</STRONG> pattern, individual IT components may or may not be set to send appropriate responses or acknowledgements (http, ACK/NACK, QoS), but those responses or acknowledgements are not (by default) propagated from the Receiver(s) back to the Sender – if set, a response or acknowledgement is only an indication that the next component in the flow has received the messages.</P><P>With the <STRONG>Async Request-Reply</STRONG> pattern, the Receiver will send back a separate response message for the received messages. However, this message is also sent as an <STRONG>Async API</STRONG>, usually after some processing has been done in the Receiver system. Implementation of the <STRONG>Async Request-Reply</STRONG> pattern is a separate topic not covered in this article – but in general it can be a completely separate <STRONG>Async </STRONG>flow, or it could be built using correlation IDs (i.e. using SAP Advanced Event Mesh and CPI[9], or Solace PubSub+ SolClient Asynchronous Callbacks[10]).</P><P>In both <STRONG>Async</STRONG> patterns:</P><UL><LI>When testing an <STRONG>Async API</STRONG> or flow, i.e. an Event API – the actual success or error can be seen only in the Receiver system log, to which the testing tool (i.e. JMeter) may not have direct access.</LI><LI>If we also have an <STRONG>Async</STRONG> return flow, depending on the SLA set, we may have to measure success rate, response time etc. for the full round cycle.</LI><LI><STRONG>Async</STRONG> response messages will carry either a confirmation or an error message back to the Sender. 
While an error message is not an integration performance issue, the SLA may still require that we measure those errors as well.</LI></UL><P>Again, we may collect additional logging from the middleware IT component(s), but this would be more relevant from the perspective of monitoring and observability, not <STRONG>Performance Testing</STRONG> itself.</P><H4 id="toc-hId-277284409"><EM>What is the scope?</EM></H4><P>Do we test only the inbound flow and Receiver endpoint, or do we need to <EM>emulate </EM>specific processes and actions in the Sender application as well?</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 4. What is the scope of testing, what do we script?" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359986iDE0CADE3FC1820C2/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 4. What is the scope of testing.jpg" alt="Figure 4. What is the scope of testing, what do we script?" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 4. What is the scope of testing, what do we script?</span></span></P><P>Ordinarily we measure performance for the inbound flow and endpoint of the Receiver. If the Receiver application is also a Provider, this also gives us the <STRONG>Baseline Performance</STRONG> of the specific <STRONG>Integration Service</STRONG>.</P><UL><LI>Our primary concern is to understand how the Receiver (no matter if it is a Provider or Consumer) can perform – i.e. how many new/modified order requests SAP S/4HANA can handle; or how many replicated new/modified customers SAP Commerce Cloud can handle.</LI><LI>The secondary goal is to understand how the middleware IT components in front of the Receiver are performing – i.e. 
API-M including all policies, or a CPI flow including value mappings, or Advanced Event Mesh including microservices etc.</LI></UL><P>However, if we are introducing a new Sender application, SLAs may require that we perform <STRONG>Performance Testing</STRONG> on the full process, starting from the Sender application.</P><UL><LI>We need to script appropriate actions within the Sender application, invoking APIs, and measure the full response from the moment an action is invoked until the specific results of the operations are recorded – i.e. an order is placed in SAP Commerce Cloud, an API is invoked toward SAP S/4HANA, a response is received, and the status is saved and/or visualized (where, in this example, SAP Commerce Cloud would not normally save the order, but only visualize the status).</LI></UL><H4 id="toc-hId--416946191"><EM>Do we test all Consumers at once, or do we break the flows?</EM></H4><P>Let’s also understand that, depending on the specific integration flow:</P><UL><LI>The Provider can be either a Receiver (inbound endpoint receiving a message) or a Sender (outbound endpoint sending a message).</LI><LI>In the same way, the Consumer can be either a Receiver (inbound endpoint receiving a message) or a Sender (outbound endpoint sending a message).</LI><LI>Finally, in each integration flow there is only one Provider, and there can be one or more Consumer(s).</LI></UL><P>The question is, in the case of multiple Consumers as Receivers, do we test them all at once?</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 5. Test each Receiver separately" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359966iAB4015368B6B74EC/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 5. Test each Receiver separately.jpg" alt="Figure 5. Test each Receiver separately" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 5. 
Test each Receiver separately</span></span></P><P>The recommended approach is to test separately for each Receiver. This gives us a clear picture of the <EM>boundaries</EM> of each individual Receiver system.</P><P>But can we still have multiple Providers?</P><P>Yes and no… In fact, it is possible to have different technical backend systems providing a specific Integration Service – i.e. for order taking, we can have two or more SAP S/4HANA backend systems, each servicing different countries or regions, where routing is done in CPI or API-M; but if this is the same Integration Service provided by the same application (even though there are two or more technical systems behind it) – in this case,&nbsp;we will consider SAP S/4HANA as one Provider for the order-taking Integration Service.</P><H4 id="toc-hId--613459696"><EM>Payloads</EM></H4><P>We need sample payloads, but we also need to ensure all necessary Master Data and Organizational Data exists and is appropriately configured – i.e. if we are creating orders, we need to have existing <EM>SoldToParty</EM>, <EM>Product</EM>, <EM>OrderType</EM>, <EM>SalesOrganization</EM>, <EM>PricingCondition</EM> (or <EM>Promo</EM>) etc.</P><P>But it’s not only the payload itself – we also need to understand whether we need to set specific attributes with the API call (i.e. within the http header). This also needs to be defined upfront.</P><P>In some cases, some attributes (i.e. in the http body) trigger specific processing in the Receiver application – for order taking, <EM>OrderType</EM> can invoke different standard/custom function modules/processing in SAP S/4HANA. All of this needs to be defined upfront.</P><H3 id="toc-hId--516570194">Environment Setup</H3><P>Most test environments are not sized like productive environments, and for i.e. <STRONG>Functional Testing</STRONG> this is perfectly fine, but for <STRONG>Performance Testing</STRONG> this may give a considerably wrong picture. 
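Stepping back to the payload preparation above: the mix of existing Master Data and varied item counts can be sketched as a small generator. All field names and sample values below are illustrative assumptions, not a real API schema:

```python
# Sketch: generating varied order payloads for a load script.
# Field names and sample master data below are illustrative, not a real schema.
import json
import random

SOLD_TO_PARTIES = ["1000001", "1000002"]      # must exist as Master Data
PRODUCTS = ["MAT-001", "MAT-002", "MAT-003"]  # must exist as Master Data


def make_order_payload(rng: random.Random, n_items: int) -> str:
    """Build one JSON order payload with the requested number of items."""
    items = [
        {"Product": rng.choice(PRODUCTS), "Quantity": rng.randint(1, 10)}
        for _ in range(n_items)
    ]
    order = {
        "SoldToParty": rng.choice(SOLD_TO_PARTIES),
        "OrderType": "OR",            # default order type only
        "SalesOrganization": "1010",
        "Items": items,
    }
    return json.dumps(order)


rng = random.Random(42)               # seeded, so the payload set is reproducible
# Mostly average-sized orders (10 items), plus a small and a large outlier:
payloads = [make_order_payload(rng, n) for n in [10] * 8 + [3, 30]]
```

The item-count mix can then be tuned to whatever distribution the workload model calls for.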
The general recommendation is to run <STRONG>Performance Testing</STRONG> in a test environment (or QA environment) that closely mirrors production, including all IT components and software versions.</P><P>How do we do this?</P><P>This all depends on the applications and IT components we need to configure. In some cases it might be rather easy, while in others it might be more challenging. For SAP Integration Suite (CPI or API-M), a very common scenario is to have separate tenants but with the same/comparable configuration. For Azure Integration Services (i.e. Service Bus, Functions), it is fairly easy to <EM>temporarily </EM>change the licensing model of the test environment/subscription and assign it the same <EM>power</EM> as the productive environment/subscription. The same goes for most SaaS applications in general – it’s all about <EM>temporarily</EM> configuring the subscription, and if we keep the time window for <STRONG>Performance Testing</STRONG> rather narrow, this will not significantly increase the subscription costs.</P><P>But in some cases, this may not be so simple. In the case of SAP Advanced Event Mesh, it all depends on the deployment strategy of the broker:</P><UL><LI>If we use the same tenant for both non-productive (test) and productive environments (separated by i.e. 
<STRONG>Application Domains</STRONG> hosting productive and non-productive <STRONG>Applications</STRONG>, respectively), then no action is needed for conducting <STRONG>Performance Testing</STRONG>, since the test and productive environments are the same.</LI><LI>If we use separate tenants for the test and productive environments with the same T-shirt size (ideal, but a more expensive deployment approach), then again, there is no issue in proceeding with <STRONG>Performance Testing</STRONG>.</LI><LI>If we use separate tenants for the test and productive environments but with different T-shirt sizes (a more common scenario), we can <EM>temporarily</EM> deploy the non-productive Event flow in the productive environment but connect it <EM>temporarily</EM> to the test <STRONG>Applications</STRONG> (Publisher/Subscriber) endpoints. A word of caution here though – if we use the SAP Event Add-on (i.e. on our SAP S/4HANA test and production environments), please make sure you follow the specific licensing guidelines to understand in which scenarios it may impact the licensing costs (please check this <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/cheaper-than-you-think-the-commercial-model-of-the-event-add-on-for-erp/ba-p/14265748" target="_blank">article</A> from <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/7039">@KStrothmann</a>[11]).</LI><LI>Finally, if we do use separate tenants for the test and productive environments, but with different T-shirt sizes, and we do not want to do any <EM>temporary</EM> deployments in the production environment – it is always possible to <EM>temporarily</EM> change the T-shirt size of our test environment to match the productive environment. 
Here we should be very careful about whether it impacts some micro integrations we have, especially when downgrading back after the test is done.</LI></UL><P>While for most IPaaS or SaaS applications and IT components there is a way to (at least) <EM>temporarily</EM> configure the test environment to match the productive one, in some cases it might simply not be feasible – especially for on-prem system deployments.</P><P>What do we do?</P><P>There is no golden rule – but there are some <EM>workaround steps</EM> we could take.</P><TABLE><TBODY><TR><TD><P><STRONG>#</STRONG></P></TD><TD><P><STRONG>Step</STRONG></P></TD><TD><P><STRONG>Example</STRONG></P></TD></TR><TR><TD><P>1.</P></TD><TD><P>Measure system performance</P></TD><TD><P>Let’s measure the performance of similar services in the test and productive environments – i.e. for an SAP environment use the <STRONG>Workload Monitor</STRONG> (ST03/ST03N) to measure the response time distribution for various task types (like dialog, background).</P></TD></TR><TR><TD><P>2.</P></TD><TD><P>Measure program runtime performance (optional)</P></TD><TD><P>Optionally, run a detailed analysis of specific programs – i.e. for an SAP environment use <STRONG>Runtime Analysis</STRONG> (SE30/SAT) for ABAP programs to measure the execution time of individual statements, function modules, and database calls.</P></TD></TR><TR><TD><P>3.</P></TD><TD><P>Measure database performance (optional)</P></TD><TD><P>Optionally, run a trace on specific performance-related SQL activities – i.e. in an SAP environment use <STRONG>Performance Analysis</STRONG> (ST05) to measure where time is spent and on which activities.</P></TD></TR><TR><TD><P>4.</P></TD><TD><P>Calculate productive vs test environment processing power</P></TD><TD><P>Use all measurements to calculate the realistic processing power of your test and production environments – i.e. 
<STRONG>Workload Monitor</STRONG>, <STRONG>Runtime Analysis</STRONG> and <STRONG>Performance Analysis</STRONG> will give somewhat different values showing that the productive system is faster.</P><P>Example:<BR /><STRONG>Workload Monitor</STRONG><SPAN> response time in the productive environment is 1.4 times faster than in test;<BR /></SPAN><STRONG>Runtime Analysis </STRONG><SPAN>the program executes 1.6 times faster in the productive environment than in test;<BR /></SPAN><STRONG>Performance Analysis</STRONG><SPAN> the database operation performs 1.1 times faster in the productive environment than in test;<BR /></SPAN>Extrapolate using weight factors (this is just an example): 0.6*1.4+0.2*1.6+0.2*1.1=1.38;<BR /><SPAN>The final calculation says the productive environment is 1.38 times more performant than the test environment.</SPAN></P></TD></TR><TR><TD><P>5.</P></TD><TD><P>Extrapolate and adjust test results on the test environment</P></TD><TD><P>Extrapolate all <STRONG>Performance Testing</STRONG> results obtained in the test environment with the calculated factors.</P><P>Example:<BR /><SPAN>If the average response time of a specific S/4HANA-hosted Integration Service is 4s in the test environment, we expect it to be 1.38 times more performant in the productive environment, i.e. an expected average response time in the productive environment of 2.9s.</SPAN></P></TD></TR></TBODY></TABLE><H3 id="toc-hId--713083699">Script Development</H3><P>Now we need to start using specific testing tools like JMeter, Azure Load Testing (also using JMeter scripts) or LoadRunner. 
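As a side note, the weighted extrapolation from steps 4 and 5 above can be captured in a small helper. The weight factors (0.6/0.2/0.2) are the example values from the table and would need to be calibrated per landscape:

```python
# Sketch: extrapolate test-environment results to production, following the
# weighted-factor example above. The weights are assumptions to be calibrated.

def perf_factor(ratios: dict[str, float], weights: dict[str, float]) -> float:
    """Combine prod-vs-test speed ratios into one extrapolation factor."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * ratios[name] for name in ratios)


# Prod-vs-test ratios from Workload Monitor, Runtime Analysis, Performance Analysis:
ratios = {"workload": 1.4, "runtime": 1.6, "database": 1.1}
weights = {"workload": 0.6, "runtime": 0.2, "database": 0.2}

factor = perf_factor(ratios, weights)       # 0.6*1.4 + 0.2*1.6 + 0.2*1.1 = 1.38
expected_prod_response = 4.0 / factor       # 4s measured in test -> ~2.9s expected in prod
print(round(factor, 2), round(expected_prod_response, 1))
```

The same helper can then be applied to every measured metric (response time, throughput) before comparing against the SLAs.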
The goal is to create scripts that simulate the specific actions and interactions we want to test.</P><P>Let’s go through the inputs we have collected:</P><TABLE><TBODY><TR><TD><P><STRONG>#</STRONG></P></TD><TD><P><STRONG>Input</STRONG></P></TD><TD><P><STRONG>Example</STRONG></P></TD></TR><TR><TD><P>1.</P></TD><TD><P>SLAs</P></TD><TD><P>For the order taking API:<BR /><SPAN>Average order response time is up to 4s;<BR /></SPAN><SPAN>99% of orders are created within up to 8s;<BR /></SPAN><SPAN>This is valid for any </SPAN><EM>Customer</EM><SPAN>, any </SPAN><EM>SalesOrganization</EM><SPAN>, the default </SPAN><EM>OrderType</EM><SPAN>, standard on-invoice </SPAN><EM>PricingCondition</EM><SPAN>;</SPAN></P></TD></TR><TR><TD><P>2.</P></TD><TD><P>Volumes</P></TD><TD><P>Annual average 160 000 orders per working day;<BR /><SPAN>Maximum (peak) season 270 000 orders per working day;<BR /></SPAN><SPAN>Expected annual growth 10%;<BR /></SPAN><SPAN>The average order has 10 items, and orders normally do not contain more than 50 items;</SPAN></P></TD></TR><TR><TD><P>3.</P></TD><TD><P>Business patterns</P></TD><TD><P>80% of orders are created during extended working hours from 10:00-22:00, of which half are created in the evening 19:00-21:00</P></TD></TR><TR><TD><P>4.</P></TD><TD><P>Systems under test</P></TD><TD><P>S/4HANA API_SALES_ORDER_SRV <EM>Sales Order (A2X), single cluster, no policy routing</EM>;<BR /><SPAN>API-M </SPAN><EM>SalesOrder</EM><SPAN> with policies, excluding CSRF token;</SPAN></P></TD></TR><TR><TD><P>5.</P></TD><TD><P>Users</P></TD><TD><P>No business user. 
We are testing the integration only;</P></TD></TR></TBODY></TABLE><P>Based on these inputs we will build an appropriate <STRONG>Load Testing</STRONG> script:</P><TABLE><TBODY><TR><TD><P><STRONG>#</STRONG></P></TD><TD><P><STRONG>Script </STRONG></P></TD><TD><P><STRONG>Example</STRONG></P></TD></TR><TR><TD><P>1.</P></TD><TD><P>Target<BR /><STRONG>Test Results</STRONG></P></TD><TD><P>Response time percentile 50 should be below 4s;<BR /><SPAN>Response time percentile 99 should be below 8s;</SPAN></P></TD></TR><TR><TD><P>2.</P></TD><TD><P>Capturing <STRONG>Test Results</STRONG></P></TD><TD><P>Catch the information about sent messages from the Sender side (i.e. the testing tool): the <STRONG>number of messages</STRONG> sent, <STRONG>start time </STRONG>(sending) and <STRONG>stop time</STRONG> (sending);</P><P>In the case of an <STRONG>Async API</STRONG>, catch the overall status in the Receiver system logs of successfully received/processed messages: the <STRONG>number of messages</STRONG> received, the overall <STRONG>timing from start to end</STRONG>, and catch the response status as well if response/acknowledgement is enabled;</P></TD></TR><TR><TD><P>3.</P></TD><TD><P>Number of Threads</P></TD><TD><P>We account for the maximum daily volume + 5 years of growth + a 50% margin:<BR />270 000*1.1*1.1*1.1*1.1*1.1*1.5 = 652 257 orders per peak day;<BR />But this volume is not evenly distributed through 24h; 40% falls within 2h only:<BR /><SPAN>652 257*0.4 / 2 = 130 451 orders in the peak hour;<BR /></SPAN><SPAN>Or 130 451 / 3600 = 36 orders per second;</SPAN></P><P><SPAN>As we have already included a safety margin, we are okay to set:<BR /></SPAN>Number of Threads = 36;</P></TD></TR><TR><TD><P>4.</P></TD><TD><P>Ramp-up period</P></TD><TD><P>We could use 4s as this is the desired average response time, but we will use the default 1s for all tests;</P></TD></TR><TR><TD><P>5.</P></TD><TD><P>Loop Count</P></TD><TD><P>For <STRONG>Load Testing</STRONG> there is no need to loop payloads more than 20-50 times; 
&nbsp;</P></TD></TR><TR><TD><P>6.</P></TD><TD><P>Payloads</P></TD><TD><P>Create payloads:<BR /><SPAN>ideally using different </SPAN><EM>Customer(s)</EM><SPAN>,<BR /></SPAN>ideally covering all (or a majority) of the <EM>SalesOrganization(s)</EM>,<BR />where each will use the default <EM>OrderType</EM>,<BR />where each will use only standard <EM>PricingCondition(s)</EM>;</P><P>Distribute the number of items in payloads:<BR />80% average or around-average numbers i.e. 10 items,<BR />5% lower boundaries i.e. between 1-5 items,<BR />10% upper boundaries i.e. between 15-35 items,<BR />5% upper extreme i.e. between 40-50 items;</P><P>In total we may have 20-50 payloads or more;</P></TD></TR><TR><TD><P>7.</P></TD><TD><P>Endpoint</P></TD><TD><P>API-M <EM>SalesOrder</EM> endpoint</P></TD></TR><TR><TD><P>8.</P></TD><TD><P>Users</P></TD><TD><P>No business user;<BR /><SPAN>JMeter will authenticate and obtain a key as a client application, through a VPN tunnel;</SPAN></P></TD></TR></TBODY></TABLE><P>What does this look like in practice, and what does it mean?</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 6. JMeter configuration example" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359952i79E438A87EFD2E81/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 6. JMeter configuration example.png" alt="Figure 6. JMeter configuration example" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 6. JMeter configuration example</span></span></P><P>In this example, I use JMeter[12] as the testing tool of choice:</P><OL><LI>Number of Threads simulates the number of concurrent users or concurrent requests at the same time. Obviously, the higher the number, the bigger the load/spike.</LI><LI>Ramp-up period simulates how often we send the next batch of requests. 
The default value of 1s already produces a very steep ramp-up.</LI><LI>Loop Count will randomly take the payloads we have prepared and loop them the indicated number of times. Obviously, the higher the number, the longer the soak is.</LI><LI>Payloads are sample messages, and generally they will never be the same. This depends on the specific API and business process, but in most cases, the bigger the payload (more items, or more segments), the higher the data volume.</LI></OL><P>So, for <EM>combined</EM> <STRONG>Scalability Testing</STRONG>, we may adjust the script and just increase Number of Threads (to simulate spikes), increase the percentage of payloads with an extreme number of items (to simulate data volumes), and increase Loop Count (to simulate soak).</P><P>However, for <EM>combined</EM> <STRONG>Stress Testing</STRONG> we should gradually increase Number of Threads, while keeping all other parameters steady (as in <STRONG>Load Testing</STRONG>) – to see when it breaks (errors, what kind of errors). The second test would be to gradually increase the number of items in the payload, while keeping all other parameters steady (as in <STRONG>Load Testing</STRONG>) – to see when it breaks (errors, what kind of errors). Further investigation of errors and system behavior is needed to verify integration reliability, but this will depend very much on the specific integration flow – i.e.
<STRONG>Async</STRONG> flows should normally be decoupled with queues and built-in retry resilience, while <STRONG>Sync</STRONG> flows normally receive an error response, and the application/user decides the next action.</P><H3 id="toc-hId--909597204">Test Execution</H3><P>We have designed and created scripts for <STRONG>Load Testing</STRONG>, <EM>combined</EM> <STRONG>Scalability Testing</STRONG> and <EM>combined</EM> <STRONG>Stress Testing</STRONG>.</P><P>As we are simulating real-time scenarios, tests should be performed under realistic conditions:</P><TABLE><TBODY><TR><TD><P><STRONG>#</STRONG></P></TD><TD><P><STRONG>Condition </STRONG></P></TD><TD><P><STRONG>Example</STRONG></P></TD></TR><TR><TD><P>1.</P></TD><TD><P>Applications &nbsp;</P></TD><TD><P>No other users should execute the same integration flow that is under test;<BR /><SPAN>All other (background) jobs and operations should stay as-is (keep them running as normal);</SPAN></P></TD></TR><TR><TD><P>2.</P></TD><TD><P>IT components</P></TD><TD><P>No other tests should load the IT components involved in the integration flow under test;<BR /><SPAN>All other (background) jobs and operations should stay as-is (keep them running as normal);</SPAN></P></TD></TR><TR><TD><P>3.</P></TD><TD><P>Execution timetable</P></TD><TD><P>Tests should respect the business pattern of operations.<BR /><SPAN>Why?
Because during different days, or different hours within the day, there might be other (background) jobs or operations impacting overall system performance, and we want to simulate all operations as realistically as possible.</SPAN></P><P>We have three distinct business patterns, and we should run all tests during each business pattern:<BR /><SPAN>Business day, non-working hours 22:00-10:00 the next morning,<BR /></SPAN>Business day, normal working hours 10:00-19:00 or 21:00-22:00,<BR />Business day, peak working hours 19:00-21:00;</P></TD></TR></TBODY></TABLE><H3 id="toc-hId--1106110709">Test Results</H3><P>After executing all tests, we have to evaluate the results appropriately:</P><TABLE><TBODY><TR><TD><P><STRONG>#</STRONG></P></TD><TD><P><STRONG>Evaluation</STRONG></P></TD><TD><P><STRONG>Example</STRONG></P></TD></TR><TR><TD><P>1.</P></TD><TD><P><STRONG>Load Testing</STRONG></P></TD><TD><P>As per the SLAs, evaluate the actual percentile 50 and 99 for all <STRONG>Test Executions</STRONG> we did (and we have at least 3 runs, one for each business pattern);</P></TD></TR><TR><TD><P>2.</P></TD><TD><P><STRONG>Scalability Testing</STRONG></P></TD><TD><P>Analyze percentile 50 and 99 for all <STRONG>Test Executions</STRONG> we did (and there could be many runs);<BR /><SPAN>The observations will be used to define the behavior pattern of the overall integration flow, i.e.:<BR /></SPAN><SPAN>if we increase Number of Threads by 100%, response time will increase by 40%,<BR /></SPAN><SPAN>if we increase order size by 50%, response time will increase by 30%,<BR /></SPAN>if we soak for 2h, aggregated response time will increase by 10%;</P></TD></TR><TR><TD><P>3.</P></TD><TD><P><STRONG>Stress Testing</STRONG></P></TD><TD><P>Monitor the <STRONG>Test Executions</STRONG> we did (and there could be many runs);<BR /><SPAN>This will help us define the behavior pattern of the overall integration flow;</SPAN></P><P>The observations will be used to define the boundaries of the integration flow, i.e.:<BR /><SPAN>system
cannot sustain more than 1000 concurrent requests of average size,<BR /></SPAN><SPAN>or the system cannot sustain more than 150 items in the order;</SPAN></P></TD></TR></TBODY></TABLE><P>For percentiles, we can use the aggregated <STRONG>Test Results</STRONG> report.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure 7. Percentiles example" style="width: 705px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359951i268D505BDA2CDA96/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure 7. Percentiles example .jpg" alt="Figure 7. Percentiles example" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Figure 7. Percentiles example</span></span></P><P>This example graph shows that the average response time, or percentile 50, is around 3s, while percentile 99 is around 4.7s.</P><P>JMeter provides a number of ways to calculate percentiles from the aggregate reports, or we may simply use one of the add-on graph reports and include it in the Test Plan[13].</P><H2 id="toc-hId--1009221207">Next steps…</H2><P>Remediations?</P><P>Of course, most likely your first round of <STRONG>Performance Testing</STRONG> will not provide fully satisfactory results. The next steps mostly consist of identifying optimization potential, working on it, and then re-running the tests. The good thing is that all the scripts are already there, so there is no need to redo everything from scratch.</P><P>However, if (for whatever reason) SLAs are re-negotiated and changed, the scripts will also have to be adjusted.</P><H2 id="toc-hId--1205734712">Conclusions</H2><P>Why am I writing this article?</P><P>While most of the <STRONG>Project Management</STRONG> and <STRONG>Test Management</STRONG> routines are highly regulated, project teams might be (often?)
facing a lack of specific guidelines on how to test integrations, especially their performance – as integrations are, let's be honest, a rather specific area…</P><P>Well, this is at least my view…</P><P>In this article, I have used examples with <a href="https://community.sap.com/t5/c-khhcw49343/SAP+S%25252F4HANA/pd-p/73554900100800000266" class="lia-product-mention" data-product="799-1">SAP S/4HANA</a>, <a href="https://community.sap.com/t5/c-khhcw49343/SAP+Integration+Suite/pd-p/73554900100800003241" class="lia-product-mention" data-product="23-1">SAP Integration Suite</a>&nbsp;(CPI and API-M),&nbsp;SAP Advanced Event Mesh and JMeter – but the principles are basically the same, whether we use SAP or non-SAP applications and IT components.&nbsp;</P><P>Anyway, as already indicated – there is no golden rule – this is just one possible approach to organizing <STRONG>Performance Testing</STRONG> for integrations. Of course, this is not a rule book, and things should be adjusted to the specific needs.
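As a quick sanity check, the sizing arithmetic from the Number of Threads step above can be reproduced in a few lines of Python; the figures are the example values from the Load Testing script table, not universal constants:

```python
# Example sizing from the Load Testing script table:
# 270 000 orders/day today, 10% yearly growth over 5 years,
# a 50% safety margin, and 40% of daily volume inside a 2-hour peak.
daily_today = 270_000
peak_day = daily_today * 1.1**5 * 1.5   # orders per peak day (~652 257)
peak_hour = peak_day * 0.4 / 2          # orders in the peak hour (~130 451)
per_second = peak_hour / 3600           # orders per second (~36)

print(round(peak_day), round(peak_hour), round(per_second))
```

Since the safety margin is already baked into the peak-day figure, the resulting ~36 orders per second maps directly to Number of Threads = 36 in the JMeter plan.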
As always, this is just a potential guideline – nothing is <EM>carved in stone</EM>…</P><H2 id="toc-hId--1402248217">Acknowledgment</H2><P>*) Intro photo by <A href="https://unsplash.com/@adigold1?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash" target="_blank" rel="noopener nofollow noreferrer">Adi Goldstein</A> on <A href="https://unsplash.com/photos/teal-led-panel-EUsVwEOsblE?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash" target="_blank" rel="noopener nofollow noreferrer">Unsplash</A></P><P>**) This article uses <A href="https://wiki.scn.sap.com/wiki/x/shl7H" target="_blank" rel="noopener noreferrer">SAP Business Technology Platform Solution Diagrams &amp; Icons</A> as per <A href="https://d.dam.sap.com/a/nXJJmw" target="_blank" rel="noopener noreferrer">SAP Terms of Use</A> governing the use of these SAP Materials (please note, a newer version of the Solution Diagrams &amp; Icons, as well as the Terms of Use, might be in place after the publication of this article).</P><P>More guidelines on Solution Diagrams &amp; Icons can be found in this <A href="https://blogs.sap.com/2018/01/05/be-visual-use-official-icons-and-samples-for-sap-cloud-platform-solution-diagrams/" target="_blank" rel="noopener noreferrer">article</A> by <A href="https://people.sap.com/bertram.ganz" target="_blank" rel="noopener noreferrer">Bertram Ganz</A>.</P><H2 id="toc-hId--1598761722">References</H2><P>[1] Queue IT: <A href="https://queue-it.com/blog/types-of-performance-testing/" target="_blank" rel="noopener nofollow noreferrer">https://queue-it.com/blog/types-of-performance-testing/</A></P><P>[2] Microsoft Learn: <A href="https://microsoft.github.io/code-with-engineering-playbook/automated-testing/performance-testing/" target="_blank" rel="noopener nofollow noreferrer">https://microsoft.github.io/code-with-engineering-playbook/automated-testing/performance-testing/</A></P><P>[3] Microsoft Learn: <A
href="https://learn.microsoft.com/en-us/azure/well-architected/performance-efficiency/performance-test" target="_blank" rel="noopener nofollow noreferrer">https://learn.microsoft.com/en-us/azure/well-architected/performance-efficiency/performance-test</A></P><P>[4] JMeter: <A href="https://www.f22labs.com/blogs/mastering-performance-testing-with-jmeter-a-comprehensive-guide/" target="_blank" rel="noopener nofollow noreferrer">https://www.f22labs.com/blogs/mastering-performance-testing-with-jmeter-a-comprehensive-guide/</A></P><P>[5] IBM: <A href="https://www.ibm.com/think/topics/performance-testing" target="_blank" rel="noopener nofollow noreferrer">https://www.ibm.com/think/topics/performance-testing</A></P><P>[6] Spiral model: <A href="https://en.wikipedia.org/wiki/Spiral_model" target="_blank" rel="noopener nofollow noreferrer">https://en.wikipedia.org/wiki/Spiral_model</A></P><P>[7] How to build an Integration Architecture for the Intelligent Enterprise: <A href="https://blogs.sap.com/2023/04/09/how-to-build-an-integration-architecture-for-the-intelligent-enterprise/" target="_blank" rel="noopener noreferrer">Part 1</A></P><P>[8] How to build an Integration Architecture for the Intelligent Enterprise: <A href="https://blogs.sap.com/2023/04/27/part-2-how-to-build-an-integration-architecture-for-the-intelligent-enterprise/" target="_blank" rel="noopener noreferrer">Part 2</A></P><P>[9] SAP AEM <STRONG>Async Request-Reply</STRONG>: <A href="https://community.sap.com/t5/technology-blog-posts-by-members/implement-request-reply-integration-pattern-with-sap-advanced-event-mesh-in/ba-p/14074836" target="_blank">https://community.sap.com/t5/technology-blog-posts-by-members/implement-request-reply-integration-pattern-with-sap-advanced-event-mesh-in/ba-p/14074836</A></P><P>[10] Solace PubSub+ <STRONG>Async Request-Reply</STRONG>: <A href="https://tutorials.solace.dev/c/request-reply/" target="_blank" rel="noopener nofollow 
noreferrer">https://tutorials.solace.dev/c/request-reply/</A></P><P>[11] SAP Event Add-on: <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/cheaper-than-you-think-the-commercial-model-of-the-event-add-on-for-erp/ba-p/14265748" target="_blank">https://community.sap.com/t5/technology-blog-posts-by-sap/cheaper-than-you-think-the-commercial-model-of-the-event-add-on-for-erp/ba-p/14265748</A></P><P>[12] Apache JMeter: <A href="https://jmeter.apache.org/" target="_blank" rel="noopener nofollow noreferrer">https://jmeter.apache.org/</A></P><P>[13] Apache JMeter Test Plan: <A href="https://jmeter.apache.org/usermanual/build-test-plan.html" target="_blank" rel="noopener nofollow noreferrer">https://jmeter.apache.org/usermanual/build-test-plan.html</A></P><P>&nbsp;</P> 2026-01-09T22:39:41.453000+01:00 https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/idocs-as-events-how-to-simulate-and-test-them-at-enterprise-scale/ba-p/14308890 IDOCs as Events: How to Simulate and Test Them at Enterprise Scale 2026-01-16T16:20:44.064000+01:00 MichalKrawczyk https://community.sap.com/t5/user/viewprofilepage/user-id/45785 <P><SPAN>The integration landscape within the SAP ecosystem has undergone a significant shift in Q4 2025. For years, the industry narrative suggested that IDOCs were relics of the past, destined to be replaced entirely by SOAP Web Services, and later by OData and REST APIs. However, recent updates have turned the tables.
Not only are IDOCs officially "safe" for the Clean Core era, but they have also evolved into powerful components of modern Event-Driven Architecture (EDA).</SPAN></P><P><SPAN>In this post, I will explore why IDOCs are making a comeback and, more importantly, how you can simulate and test these scenarios at an enterprise scale.</SPAN></P><H2 id="toc-hId-1787859895"><STRONG>The 2025 Turning Point: IDOCs are "Clean Core" Safe</STRONG></H2><P><SPAN>The first major update involves the </SPAN><STRONG>SAP Clean Core guidance (referencing OSS Note 3578329)</STRONG><SPAN>. As of late 2025, IDOCs are now classified as </SPAN><STRONG>Clean Core Extensibility Level B</STRONG><SPAN> as per my blog:&nbsp;<A class="" href="https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/idocs-are-still-safe-for-sap-s-4hana-sap-clean-core-extensibility-level-b/ba-p/14225439" target="_blank">IDOCs are Still Safe for SAP S/4HANA - SAP Clean Core Extensibility Level B</A>&nbsp;</SPAN></P><P><SPAN>What does this mean for your S/4HANA journey?</SPAN></P><UL><LI><STRONG>Existing Investments are Valid:</STRONG><SPAN> If you have stable, working IDOC integrations, you don’t need to scrap them to be "Clean Core" compliant.</SPAN></LI><LI><STRONG>Pragmatism over Dogma:</STRONG><SPAN> While SAP still recommends APIs and Events for greenfield developments, continuing with IDOCs is a pragmatic choice for stable processes, provided the IDOC type isn't deprecated in the S/4HANA Simplification List.</SPAN></LI></UL><H2 id="toc-hId-1591346390"><STRONG>IDOCs Meet the Advanced Event Mesh (AEM)</STRONG></H2><P><SPAN>The second breakthrough is the ability to "event-enable" classic IDOCs using the </SPAN><STRONG>SAP Event Add-on by ASAPIO</STRONG><SPAN> as per this blog: <A class="" href="https://community.sap.com/t5/technology-blog-posts-by-sap/idoc-with-integration-suite-advanced-event-mesh-using-the-event-add-on/ba-p/14290095" target="_blank">IDoc with Integration Suite, Advanced 
Event Mesh using the Event Add-On</A></SPAN></P><P><SPAN>This allows organizations to bridge the gap between legacy reliability and modern real-time agility. By using ASAPIO with SAP Integration Suite, Advanced Event Mesh (AEM), you can:</SPAN></P><UL><LI><STRONG>Go No-Code:</STRONG><SPAN> Send and receive IDOCs through AEM in a near real-time, code-less manner.</SPAN></LI><LI><STRONG>Payload Transformation:</STRONG><SPAN> Automatically convert classic IDOC formats to </SPAN><STRONG>JSON</STRONG><SPAN> on-the-fly for cloud consumption, and convert them back for inbound processing.</SPAN></LI><LI><STRONG>Triggering:</STRONG><SPAN> Link business events in ERP so that a transaction triggers an IDOC extraction and immediate publication to the event mesh.</SPAN></LI></UL><H2 id="toc-hId-1394832885"><STRONG>The Testing Challenge: Scalability and Realism</STRONG></H2><P><SPAN>With IDOCs acting as high-speed events, the question arises: </SPAN><STRONG>How do we test these scenarios at scale?</STRONG><SPAN> Standard manual testing isn't enough when you need to validate complex, high-volume event flows.</SPAN></P><H3 id="toc-hId-1327402099"><STRONG>The Traditional Approach: WE19</STRONG></H3><P><SPAN>The standard tool for simulating IDOCs is transaction </SPAN><STRONG>WE19</STRONG><SPAN> (the IDOC Test Tool). 
To use it, you find an existing IDOC, use it as a template, modify the data, and trigger it again.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure_1_We19.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362088iAF524A09EC51F6BE/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure_1_We19.png" alt="Figure_1_We19.png" /></span></P><P><SPAN>Figure 1 - We19 for outbound processing&nbsp;</SPAN></P><P><STRONG>The Limitations:</STRONG></P><UL><LI><STRONG>Data Sourcing:</STRONG><SPAN> Finding the "perfect" IDOC with specific business content is time-consuming (there is no easy way to search through them).&nbsp;</SPAN></LI><LI><STRONG>Lack of Scale:</STRONG><SPAN> You cannot easily simulate thousands of IDOCs with variable data (different partners, dates, or materials) manually.</SPAN></LI><LI><STRONG>Isolation:</STRONG><SPAN> WE19 tests the IDOC in a vacuum, not the end-to-end event flow.</SPAN></LI></UL><H2 id="toc-hId-1001805875"><STRONG>Enterprise-Scale Simulation with Int4 Suite</STRONG></H2><P><SPAN>To truly simulate IDOCs as events for enterprise testing, you need automation. </SPAN><STRONG>Int4 Suite</STRONG><SPAN> provides three sophisticated ways to handle this:</SPAN></P><H3 id="toc-hId-934375089"><STRONG>A) Simulating Incoming to Trigger Outgoing</STRONG></H3><P><SPAN>In many business processes, an incoming IDOC triggers an outgoing one (e.g., an incoming Sales Order creates an outgoing Order Confirmation).
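The general idea behind such simulation (reuse a captured message as a template and inject fresh values on every run) can be sketched generically in Python; the flat IDoc-like layout and the placeholder names are illustrative assumptions, not any tool's actual variable syntax:

```python
import datetime
import itertools

# Hypothetical IDoc-like template; {PO_NUMBER} and {DOC_DATE} are
# illustrative placeholders, not a real IDoc segment structure.
TEMPLATE = "E1EDK01|{PO_NUMBER}|{DOC_DATE}|USD"

_seq = itertools.count(1)

def render(template: str) -> str:
    """Fill the template with a unique PO number and today's date."""
    return template.format(
        PO_NUMBER=f"PO-{next(_seq):06d}",
        DOC_DATE=datetime.date.today().strftime("%Y%m%d"),
    )

# Each call produces a distinct payload, so repeated test runs do not
# collide on duplicate document numbers.
first = render(TEMPLATE)
second = render(TEMPLATE)
```

This captures why variable injection matters for repeatable tests: the business logic under test always receives a document that looks new.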
With Int4 APITester, you can simulate the incoming message and use its </SPAN><STRONG>variable concept</STRONG><SPAN> to inject dynamic values (like unique PO numbers or current dates).</SPAN></P><P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure_3_variables.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362089i4BFCC5CFF2996D9C/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure_3_variables.png" alt="Figure_3_variables.png" /></span></SPAN></P><P><SPAN>Figure 3 - variables so each time the IDOC is triggered the content can change dynamically&nbsp;</SPAN></P><P><SPAN>&nbsp;This ensures the resulting "Event" (the outgoing IDOC) is generated by the actual business logic of the system.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure_2_outbound_inbound.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362090i3135EC1B79CF1A6E/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure_2_outbound_inbound.png" alt="Figure_2_outbound_inbound.png" /></span></P><P><SPAN>Figure 2 - IDOCs as outbound from simulating incoming documents&nbsp;</SPAN></P><H3 id="toc-hId-737861584"><STRONG>B) Triggering via Message Control (NACE/NAST)</STRONG></H3><P><SPAN>Int4 Suite can trigger output directly via SAP Message Control. 
This simulates how the system works in production (e.g., printing or sending an EDI upon saving a document).</SPAN></P><UL><LI><STRONG>How it works:</STRONG><SPAN> By using the Int4 Suite Knowledge Center guidance, you can trigger the output by calling the necessary function modules or reports (like RSNAST00) to process specific output types for a range of documents, ensuring the "Event" is fired exactly as it would be by a business user.</SPAN></LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure_4_message_control.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362091iF5C1539ED585C98F/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure_4_message_control.png" alt="Figure_4_message_control.png" /></span></P><P><SPAN>Figure 4 - How Int4 Suite can trigger IDOCs with Message Control&nbsp;</SPAN></P><H3 id="toc-hId-541348079"><STRONG>C) AI-Driven Simulation and WE19</STRONG></H3><P><SPAN>The future of testing involves </SPAN><STRONG>AI Chatbots</STRONG><SPAN>. Instead of manually navigating WE19, Int4 Suite allows you to use an AI interface to perform selections and data changes.
You can instruct the chatbot to "Generate 50 Sales Order IDOCs with varying quantities or different identifiers" and the suite handles the back-end execution in WE19, bridging the gap between conversational AI and legacy SAP transactions.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Figure_5_ai_chat.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362092i90F044B3A2D35FA1/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Figure_5_ai_chat.png" alt="Figure_5_ai_chat.png" /></span></P><P><SPAN>Figure 5 - Searching for IDOCs and resending them with Int4 Suite Chatbot&nbsp;</SPAN></P><H2 id="toc-hId-215751855"><STRONG>Validating SAP AEM Flows</STRONG></H2><P><SPAN>Once the IDOC is triggered as an event, you must ensure it reaches the </SPAN><STRONG>SAP Advanced Event Mesh</STRONG><SPAN> correctly. Int4 Suite allows you to validate these flows by:</SPAN></P><OL><LI><STRONG>Preparation:</STRONG><SPAN> Configuring the connection between Int4 Suite and SAP AEM to allow the tester to "listen" to specific queues or topics.</SPAN></LI><LI><STRONG>Message Selector:</STRONG><SPAN> Using the </SPAN><A href="https://help.int4.com/int4-suite-knowledge-center-library/3.13/int4-suite-sap-advanced-event-mesh-message-selecto" target="_blank" rel="noopener nofollow noreferrer"><SPAN>Int4 Message Selector</SPAN></A><SPAN>, you can pull the actual messages published to AEM and compare them against a "golden" expected payload. 
This validates that the conversion (e.g., IDOC to JSON) and the routing happened exactly as designed.</SPAN></LI></OL><H3 id="toc-hId-148321069"><STRONG>Watch it in Action</STRONG></H3><P><SPAN>To see how Int4 Suite handles SAP AEM message selection and validation, check out this technical walkthrough:</SPAN></P><P><A href="https://www.youtube.com/watch?v=6SCqcWpSVuE" target="_blank" rel="noopener nofollow noreferrer"><STRONG>Watch: SAP AEM Testing with Int4 Suite</STRONG></A></P><H2 id="toc-hId-169979202"><STRONG>Summary and Next Steps</STRONG></H2><P><SPAN>The combination of </SPAN><STRONG>Clean Core Level B</STRONG><SPAN> status and </SPAN><STRONG>ASAPIO/AEM integration</STRONG><SPAN> has given IDOCs a second life as modern business events. However, to move these scenarios into production with confidence, your testing strategy must evolve from manual WE19 clicks to automated, variable-driven enterprise simulation.</SPAN></P><P><STRONG>Want to learn more?</STRONG></P><UL><LI><STRONG>Deep Dive:</STRONG><SPAN> Explore the </SPAN><A href="https://learning.sap.com/courses/avoid-sap-s-4hana-project-delays-with-third-party-systems-service-virtualization" target="_blank" rel="noopener noreferrer"><SPAN>SAP Learning Course: Avoid SAP S/4HANA Project Delays with Third-Party Systems Service Virtualization</SPAN></A><SPAN> to understand how to decouple your testing from external dependencies.</SPAN></LI><LI><STRONG>Documentation:</STRONG><SPAN> Visit the </SPAN><A href="https://help.int4.com/int4-suite-knowledge-center-library/3.13/" target="_blank" rel="noopener nofollow noreferrer"><SPAN>Int4 Suite Knowledge Center</SPAN></A><SPAN> for detailed configuration guides on SAP AEM and Message Control.</SPAN></LI></UL> 2026-01-16T16:20:44.064000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/how-sap-s-4hana-migration-is-shifting-from-it-projects-to-business/ba-p/14301073 How SAP S/4HANA Migration Is Shifting from IT Projects
to Business Transformation 2026-01-19T23:08:44.562000+01:00 juveria_sap_integrity https://community.sap.com/t5/user/viewprofilepage/user-id/2271579 <H4 id="toc-hId-2045809049"><STRONG>Introduction</STRONG></H4><P>SAP S/4HANA migration is no longer just about moving from ECC to a new system. Organizations that treat it as an IT-only initiative often face cost overruns, data issues, and poor user adoption. Successful programs recognize S/4HANA as a foundation for long-term business agility.</P><HR /><H4 id="toc-hId-1849295544"><STRONG>Why S/4HANA Migration Is More Than a Technical Upgrade</STRONG></H4><P>Modern enterprises operate in complex ecosystems where SAP is deeply integrated with finance, procurement, logistics, analytics, and external applications. Any change to SAP impacts:</P><UL><LI><P>End-to-end business processes</P></LI><LI><P>Reporting and compliance</P></LI><LI><P>Data governance and master data quality</P></LI><LI><P>User experience and productivity</P></LI></UL><P>Ignoring these factors increases risk and delays value realization.</P><HR /><H4 id="toc-hId-1652782039"><STRONG>Key Business Areas Impacted</STRONG></H4><OL><LI><P><STRONG>Finance &amp; Controlling</STRONG><BR />Universal Journal (ACDOCA) simplifies reporting but requires process redesign and data cleanup.</P></LI><LI><P><STRONG>Data Quality &amp; Governance</STRONG><BR />Poor master data (vendors, customers, materials) creates migration issues and post-go-live disruptions.</P></LI><LI><P><STRONG>Process Standardization</STRONG><BR />S/4HANA encourages clean core and best practices, pushing organizations to reduce custom code and manual workarounds.</P></LI><LI><P><STRONG>Change Management &amp; Adoption</STRONG><BR />Fiori UX changes how users interact with SAP, making training and communication critical.</P></LI></OL><HR /><H4 id="toc-hId-1456268534"><STRONG>Choosing the Right Migration Approach</STRONG></H4><P>There is no one-size-fits-all strategy. 
Organizations must evaluate:</P><UL><LI><P>Greenfield vs Brownfield vs Selective Data Transition</P></LI><LI><P>Business readiness and technical debt</P></LI><LI><P>Timeline, budget, and risk appetite</P></LI></UL><P>Early assessment and stakeholder alignment are key to making the right choice.</P><HR /><H4 id="toc-hId-1259755029"><STRONG>Conclusion</STRONG></H4><P>A successful SAP S/4HANA migration is a business-led transformation supported by technology—not the other way around. Companies that invest in data quality, process clarity, and change management unlock faster ROI and long-term resilience.</P> 2026-01-19T23:08:44.562000+01:00 https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/sap-integration-suite-agentic-testing-is-available-now-with-int4-suite/ba-p/14322864 SAP Integration Suite - Agentic Testing is available now with Int4 Suite 2026-02-06T10:13:21.141000+01:00 MichalKrawczyk https://community.sap.com/t5/user/viewprofilepage/user-id/45785 <P><SPAN>The SAP BTP Integration Suite AI roadmap for 2026 showcases a massive shift toward Agentic AI, focusing on making the platform not just a tool for developers, but an orchestrator of autonomous agents. While SAP is actively building these capabilities, Int4 Suite is already delivering on several of these "future" promises today, particularly in the realm of Test Agents.</SPAN></P><H2 id="toc-hId-1789528102"><SPAN>The SAP BTP Integration Suite AI Roadmap (2026)</SPAN></H2><P><SPAN>According to the SAP Community update, the roadmap is divided into two major pillars: AI for Integration (productivity) and Integration for AI (orchestration).</SPAN></P><UL><LI><SPAN><STRONG>MCP (Model Context Protocol) Gateway:</STRONG> SAP is betting heavily on MCP as the standard for connecting and governing AI agents. 
This will allow the Integration Suite to act as a "control plane" for agents.</SPAN></LI><LI><SPAN><STRONG>Joule-Driven Development</STRONG> : SAP is moving beyond simple prompts to full iFlow generation and design validation using Joule, SAP’s digital assistant.</SPAN></LI><LI><SPAN><STRONG>Specialized AI Agents</STRONG> (Future Roadmap):</SPAN><UL><LI><SPAN>Migration Agent: To convert Java mappings and adapter modules into Groovy scripts.</SPAN></LI><LI><SPAN>Configuration Agent: To suggest iFlow configurations based on historical data.</SPAN></LI><LI><SPAN>Test Agents (Planned): SAP explicitly mentions the plan to develop Test Agents that provide test data and test cases during the development of an iFlow to allow for immediate testing.</SPAN></LI></UL></LI></UL><P><div class="video-embed-center video-embed"><iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FkFynsigf99o%3Fstart%3D199%26feature%3Doembed%26start%3D199&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DkFynsigf99o&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FkFynsigf99o%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="600" height="337" scrolling="no" title="SAP Integration Suite in 2026✨" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe></div></P><H2 id="toc-hId-1593014597"><SPAN>Int4 Suite: SAP Integration Suite Test Agents are available today&nbsp;</SPAN></H2><P><SPAN>While SAP’s own Test Agents are currently on the roadmap for future development, Int4 Suite already provides a functional testing engine that automates the most time-consuming parts of SAP integration.</SPAN></P><H3 id="toc-hId-1525583811"><SPAN>1. Natural Language Test Creation</SPAN></H3><P><SPAN>SAP’s roadmap envisions using Joule for iFlow testing. 
Int4 Suite already utilizes advanced AI models to allow users to create and manage test cases through natural language.</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="new_ways.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369508i7446F6FD9D862794/image-size/large?v=v2&amp;px=999" role="button" title="new_ways.png" alt="new_ways.png" /></span></P><P><SPAN>Figure 1 - Int4 Suite Test Agents can create, change, and run any tests on the SAP Integration Suite.&nbsp;</SPAN></P><UL><LI><SPAN>The Chatbot Experience: Instead of navigating complex technical menus, users interact with an AI "Testing Agent."</SPAN></LI><LI><SPAN>Semantic Intelligence: Built on a Business Knowledge Graph, the system understands the relationship between technical messages and business data. You don't need a Message ID; you can simply ask the assistant to find or create test cases based on specific business criteria, like "Sales Orders for US-based customers."</SPAN></LI><LI><SPAN>Autonomous Execution: The agent doesn't just suggest a case; it handles the setup, injection, and execution of that test into your landscape (e.g., from Dev to QA) automatically.</SPAN></LI></UL><P><div class="video-embed-center video-embed"><iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FmsbnCupiKPk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DmsbnCupiKPk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FmsbnCupiKPk%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="600" height="337" scrolling="no" title="Int4 Suite Test Agents for SAP Integration Suite" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe></div></P><H3 id="toc-hId-1329070306"><SPAN>2. 
Automated Test Case Generation from Historical Data</SPAN></H3><P><SPAN>A key goal of SAP's future roadmap is providing "test data and test cases" automatically. Int4 Suite fulfills this today through two innovative modules:</SPAN></P><UL><LI><SPAN>The Robotic Crawler: This tool acts as a "search and capture" engine. It scans historical electronic messages and business documents directly from production environments of SAP Integration Suite or legacy middleware (like SAP PI/PO), extracting the full payload of real transactions.</SPAN></LI><LI><SPAN>The Repeater Module: This module "replays" captured production messages through your new integration scenarios. For example, during a migration to SAP BTP Integration Suite, Int4 Suite takes a real Production Sales Order and runs it through your new iFlow to ensure the resulting S/4HANA document matches the original exactly.</SPAN></LI><LI><SPAN>Secure Anonymization: To ensure GDPR compliance, Int4 Suite includes a data scrambling engine that anonymizes sensitive information before the test case is created, making real-world data safe for use in non-productive environments.</SPAN></LI></UL><H2 id="toc-hId-1003474082"><SPAN>Bridging the Gap: Why Agentic Testing Matters Now</SPAN></H2><P><SPAN>As SAP moves toward an "Agentic IPaaS" – where autonomous agents like Quote Creation or Receipt Creation Agents operate within your system – verification becomes the primary challenge. While SAP focuses on the orchestration of these agents, Int4 Suite focuses on the validation.</SPAN></P><P><SPAN>Because SAP BTP Integration Suite updates are automatic and frequent, Int4 Suite acts as a "continuous insurance policy."
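The underlying comparison idea, checking business content while tolerating fields that legitimately differ between runs, can be sketched in a few lines of Python; the volatile field names here are assumptions for illustration, not Int4 Suite's actual configuration:

```python
# Keys that may legitimately differ on every run (assumed names) and
# therefore should not cause a test failure.
VOLATILE = {"messageId", "timestamp"}

def strip_volatile(obj):
    """Recursively drop volatile keys so only business content remains."""
    if isinstance(obj, dict):
        return {k: strip_volatile(v) for k, v in obj.items()
                if k not in VOLATILE}
    if isinstance(obj, list):
        return [strip_volatile(v) for v in obj]
    return obj

def matches_golden(actual, golden):
    """True if the payloads agree on all non-volatile fields."""
    return strip_volatile(actual) == strip_volatile(golden)

# Example: message IDs and timestamps differ, business content matches.
golden = {"messageId": "A1", "salesOrder": {"qty": 5}, "timestamp": "t0"}
actual = {"messageId": "B2", "salesOrder": {"qty": 5}, "timestamp": "t1"}
ok = matches_golden(actual, golden)
```

The same shape of check applies whether the "golden" payload is a converted JSON document pulled from a queue or a generated S/4HANA business document.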
It validates the "logic under the hood", confirming that business documents are created correctly in SAP S/4HANA, rather than just checking if a technical message was "sent."&nbsp;</SPAN></P> 2026-02-06T10:13:21.141000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/part-2-lean-approach-for-the-integration-flow-s-performance-testing/ba-p/14323670 Part 2: Lean (approach) for the integration flow(s) Performance Testing 2026-02-08T14:00:46.912000+01:00 stevang https://community.sap.com/t5/user/viewprofilepage/user-id/7643 <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="christopher-burns-Kj2SaNHG-hg-unsplash.jpg" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369975i46FC7354540F24E0/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="christopher-burns-Kj2SaNHG-hg-unsplash.jpg" alt="christopher-burns-Kj2SaNHG-hg-unsplash.jpg" /></span></P><P>In my previous <A href="https://community.sap.com/t5/technology-blog-posts-by-members/a-z-performance-testing-for-integrations-and-why/bc-p/14305000#M177084" target="_blank">article</A>&nbsp;(<A class="" href="https://community.sap.com/t5/technology-blog-posts-by-members/a-z-performance-testing-for-integrations-and-why/ba-p/14304822" target="_blank">A-Z Performance Testing for integrations... 
And why?</A>)[1] I tried to provide a comprehensive guardrail for the <STRONG>Performance Testing</STRONG> of the integration flows we are designing and building…</P><P>But no matter what I might wish, project teams will often be <EM>pushed </EM>to simplify things a bit and move faster, even if there are trade-offs in quality (although nobody will admit this).</P><P>In this article, let’s look at an approach with <STRONG>only two test-runs per integration flow</STRONG>.</P><H2 id="toc-hId-1789555998">Lean approach – pragmatic approach?</H2><P>I won’t go through all the definitions of the various types of <STRONG>Performance Testing</STRONG>, <STRONG>Testing Methodology</STRONG> etc. All of this is already covered in my previous <A href="https://community.sap.com/t5/technology-blog-posts-by-members/a-z-performance-testing-for-integrations-and-why/bc-p/14305000#M177084" target="_blank">article</A>.</P><H3 id="toc-hId-1722125212">Requirements Gathering &amp; Planning</H3><P>Yes, we still have to collect the requirements and inputs:</P><UL><LI>SLAs</LI><LI>Volumes (i.e. hourly, daily etc.) including expected growth</LI><LI>Business patterns (i.e. patterns or spikes during business hours, or during the season)</LI><LI>Systems under test (i.e. which systems, integration flows, or IT components are under test?)</LI><LI>Users or user groups (i.e.
integrations are usually <EM>built</EM> using technical users, but let this be re-confirmed)</LI></UL><H3 id="toc-hId-1525611707">Test Design</H3><P>Again, the next step is the actual design. There are not too many shortcuts here either, as we still need to understand the scenario, but we may simplify it a bit:</P><UL><LI>What do we test – we are testing integration flows, not application functionalities;</LI><LI>What types of Performance Testing – let’s focus on <STRONG>Load Testing</STRONG> and simplified <STRONG>Scalability Testing</STRONG> only;</LI><LI>What kind of integrations are we testing – Sync or Async makes a difference, so this information is needed in all cases;</LI><LI>What is the scope – ideally, let’s focus first on the Receiver endpoint and the <EM>common</EM> (reusable) part of the middleware IT components; if needed, testing any Sender functions will come in the next iteration;</LI><LI>Do we test all Consumers at once – no, as already recommended, we test one Consumer at a time;</LI><LI>Payloads – we definitely need them in all cases.</LI></UL><H3 id="toc-hId-1329098202">Environment Setup</H3><P>Try to have a test (or QA) environment that closely mirrors production, including all IT components and software versions. If that is not possible – okay, just make an assumption about “how much faster/slower” the test environment is vs. the productive environment.
This may be an educated guess based on other relevant markers, if any exist.</P><H3 id="toc-hId-1132584697">Script Development</H3><P>Here we can make some <EM>real </EM>simplifications.</P><P>We will use the same inputs as in the previous <A href="https://community.sap.com/t5/technology-blog-posts-by-members/a-z-performance-testing-for-integrations-and-why/bc-p/14305000#M177084" target="_blank">article</A>:</P><TABLE><TBODY><TR><TD><P><STRONG>#</STRONG></P></TD><TD><P><STRONG>Input</STRONG></P></TD><TD><P><STRONG>Example</STRONG></P></TD></TR><TR><TD><P>1.</P></TD><TD><P>SLAs</P></TD><TD><P>For the order-taking API:<BR /><SPAN>Average order response time is up to 4s;<BR /></SPAN><SPAN>99% of orders are created within 8s;<BR /></SPAN>This is valid for any <EM>Customer</EM>, any <EM>SalesOrganization</EM>, default <EM>OrderType</EM>, standard on-invoice <EM>PricingCondition</EM>;</P></TD></TR><TR><TD><P>2.</P></TD><TD><P>Volumes</P></TD><TD><P>Annual average 160 000 orders per working day;<BR />Maximum (peak) season 270 000 orders per working day;<BR />Expected annual growth 10%;<BR /><SPAN>Average order has 10 items, and orders normally do not contain more than 50 items;</SPAN></P></TD></TR><TR><TD><P>3.</P></TD><TD><P>Business patterns</P></TD><TD><P>80% of orders are created during extended working hours from 10:00-22:00, out of which half are created in the evening 19:00-21:00</P></TD></TR><TR><TD><P>4.</P></TD><TD><P>Systems under test</P></TD><TD><P>S/4HANA API_SALES_ORDER_SRV <EM>Sales Order (A2X), single cluster, no policy routing</EM>;<BR /><SPAN>API-M </SPAN><EM>SalesOrder</EM><SPAN> with policies, excluding CSRF token;</SPAN></P></TD></TR><TR><TD><P>5.</P></TD><TD><P>Users</P></TD><TD><P>No business user.
Testing the integration only.</P></TD></TR></TBODY></TABLE><P>Based on these inputs we will build an appropriate <STRONG>Load Testing</STRONG> script, with no major changes compared to the previous <A href="https://community.sap.com/t5/technology-blog-posts-by-members/a-z-performance-testing-for-integrations-and-why/bc-p/14305000#M177084" target="_blank">article</A> (except in payloads):</P><TABLE><TBODY><TR><TD><P><STRONG>#</STRONG></P></TD><TD><P><STRONG>Script </STRONG></P></TD><TD><P><STRONG>Example</STRONG></P></TD></TR><TR><TD><P>1.</P></TD><TD><P>Target<BR /><STRONG>Test Results</STRONG></P></TD><TD><P>Response time percentile 50 should be below 4s;<BR /><SPAN>Response time percentile 99 should be below 8s;</SPAN></P></TD></TR><TR><TD><P>2.</P></TD><TD><P>Capturing <STRONG>Test Results</STRONG></P></TD><TD><P>Capture the information about sent messages on the Sender side (i.e. the testing tool): <STRONG>number of messages</STRONG> sent, <STRONG>start time </STRONG>(sending) and <STRONG>stop time</STRONG> (sending);</P><P>In the case of an <STRONG>Async API</STRONG>, capture the overall status from the Receiver system logs for successfully received/processed messages: <STRONG>number of messages</STRONG> received, overall <STRONG>timing from start to end</STRONG>, and capture the response status as well if a response/acknowledgement is enabled;</P></TD></TR><TR><TD><P>3.</P></TD><TD><P>Number of Threads</P></TD><TD><P>We account for the maximum daily volume + 5 years of growth + a 50% margin:<BR />270 000*1.1*1.1*1.1*1.1*1.1*1.5 = 652 257 orders per peak day;</P><P>But this volume is not evenly distributed across 24h; 40% falls within 2h only:<BR />652 257*0.4 / 2 = 130 451 orders in the peak hour;<BR />Or this is 130 451 / 3600 = 36 orders per second;</P><P>As we have already included a safety margin, we are okay to set:<BR />Number of Threads = 36;</P></TD></TR><TR><TD><P>4.</P></TD><TD><P>Ramp-up period</P></TD><TD><P>Always use default 1s for all tests;</P></TD></TR><TR><TD><P>5.</P></TD><TD><P>Loop Count</P></TD><TD><P>For
<STRONG>Load Testing</STRONG> there is no need to loop payloads more than 20-50 times;&nbsp;</P></TD></TR><TR><TD><P>6.</P></TD><TD><P>Payloads</P></TD><TD><P>Create payloads:<BR /><U>(simplification) </U>using one representative <EM>Customer</EM>,<BR /><U>(simplification) </U>using one or at most a few representative <EM>SalesOrganization(s)</EM>,<BR />where each will use the default <EM>OrderType</EM>,<BR />where each will use only standard <EM>PricingCondition(s)</EM>;</P><P><U>(simplification)</U> make only up to 10 payloads:<BR />8 payloads with the average number of items, i.e. 10 items,<BR />1 payload at the lower boundary, i.e. 3 items,<BR />1 payload at the upper boundary, i.e. 30 items.</P></TD></TR><TR><TD><P>7.</P></TD><TD><P>Endpoint</P></TD><TD><P>API-M <EM>SalesOrder</EM> endpoint</P></TD></TR><TR><TD><P>8.</P></TD><TD><P>Users</P></TD><TD><P>No business user;<BR /><SPAN>JMeter will authenticate and obtain a key as a client application, through a VPN tunnel;</SPAN></P></TD></TR></TBODY></TABLE><P>We will not conduct <EM>real</EM> <STRONG>Scalability Testing</STRONG> or <STRONG>Stress Testing</STRONG> – but we can conduct something <EM>in-between</EM>; let’s still call it <STRONG>Scalability Testing</STRONG>. The script is based on the one previously developed for <STRONG>Load Testing</STRONG>, with only two minor modifications:</P><UL><LI>Increase the Number of Threads two or three times, i.e. to 72;</LI><LI>Increase the payloads to the extreme limit, i.e. all 10 payloads will have 50 items.</LI></UL><H3 id="toc-hId-936071192">Test Execution</H3><P>We are simulating real-life scenarios, but we are also trying to simplify the overall execution.
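</P>
<P>As a quick sanity check, the sizing arithmetic from the script table above can be reproduced in a few lines (a sketch using the example numbers from this article):</P>

```python
# Reproducing the "Number of Threads" sizing math from the script table above.
peak_daily = 270_000        # peak-season orders per working day
growth, years = 0.10, 5     # 10% annual growth over a 5-year horizon
margin = 1.5                # 50% safety margin

sized_daily = peak_daily * (1 + growth) ** years * margin
print(round(sized_daily))   # 652257 orders per peak day

# 40% of the daily volume lands in the two evening peak hours (19:00-21:00).
peak_hourly = sized_daily * 0.4 / 2
per_second = peak_hourly / 3600
print(round(peak_hourly), round(per_second))  # 130451 orders/h, 36 orders/s
```

<P>The last number drives the JMeter Thread Group; since the growth and safety margins are already baked in, 36 threads is the working figure.</P>
<P>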
We will have only two tests, each run only once:</P><TABLE><TBODY><TR><TD><P><STRONG>#</STRONG></P></TD><TD><P><STRONG>Condition </STRONG></P></TD><TD><P><STRONG>Example</STRONG></P></TD></TR><TR><TD><P>1.</P></TD><TD><P>Applications&nbsp;</P></TD><TD><P>No other users should execute the same integration flow that is under test;<BR /><SPAN>All other (background) jobs and operations should stay as-is (keep it normal, as-is);</SPAN></P></TD></TR><TR><TD><P>2.</P></TD><TD><P>IT components</P></TD><TD><P>No other users should execute the same integration flow that is under test;<BR /><SPAN>All other (background) jobs and operations should stay as-is (keep it normal, as-is)</SPAN></P></TD></TR><TR><TD><P>3.</P></TD><TD><P>Execution timetable</P></TD><TD><P>Conduct both <EM>simplified </EM><STRONG>Load Testing</STRONG> and <STRONG>Scalability Testing</STRONG> only in the peak-hours period, as this is the only one that <EM>really</EM> matters for the SLA…</P><P>Why? The explanation is the same as before – during different days, or different hours within the day, there might be other (background) jobs or operations impacting overall system performance, and we want to simulate all operations as realistically as possible.</P><P>We have only one test-run:<BR /><SPAN>Business day, peak working hours 19:00-21:00;</SPAN></P></TD></TR></TBODY></TABLE><H3 id="toc-hId-739557687">Test Results</H3><P>Now let’s evaluate the two test-runs we did: &nbsp;</P><TABLE><TBODY><TR><TD><P><STRONG>#</STRONG></P></TD><TD><P><STRONG>Evaluation</STRONG></P></TD><TD><P><STRONG>Example</STRONG></P></TD></TR><TR><TD><P>1.</P></TD><TD><P><STRONG>Load Testing</STRONG></P></TD><TD><P>As per the SLAs, evaluate the actual percentiles 50 and 99 for the <STRONG>Test Execution</STRONG> we did</P></TD></TR><TR><TD><P>2.</P></TD><TD><P><STRONG>Scalability Testing</STRONG></P></TD><TD><P>Observe percentiles 50 and 99 for the <STRONG>Test Execution</STRONG> we did.<BR />If it passes without errors and response times remain comparable with the SLA, we can
conclude that the test has passed.</P></TD></TR></TBODY></TABLE><H2 id="toc-hId-413961463">Conclusions</H2><P>Why <EM>simplified</EM> <STRONG>Performance Testing</STRONG>? If we need to do it right, why should we compromise?</P><P>It’s about being pragmatic. Whether we like it or not, delivery teams are under pressure, and full-blown testing should still be conducted for business-critical Integration Services. But in some cases, making things a bit faster without sacrificing too much might be a proper choice (when pressed for time).</P><P>Does this resonate?</P><P>Again, this is only a proposal. Architects, experts, and project teams may of course decide differently…</P><H2 id="toc-hId-217447958">Acknowledgment</H2><P>*) Intro photo by <A href="https://unsplash.com/@christopher__burns?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash" target="_blank" rel="noopener nofollow noreferrer">Christopher Burns</A> on <A href="https://unsplash.com/photos/white-and-black-digital-wallpaper-Kj2SaNHG-hg?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash" target="_blank" rel="noopener nofollow noreferrer">Unsplash</A></P><H2 id="toc-hId-20934453">References</H2><P>[1] A-Z Performance Testing for integrations: <A href="https://community.sap.com/t5/technology-blog-posts-by-members/a-z-performance-testing-for-integrations-and-why/bc-p/14305000#M177084" target="_blank">https://community.sap.com/t5/technology-blog-posts-by-members/a-z-performance-testing-for-integrations-and-why/bc-p/14305000#M177084</A></P> 2026-02-08T14:00:46.912000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/unlock-s-4hana-cloud-secrets-hybrid-events-apis-strategy-for-seamless-btp/ba-p/14326758 Unlock S/4HANA Cloud Secrets: Hybrid Events+APIs Strategy for Seamless BTP Data Flows 2026-02-12T01:30:35.487000+01:00 tamitdassharma https://community.sap.com/t5/user/viewprofilepage/user-id/153763 <H2 id="toc-hId-1789646278">Strategic Financial Data
Extraction from SAP S/4HANA Cloud Public Edition to SAP BTP</H2><H4 id="toc-hId-1851298211">Using Transfer Pricing as a Real-World Example</H4><P>Transfer pricing represents a perfect use case to demonstrate <STRONG>generic financial data extraction strategies</STRONG> from SAP S/4HANA Cloud Public Edition. Multinational organisations need GL, Controlling, Material Ledger, and Asset Accounting postings to flow reliably to external engines hosted on SAP BTP—but Public Cloud’s “no direct database access” constraint demands architecturally sound patterns.</P><H3 id="toc-hId-1525701987">Why Transfer Pricing Perfectly Illustrates the Challenge</H3><P><STRONG>The business need</STRONG>: Calculate inter-company markups (cost-plus, resale-minus) using real-time ACDOCA postings across multiple currencies, profit centres, and cost elements.<BR /><STRONG>The constraint</STRONG>: Pure cloud extensibility—no RFCs, no custom ABAP, no direct table extracts.&nbsp;<BR /><STRONG>The solution:</STRONG> Three proven patterns that work for any financial data integration scenario.</P><H3 id="toc-hId-1329188482">Pattern 1: Event Notifications (Real-Time Push Model)</H3><P><STRONG>How transfer pricing uses it</STRONG>: GL postings trigger automatic business event notifications routed through SAP Event Mesh to BTP applications.</P><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Event Notification" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371720i4CB6FC368836A2D3/image-size/large?v=v2&amp;px=999" role="button" title=" - visual selection-3.png" alt="Event Notification" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Event Notification</span></span></P><P><SPAN><STRONG>Generic applicability</STRONG>: Works for any posting-driven process—revenue recognition, inter-company reconciliation, compliance monitoring.&nbsp;</SPAN></P><DIV><STRONG>High-level 
activation</STRONG>:</DIV><OL><LI><SPAN><STRONG>Integration configuration apps</STRONG> → Activate accounting-related event notifications</SPAN></LI><LI><SPAN><STRONG>Communication scenarios</STRONG> → Configure outbound event destinations</SPAN></LI><LI><SPAN><STRONG>Event Mesh subscription</STRONG> → Point to BTP service endpoint</SPAN></LI><LI><SPAN><STRONG>Payload contains</STRONG>: Company codes, ledger amounts (local/group), profit centres, material valuations</SPAN></LI></OL><H3 id="toc-hId-1132674977">Pattern 2: Standard Query Services (Scheduled Pull Model)</H3><P><STRONG>How transfer pricing uses it</STRONG>: BTP service queries S/4HANA nightly for new/changed financial postings.</P><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Query Service" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371722iBABF11B5D5404615/image-size/large?v=v2&amp;px=999" role="button" title=" - visual selection-4.png" alt="Query Service" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Query Service</span></span><STRONG><SPAN>Key services (release-dependent availability):</SPAN></STRONG></P><UL><LI><SPAN><STRONG>Financial line item services</STRONG> → GL/CO/ML/AA postings</SPAN></LI><LI><SPAN><STRONG>Profitability services</STRONG> → Margin analysis data</SPAN></LI><LI><SPAN><STRONG>Material valuation services</STRONG> → Cost component details</SPAN></LI></UL><P><STRONG>Generic applicability</STRONG>: Perfect for scheduled reconciliation, historical backfills, or validation runs.</P><H3 id="toc-hId-936161472">Pattern 3: Hybrid Extraction (Production Resilience)</H3><P>Transfer pricing demands 99.9% coverage. 
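</P>
<P>The pull leg of this hybrid (Pattern 2’s scheduled delta query) can be sketched as follows. This is a hedged illustration: the service path, entity set, and field names are placeholders rather than the exact S/4HANA Cloud service metadata, and <EM>fetch</EM> stands in for a real HTTP client:</P>

```python
from datetime import datetime, timezone

def build_delta_query(base_url, last_run, top=5000):
    """Build an OData-style query for postings changed since the last run."""
    ts = last_run.strftime("%Y-%m-%dT%H:%M:%SZ")
    return (f"{base_url}/JournalEntryItems"
            f"?$filter=LastChangeDateTime gt {ts}&$top={top}")

def pull_new_postings(fetch, base_url, last_run):
    """Fetch and keep only postings newer than the stored watermark."""
    return [p for p in fetch(build_delta_query(base_url, last_run))
            if p["LastChangeDateTime"] > last_run]

# Demo with a stub fetch: one posting after the watermark, one before.
watermark = datetime(2026, 2, 1, tzinfo=timezone.utc)
stub = lambda url: [
    {"AccountingDocument": "100001",
     "LastChangeDateTime": datetime(2026, 2, 2, tzinfo=timezone.utc)},
    {"AccountingDocument": "100000",
     "LastChangeDateTime": datetime(2026, 1, 15, tzinfo=timezone.utc)},
]
new = pull_new_postings(stub, "https://s4.example.com/api/finance", watermark)
print([p["AccountingDocument"] for p in new])  # ['100001']
```

<P>On BTP this would run as a scheduled job, persisting the watermark after each successful pull so the nightly run only picks up new or changed postings.</P>
<P>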
Smart architects layer all three patterns:</P><TABLE border="1" width="100%"><TBODY><TR><TD width="25%"><STRONG>Use Case</STRONG></TD><TD width="25%"><STRONG>Primary Pattern</STRONG></TD><TD width="25%"><STRONG>Transfer Pricing Example</STRONG></TD><TD width="25%"><STRONG>Generic Use</STRONG></TD></TR><TR><TD width="25%">Real-time GL postings</TD><TD width="25%">Event notifications</TD><TD width="25%">Invoice → immediate markup</TD><TD width="25%">Any posting trigger</TD></TR><TR><TD>Historical catch-up</TD><TD>Standard APIs</TD><TD>Month-end reconciliation</TD><TD>Data migration</TD></TR><TR><TD>Day 1 implementation</TD><TD>Bulk extraction</TD><TD>Full ACDOCA baseline</TD><TD>Initial loads</TD></TR><TR><TD>Custom calculations</TD><TD>BTP processing</TD><TD>Exception overrides</TD><TD>Business rules</TD></TR></TBODY></TABLE><P><STRONG>&nbsp;Three-tier reference flow:</STRONG></P><H4 id="toc-hId-868730686"><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Hybrid Extraction" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371723i0FA6EF631F99C2FE/image-size/large?v=v2&amp;px=999" role="button" title=" - visual selection-5.png" alt="Hybrid Extraction" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Hybrid Extraction</span></span></STRONG><SPAN>BTP Implementation Patterns (Transfer Pricing Example)</SPAN></H4><H5 id="toc-hId-801299900"><SPAN>Cloud Application Programming Model (CAP)</SPAN></H5><pre class="lia-code-sample language-javascript"><code>// Illustrative sketch - entity and source names are examples, not a prescribed model
service FinancialIntegration {
  entity Postings       as projection on external.FinancialPostings; // remote financial service
  entity PricingResults as projection on staging.PricingResults;     // HANA staging layer
}</code></pre><P><SPAN><STRONG>&nbsp;Pattern</STRONG>: Event handlers + scheduled queries → unified HANA staging → business rules.</SPAN></P><H5 id="toc-hId-604786395"><SPAN>ABAP RESTful Application Programming (RAP)</SPAN></H5><pre class="lia-code-sample 
language-abap"><code>// Illustrative sketch - source and field names are examples, not a prescribed model
define root view entity FinancialDataView
  as select from I_JournalEntryItem
{
  key CompanyCode,
  key FiscalYear,
  key AccountingDocument
}</code></pre><P><STRONG>Pattern</STRONG>: Analytical CDS views over external data → transactional services.</P><H4 id="toc-hId-279190171">Architect’s Production Checklist</H4><P><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Tenant validation: Test events/APIs in your specific S/4HANA Cloud environment<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Quota management: Monitor Event Mesh throughput + API rate limits<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Fallback design: API polling validates event stream completeness<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Quarterly readiness: SAP evolves event coverage and API fields continuously<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Pure cloud: Zero custom code in production S/4HANA</P><H4 id="toc-hId--415040429">The Universal Pattern</H4><DIV>Transfer pricing proves the strategy works for complex financial scenarios. 
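<P>The “fallback design” item from the checklist above (API polling validating event-stream completeness) reduces to a set difference between what the event stream delivered and what the query API can see. A minimal sketch, with illustrative document numbers rather than any specific SAP API:</P>

```python
def missed_events(event_ids, api_ids):
    """Postings visible via the query API but never announced by an event."""
    return sorted(set(api_ids) - set(event_ids))

# Document numbers delivered by the event stream vs. an API snapshot.
received_via_events = {"100001", "100002", "100004"}
visible_via_api = {"100001", "100002", "100003", "100004"}

gap = missed_events(received_via_events, visible_via_api)
print(gap)  # ['100003'] -> backfill these via the API pull
```

<P>Running this reconciliation on a schedule turns the pull pattern into the safety net for the push pattern: anything in the gap is simply backfilled through the standard query service.</P>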
The same architecture applies to:</DIV><UL><LI><SPAN>Inter-company reconciliation</SPAN></LI><LI><SPAN>Revenue recognition automation</SPAN></LI><LI><SPAN>Compliance reporting engines</SPAN></LI><LI><SPAN>Analytics data pipelines</SPAN></LI><LI><SPAN>Any standard data object/model driven process</SPAN></LI></UL><P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Universal Pattern" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371724i43222D4AB9F30955/image-size/large?v=v2&amp;px=999" role="button" title=" - visual selection-6.png" alt="Universal Pattern" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Universal Pattern</span></span></SPAN><SPAN><STRONG>Key takeaway</STRONG>: This isn’t “just for transfer pricing”—it’s your reusable blueprint for any S/4HANA Cloud → BTP data integration.&nbsp;</SPAN></P> 2026-02-12T01:30:35.487000+01:00