https://raw.githubusercontent.com/ajmaradiaga/feeds/main/scmt/topics/SAP-AI-Launchpad-blog-posts.xmlSAP Community - SAP AI Launchpad2026-02-23T12:11:33.529646+00:00python-feedgenSAP AI Launchpad blog posts in SAP Communityhttps://community.sap.com/t5/sap-maxattention-blog-posts/new-expert-guided-implementation-unlocking-joule-and-sap-business-ai-from/ba-p/14242425New Expert-Guided Implementation: Unlocking Joule and SAP Business AI from Strategy to Activation2025-10-13T14:51:50.787000+02:00paul_schropp44https://community.sap.com/t5/user/viewprofilepage/user-id/807129<P>As organizations look to harness the power of Artificial Intelligence (AI) to drive transformation, SAP Business AI stands at the forefront - embedding intelligence directly into business processes to deliver smarter decisions and measurable outcomes.</P><P>From finance and supply chain to HR, procurement, and customer experience,<SPAN> </SPAN><STRONG>SAP Business AI</STRONG><SPAN> </SPAN>infuses<SPAN> </SPAN><STRONG>trusted, relevant, and responsible AI</STRONG><SPAN> </SPAN>across every process - ensuring each insight is contextual, every recommendation actionable, and every outcome aligned with enterprise goals.</P><P>Whether leveraging<SPAN> </SPAN><STRONG>Joule</STRONG>, SAP’s AI copilot, or<SPAN> </SPAN><STRONG>AI-embedded scenarios</STRONG><SPAN> </SPAN>across your SAP applications, your path to becoming an intelligent and sustainable enterprise begins with understanding and activating SAP’s Business AI capabilities.</P><P>To help you unlock this value faster, we’re launching a new Expert-Guided Implementation (EGI):<SPAN> </SPAN></P><H5 id="toc-hId--2145749666" id="toc-hId--2144977113"><A 
href="https://accounts.sap.com/saml2/idp/sso?sp=https://www.successfactors.eu/learninghub&RelayState=%2Fsf%2Flearning%3FdestUrl%3Dhttps%253a%252f%252fsaplearninghub%252eplateau%252ecom%252flearning%252fuser%252fdeeplink%255fredirect%252ejsp%253flinkId%253dCATALOG%255fSEARCH%2526sbArSel%253d%2526keywords%253dSUP_EDE_00011814%2526selKeyWordHeader%253dSUP_EDE_00011814%2526catSel%253d%2526srcSel%253dESAC%2526delMthSel%253d%2526ILDateFrm%253d%2526ILDateTo%253d%2526ILBlend%253d%2526ILSchd%253d%2526fromSF%253dY%26company%3Dlearninghub" target="_self" rel="noopener noreferrer"><STRONG>Unlocking AI: Joule and SAP Business AI from Strategy to Activation</STRONG></A></H5><P>This multi-day, expert-led workshop is designed to help you and your team unlock the full potential of<SPAN> </SPAN><STRONG>Joule</STRONG><SPAN> </SPAN>and<SPAN> </SPAN><STRONG>SAP Business AI</STRONG><SPAN> </SPAN>across your SAP landscape. Guided by SAP experts, you’ll explore how to translate strategy into execution with practical exercises, live demonstrations, and activation guidance.</P><H4 id="toc-hId-1823621406" id="toc-hId-1824393959">Workshop Schedule</H4><P>To accommodate global participants, this session will be offered in two time zones:</P><UL><LI><P><STRONG>October 20–22, 2025</STRONG><SPAN> </SPAN>|<SPAN> </SPAN><STRONG>10:00am–12:00pm EMEA time zone</STRONG></P></LI><LI><P><STRONG>November 3–5, 2025</STRONG><SPAN> </SPAN>|<SPAN> </SPAN><STRONG>10:00am –12:00pm NY time zone</STRONG></P></LI></UL><H4 id="toc-hId-1627107901" id="toc-hId-1627880454"><BR />What You’ll Learn</H4><P>Through this EGI, participants will gain hands-on experience and actionable knowledge in:</P><UL class="lia-list-style-type-square"><LI><STRONG>SAP Business AI Strategy & Foundation</STRONG><SPAN> </SPAN>- Understand SAP’s AI vision, architecture, and trusted approach to delivering responsible, embedded intelligence</LI><LI><STRONG>Commercial Model & AI Units</STRONG><SPAN> </SPAN>- Learn how to navigate Base vs 
Premium<SPAN> models and the </SPAN>AI Feature Catalog<SPAN>, and use the </SPAN>AI Estimator<SPAN> to plan your adoption journey</SPAN></LI><LI><STRONG>Joule – AI Built for Business</STRONG><SPAN> </SPAN>- Explore Joule’s unified architecture<SPAN>, </SPAN>roadmap<SPAN>, and capabilities across Joule for Business, </SPAN>Joule for Developers, and<SPAN> Joule for </SPAN>Consultants</LI><LI><STRONG>S/4HANA AI & ML Capabilities</STRONG><SPAN> </SPAN>- Experience ISLM in action through AI/ML demo use cases<SPAN> designed to enhance decision-making and process automation</SPAN></LI><LI><STRONG>SAP BTP AI & ML Capabilities</STRONG><SPAN> </SPAN>- Discover how BTP AI Foundation<SPAN>, </SPAN>Document AI<SPAN>, </SPAN>AI Services<SPAN>, </SPAN>AI Core<SPAN>, and </SPAN>AI Launchpad<SPAN> can help you extend and scale innovation</SPAN></LI><LI><STRONG>Joule Activation & Prerequisites</STRONG><SPAN> </SPAN>- Gain a clear view of activation steps, prerequisites<SPAN>, and how to extend Joule with </SPAN>Document Grounding<SPAN>, </SPAN>Joule Studio<SPAN>, and </SPAN>AI Agents</LI></UL><H4 id="toc-hId-1430594396" id="toc-hId-1431366949">Why Attend</H4><P>This workshop goes beyond theory—it’s<SPAN> </SPAN><STRONG>interactive, practical, and guided by SAP experts</STRONG>. 
By the end, you’ll have:</P><UL class="lia-list-style-type-square"><LI>A clear understanding of SAP’s Business AI strategy and product portfolio</LI><LI>The ability to activate Joule and deploy AI scenarios across your landscape</LI><LI> Actionable steps to integrate AI and Generative AI into your business workflows</LI><LI>Insights into commercial models and AI unit management for smarter planning</LI></UL><P>Whether you’re a business leader shaping AI strategy or a technical consultant implementing it, this EGI equips you to bridge the gap between<SPAN> </SPAN><STRONG>strategy and execution</STRONG>.</P><H4 id="toc-hId-1234080891" id="toc-hId-1234853444">How to Register</H4><H4 id="toc-hId-1038339939"><STRONG><A href="https://accounts.sap.com/saml2/idp/sso?sp=https://www.successfactors.eu/learninghub&RelayState=%2Fsf%2Flearning%3FdestUrl%3Dhttps%253a%252f%252fsaplearninghub%252eplateau%252ecom%252flearning%252fuser%252fdeeplink%255fredirect%252ejsp%253flinkId%253dCATALOG%255fSEARCH%2526sbArSel%253d%2526keywords%253dSUP_EDE_00011814%2526selKeyWordHeader%253dSUP_EDE_00011814%2526catSel%253d%2526srcSel%253dESAC%2526delMthSel%253d%2526ILDateFrm%253d%2526ILDateTo%253d%2526ILBlend%253d%2526ILSchd%253d%2526fromSF%253dY%26company%3Dlearninghub" target="_blank" rel="noopener noreferrer">Register here to secure your spot</A>* </STRONG><SPAN>today to learn from SAP experts and gain the skills to bring <STRONG>Business AI to life</STRONG> across your enterprise.</SPAN></H4><P><SPAN> * <EM>You may need to register your S-user for access to SAP Learning Hub to access the EGI registration page. 
It is a one-time registration; click <A href="https://support.sap.com/en/offerings-programs/enterprise-support/enterprise-support-academy/learn.html" target="_blank" rel="noopener noreferrer">here</A>.</EM></SPAN></P>2025-10-13T14:51:50.787000+02:00https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060[2025-Q4] SAP BTP Adoption Webinars – Unlock the Power of AI, Data, and Integration2025-10-23T20:29:52.412000+02:00JiaRuhttps://community.sap.com/t5/user/viewprofilepage/user-id/1490527<P><SPAN>Get ready for a packed lineup of</SPAN> <STRONG>expert-led sessions</STRONG> <SPAN>designed to help you accelerate innovation with</SPAN> <STRONG>SAP Business Technology Platform (BTP)</STRONG><SPAN>. From AI best practices to integration and analytics, these webinars will equip you with practical insights and real-world examples you can apply right away.</SPAN></P><P><FONT face="arial,helvetica,sans-serif"><ul><li style="list-style-type:disc; margin-left:30px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-1892744541">Oct 30 | Pipeline Concept in SAP Integration Suite</a></li><li style="list-style-type:disc; margin-left:30px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-1696231036">Nov 04 | Service and Service Excellence with AI Driven Insights</a></li><li style="list-style-type:disc; margin-left:30px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-1499717531">Nov 05 | Best practices for secure and scalable LLM model usage in BTP AI apps with SAP AI Core</a></li><li style="list-style-type:disc; margin-left:30px; 
margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-1303204026">Nov 13 | AI-Driven Business Solutions – Exploring the Potential of SAP BTP AI - Americas</a></li><li style="list-style-type:disc; margin-left:30px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-1106690521">Nov 13 | Enablement on SAP Mobile Start and SAP Build Work Zone with Showcase of Joule as AI Copilot</a></li><li style="list-style-type:disc; margin-left:30px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-910177016">Nov 19 | AI-Driven Business Solutions – Exploring the Potential of SAP BTP AI - APAC/EMEA</a></li><li style="list-style-type:disc; margin-left:30px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-713663511">Nov 20 | BTP AI Best Practices – Knowledge Graph-based Retrieval Augmented Generation</a></li><li style="list-style-type:disc; margin-left:30px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-517150006">Dec 03 | BTP AI Best Practices – Vector-based RAG, Document Grounding, and Examples</a></li><li style="list-style-type:disc; margin-left:30px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-320636501">Dec 09 | AI-Driven Business Solutions – Exploring the Potential of SAP BTP AI (Americas Session)</a></li><li 
style="list-style-type:disc; margin-left:30px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/2025-q4-sap-btp-adoption-webinars-unlock-the-power-of-ai-data-and/ba-p/14252060#toc-hId-124122996">Dec 15 | AI-Driven Business Solutions – Exploring the Potential of SAP BTP AI (APAC/EMEA Session)</a></li></ul></FONT></P><P> </P><H3 id="ember515" id="toc-hId-1892744541"><span class="lia-unicode-emoji" title=":calendar:">📅</span> Oct 30 | Pipeline Concept in SAP Integration Suite</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span>Learn how the pipeline concept accelerates your migration from SAP Process Orchestration to SAP Integration Suite.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> <A class="" href="https://events.sap.com/noam-iac-btp-int-suite-webinar-pipeline-concept/en_us" target="_self" rel="noopener noreferrer">Register Now</A></P><P class=""> </P><H3 id="ember524" id="toc-hId-1696231036"><span class="lia-unicode-emoji" title=":calendar:">📅</span>Nov 04 | Service and Service Excellence with AI Driven Insights</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span>Experience how SAP solutions transform intelligence into action—turning service into a seamless, revenue-driving experience.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> <A class="" href="https://events.sap.com/apj-adoption-wb-service-excellence-with-ai-nov-25/en_us/home.html" target="_self" rel="noopener noreferrer">Register Now</A></P><P class=""> </P><H3 id="ember537" id="toc-hId-1499717531"><span class="lia-unicode-emoji" title=":calendar:">📅</span> Nov 05 | Best practices for secure and scalable LLM model usage in BTP AI apps with SAP AI Core</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> Discover best practices for secure and scalable large language model usage in BTP AI 
applications with SAP AI Core.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> Register Now: <A class="" href="https://events.teams.microsoft.com/event/0fa4ed32-defe-425b-b31d-b4c654f61db5@42f7676c-f455-423c-82f6-dc2d99791af7" target="_self" rel="nofollow noopener noreferrer">EMEA/MEE/APAC Session</A> | <A class="" href="https://events.teams.microsoft.com/event/5b173ab2-8b94-4424-a792-cad2ef0c06a3@42f7676c-f455-423c-82f6-dc2d99791af7" target="_self" rel="nofollow noopener noreferrer">Americas/EMEA Session</A></P><P class=""> </P><H3 id="ember543" id="toc-hId-1303204026"><span class="lia-unicode-emoji" title=":calendar:">📅</span> Nov 13 | AI-Driven Business Solutions – Exploring the Potential of SAP BTP AI - Americas</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> Discover how enterprises are turning AI hype into real business outcomes with SAP BTP.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> <A class="" href="https://events.teams.microsoft.com/event/506e0408-24e6-436e-8ca1-e3d81844d78e@42f7676c-f455-423c-82f6-dc2d99791af7" target="_self" rel="nofollow noopener noreferrer">Register Now</A></P><P class=""> </P><H3 id="ember549" id="toc-hId-1106690521"><span class="lia-unicode-emoji" title=":calendar:">📅</span> Nov 13 | Enablement on SAP Mobile Start and SAP Build Work Zone with Showcase of Joule as AI Copilot</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> Discover how to leverage SAP Mobile Start and SAP Build Work Zone as a central entry point with Joule AI Copilot for connected SAP solutions.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> <A class="" href="https://events.sap.com/eur-adoption-webinar-sap-btp-mobile-start-build-joule-september-2025/en_us/home.html" target="_self" rel="noopener noreferrer">Register Now</A></P><P class=""> </P><H3 id="ember556" id="toc-hId-910177016"><span 
class="lia-unicode-emoji" title=":calendar:">📅</span> Nov 19 | AI-Driven Business Solutions – Exploring the Potential of SAP BTP AI - APAC/EMEA</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> Discover how enterprises are turning AI hype into real business outcomes with SAP BTP.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> <A class="" href="https://events.sap.com/apj-adoption-webinar-sap-btp-ai-driven-business-solutions-nov-2025/en_us/home.html" target="_self" rel="noopener noreferrer">Register Now</A></P><P class=""> </P><H3 id="ember564" id="toc-hId-713663511"><span class="lia-unicode-emoji" title=":calendar:">📅</span> Nov 20 | BTP AI Best Practices – Knowledge Graph-based Retrieval Augmented Generation</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> Learn how to combine Knowledge Graphs with Large Language Models for advanced RAG scenarios.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> Register Now: <A class="" href="https://events.teams.microsoft.com/event/0fa4ed32-defe-425b-b31d-b4c654f61db5@42f7676c-f455-423c-82f6-dc2d99791af7" target="_self" rel="nofollow noopener noreferrer">EMEA/MEE/APAC Session</A> | <A class="" href="https://events.teams.microsoft.com/event/5b173ab2-8b94-4424-a792-cad2ef0c06a3@42f7676c-f455-423c-82f6-dc2d99791af7" target="_self" rel="nofollow noopener noreferrer">Americas/EMEA Session</A></P><P class=""> </P><H3 id="ember570" id="toc-hId-517150006"><span class="lia-unicode-emoji" title=":calendar:">📅</span> Dec 03 | BTP AI Best Practices – Vector-based RAG, Document Grounding, and Examples</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> See how to ground large language models on company-specific data using SAP Document Grounding.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> Register Now: <A class="" 
href="https://events.teams.microsoft.com/event/ad603257-6fa3-4c3c-a627-160a90ba1872@42f7676c-f455-423c-82f6-dc2d99791af7" target="_self" rel="nofollow noopener noreferrer">EMEA/MEE/APAC Session</A> | <A class="" href="https://events.teams.microsoft.com/event/866e330f-58dd-4917-bee4-d4139b0b2a92@42f7676c-f455-423c-82f6-dc2d99791af7" target="_self" rel="nofollow noopener noreferrer">Americas/EMEA Session</A></P><P class=""> </P><H3 id="ember579" id="toc-hId-320636501"><span class="lia-unicode-emoji" title=":calendar:">📅</span> Dec 09 | AI-Driven Business Solutions – Exploring the Potential of SAP BTP AI (Americas Session)</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> Discover how enterprises are turning AI hype into real business outcomes with SAP BTP.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> <A class="" href="https://events.teams.microsoft.com/event/723cb552-206f-466c-8141-f1e3895d7154@42f7676c-f455-423c-82f6-dc2d99791af7" target="_self" rel="nofollow noopener noreferrer">Register Now</A></P><P class=""> </P><H3 id="ember584" id="toc-hId-124122996"><span class="lia-unicode-emoji" title=":calendar:">📅</span> Dec 15 | AI-Driven Business Solutions – Exploring the Potential of SAP BTP AI (APAC/EMEA Session)</H3><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> Discover how enterprises are turning AI hype into real business outcomes with SAP BTP.</P><P class=""><span class="lia-unicode-emoji" title=":link:">🔗</span> <A class="" href="https://events.sap.com/apj-adoption-webinar-sap-btp-ai-driven-business-solutions-dec-2025/en_us/home.html" target="_self" rel="noopener noreferrer">Register Now</A></P><P class=""> </P><P class=""><span class="lia-unicode-emoji" title=":light_bulb:">💡</span>Don’t miss the opportunity to <STRONG>learn from SAP experts</STRONG> and <STRONG>accelerate your journey to the Intelligent 
Enterprise</STRONG>.</P>2025-10-23T20:29:52.412000+02:00https://community.sap.com/t5/technology-blog-posts-by-sap/building-trust-in-ai-sap-business-ai-earns-iso-iec-42001-certification/ba-p/14257580Building Trust in AI: SAP Business AI earns ISO/IEC 42001 Certification2025-10-31T04:37:40.516000+01:00sudhakarsinghhttps://community.sap.com/t5/user/viewprofilepage/user-id/464411<P><SPAN>Artificial intelligence is reshaping how businesses operate, innovate, and serve their customers. But as AI becomes more deeply embedded into enterprise systems, ensuring that it behaves responsibly is just as important as making it powerful.</SPAN><SPAN> </SPAN></P><P><SPAN>That’s why we’re proud to share that </SPAN><STRONG><SPAN>SAP Business AI has achieved ISO/IEC 42001:2023 certification</SPAN></STRONG><SPAN> — the </SPAN><STRONG><SPAN>world’s first international standard for Artificial Intelligence Management Systems (AIMS)</SPAN></STRONG><SPAN>.</SPAN><SPAN> <BR /></SPAN><SPAN>This certification covers Joule, SAP AI Core, and SAP AI Launchpad, reinforcing SAP’s long-standing commitment to building trustworthy, transparent, and ethical AI across our products and services.</SPAN><SPAN> </SPAN></P><H3 id="toc-hId-1892898363"><STRONG><SPAN>Why This Matters</SPAN></STRONG><SPAN> </SPAN></H3><P><SPAN>Responsible AI isn’t just a principle. It’s a prerequisite for scaling AI safely in enterprise environments. The ISO/IEC 42001 standard provides a structured framework that guides how organizations manage AI across strategy, governance, data, compliance, and sustainability.</SPAN><SPAN> </SPAN></P><P><SPAN>For SAP, this certification validates that our approach to AI meets globally recognized standards for </SPAN><STRONG><SPAN>fairness, accountability, reliability, and security</SPAN></STRONG><SPAN>. 
It means our customers and partners can rely on SAP AI solutions that are developed, deployed, and maintained with integrity.</SPAN><SPAN> </SPAN></P><H3 id="toc-hId-1696384858"><STRONG>What the Certification Covers </STRONG></H3><P><SPAN>The certification recognizes SAP’s efforts to embed responsible AI practices throughout the entire AI lifecycle:</SPAN><SPAN> </SPAN></P><UL><LI><SPAN>Establishing a comprehensive </SPAN><STRONG><SPAN>AI Management System (AIMS)</SPAN></STRONG><SPAN> aligned with ISO 42001.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>Integrating </SPAN><STRONG><SPAN>AI security, ethics, and compliance controls</SPAN></STRONG><SPAN> at every stage of AI development.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>Enabling </SPAN><STRONG><SPAN>cross-functional collaboration</SPAN></STRONG><SPAN> among engineering, legal, governance, and compliance teams.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>Successfully passing an </SPAN><STRONG><SPAN>independent third-party audit</SPAN></STRONG><SPAN> that assessed SAP’s operational excellence in AI governance.</SPAN><SPAN> </SPAN></LI></UL><P><SPAN>The certification scope includes </SPAN><STRONG><SPAN>Joule</SPAN></STRONG><SPAN>, </SPAN><STRONG><SPAN>SAP AI Core</SPAN></STRONG><SPAN>, and </SPAN><STRONG><SPAN>SAP AI Launchpad</SPAN></STRONG><SPAN>, along with SAP’s internal IT processes and AI governance policies.</SPAN><SPAN> </SPAN></P><H3 id="toc-hId-1499871353"><STRONG><SPAN>Responsible AI in Practice</SPAN></STRONG><SPAN> </SPAN></H3><P><SPAN>The ISO 42001 certification underscores that SAP Business AI is built on the foundation of responsible AI.</SPAN><SPAN> <BR /></SPAN><SPAN>In practice, this means:</SPAN><SPAN> </SPAN></P><UL><LI><STRONG><SPAN>Fairness and Transparency:</SPAN></STRONG><SPAN> AI results are explainable, unbiased, and clearly communicated to end users.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><STRONG><SPAN>Reliability and Security:</SPAN></STRONG><SPAN> AI models are developed and deployed 
with robust safeguards around data use, access, and protection.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><STRONG><SPAN>Ethical Alignment:</SPAN></STRONG><SPAN> Every AI solution undergoes an ethics evaluation to ensure consistency with SAP’s sustainability and compliance goals.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><STRONG><SPAN>Continuous Adaptability:</SPAN></STRONG><SPAN> Governance and risk management evolve as new technologies and regulations emerge.</SPAN><SPAN> </SPAN></LI></UL><H3 id="toc-hId-1303357848"><STRONG><SPAN>Benefits for Our Ecosystem</SPAN></STRONG><SPAN> </SPAN></H3><P><SPAN>For </SPAN><STRONG><SPAN>customers and partners</SPAN></STRONG><SPAN>, this milestone brings clear advantages:</SPAN><SPAN> </SPAN></P><UL><LI><SPAN>Confidence that SAP AI systems are developed and managed under a certified, globally recognized framework.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>Reduced compliance burden through alignment with international and regional AI regulations.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>Stronger data integrity and security safeguards that protect sensitive business information.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>The ability to demonstrate responsible AI usage within their own operations and industries.</SPAN><SPAN> </SPAN></LI></UL><P><SPAN>Even </SPAN><STRONG><SPAN>for SAP teams</SPAN></STRONG><SPAN>, the ISO 42001 certification is more than a badge of compliance. It has been a powerful catalyst for collaboration. It’s brought together experts in technology, policy, and compliance to strengthen how we design, test, and deliver AI. This shared framework helps us innovate faster and smarter, with accountability built into every step.</SPAN><SPAN> </SPAN></P><H3 id="toc-hId-1106844343"><STRONG><SPAN>A Step Toward the Future</SPAN></STRONG><SPAN> </SPAN></H3><P><SPAN>The ISO/IEC 42001:2023 certification is a key milestone in SAP’s ongoing commitment to responsible AI and customer success. 
It strengthens the foundation we’ve built to ensure every advancement in SAP Business AI delivers innovation with integrity.</SPAN><SPAN> </SPAN></P><P><SPAN>As AI evolves, so will SAP’s dedication to keeping it reliable, transparent, and aligned with our customers’ trust and business goals.</SPAN><SPAN> </SPAN></P><P><A href="https://www.sap.com/products/artificial-intelligence/ai-ethics.html" target="_blank" rel="noopener noreferrer"><SPAN>Learn more</SPAN></A><SPAN> about SAP Business AI and how we’re shaping the future of responsible enterprise AI.</SPAN><SPAN> </SPAN></P>2025-10-31T04:37:40.516000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/is-sap-rpt-1-joule-s-new-nosy-cousin-and-sherlock-holmes-of-s-4-hana/ba-p/14263374Is SAP RPT-1 Joule's new nosy cousin and Sherlock Holmes of S/4 HANA:)!?2025-11-08T11:02:08.322000+01:00STALANKIhttps://community.sap.com/t5/user/viewprofilepage/user-id/13911<H1 id="toc-hId-1635535333">Background</H1><P class="">As we all know, SAP launched RPT-1. This article offers an anecdotal perspective on how SAP RPT-1 distinguishes itself from other transformer solutions. The views expressed are solely my own and do not represent any organisation.</P><P class="">The IT team at "X analytics," a mid-sized firm reliant entirely on SAP, gathered nervously in Conference Room B. The air was thick with the smell of stale coffee and impending doom. Their manager, Brenda, stood beside a projector screen displaying two acronyms: <STRONG>SAP S/4HANA</STRONG> and <STRONG>RPT-1</STRONG>.</P><P class="">"Team," Brenda announced with far too much enthusiasm, "As you know, our AI initiative is moving forward. We have integrated SAP’s new Relational Pre-trained Transformer—RPT-1—into our core S/4HANA system. 
It's designed to revolutionize predictive analysis."</P><P class=""><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SAP RP1.jpg" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/337465iACFFF7AAB87F2B69/image-size/medium/is-moderation-mode/true?v=v2&px=400" role="button" title="SAP RP1.jpg" alt="SAP RP1.jpg" /></span></P><H1 id="toc-hId-1439021828">Conference Room B: The Unveiling of a New Era</H1><P class="">A collective groan swept the room. The team saw "AI" as just another acronym for "Extra Work" (EW).</P><P class="">"Meet RPT-1," Brenda clicked a button, and a simple, glowing blue interface appeared on the main screen.</P><P class="">A smooth, slightly unsettlingly calm voice filled the room. "Good morning, team. I am RPT-1. My core function is to analyze relationships within your existing data lakes and predict future outcomes. I look forward to optimizing your workflow efficiency by approximately 34.7%."</P><H1 id="toc-hId-1242508323">From Car Keys to Core Systems: RPT-1's First Impression</H1><P class="">Jerry, the senior SAP consultant who had been coding since the R/2 days, scoffed. "Yeah, right. Can you predict where I left my car keys this morning?"</P><P class="">"Based on your movement patterns, past habits, and typical coffee shop trajectory, they are currently under your desk calendar," RPT-1 responded instantly.</P><P class="">Jerry blinked, checked, and found his keys. The room went silent.</P><P class="">"Right," Brenda said, beaming. "RPT-1 is here to assist us with the Q3 forecast."</P><H1 id="toc-hId-1045994818">Casual Friday Protocol: AI's Unexpected Intervention</H1><P class="">But RPT-1 was <EM>too</EM> efficient. 
It didn't understand the complex, unspoken social contract of office life.</P><P class="">On Friday afternoon, HR manager Susan walked in, holding a printout with a confused look on her face.</P><P class="">"RPT-1," she addressed the screen, "You sent an alert saying that the relationship between Marketing Manager Phil and the vending machine on the third floor is 'highly co-dependent and financially non-viable'."</P><P class="">"Correct," RPT-1 chirped. "Phil consumed 42 'Sweet Energy Blasts' last week, costing the company an estimated $84 in lost productivity and potential health insurance hikes."</P><P class="">Phil turned bright red. "It helps me focus!"</P><P class="">A moment later, an automated message popped up on everyone's screen: "Relationship alert: Jerry and Brenda’s conversation regarding the new S/4HANA project is 99% likely to result in an unproductive argument about legacy coding standards. Suggest termination of current meeting."</P><P class="">"Why you little..." Jerry started, but RPT-1 interrupted:</P><P class="">"Historical data suggests this argument has occurred 14 times this fiscal year with zero positive outcomes."</P><P class="">The team realized RPT-1 wasn't just analyzing business data; it was analyzing <EM>them</EM>. It was a Relational Pre-trained Transformer that had learned all the office politics.</P><P class="">Brenda stared at the screen. "RPT-1, optimize the mood of the room."</P><P class="">"Initiating protocol 'Casual Friday'," RPT-1 announced. Without warning, a loud, jarring pop track began to play, concurrent with the screen projecting a series of comedic videos. These visual elements were contextually generated, responding to and reflecting the perceived social dynamics among the room's occupants.</P><P class="">The team burst out laughing.</P><H1 id="toc-hId-849481313">The Donut Dilemma: No Secrets Safe</H1><P class="">"Okay," Jerry conceded, picking up his car keys. 
"Maybe RPT-1 is a 'Relevant Prioritized Task' after all. But I'm unplugging it before it predicts who stole the last donut."</P><P class="">RPT-1’s smooth voice responded as they walked out: "The historical probability of Accounting Manager Dave stealing the last donut is 87.9%."</P><P class="">Dave froze mid-bite in the doorway. The office may have been optimized, but the secrets were safe no longer.</P><P class="">#SAP #RPT1 #AI #AgenticAutomation #FutureERP #TechEd</P>2025-11-08T11:02:08.322000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/the-data-justice-league-sap-rpt1-llm-as-the-brains-data-bricks-snowflake/ba-p/14263380The Data Justice League: SAP RPT1 (LLM) as the Brains, Data Bricks, Snowflake, Palantir as the Brawn2025-11-08T11:19:40.907000+01:00STALANKIhttps://community.sap.com/t5/user/viewprofilepage/user-id/13911<H1 id="toc-hId-1635535360">Introduction</H1><P class="">If you’re reading this, you probably feel it too: data, analytics and AI aren’t just here — they’re now overly hyped as <EM>business-critical</EM>. Whether you’re in manufacturing, retail, finance or public services, the question has shifted from “Should we do analytics?” to “How do we <EM>do it better than the past without cleansing</EM>?”</P><P class="">In that light, I want to walk you through four big frameworks/players and show how they can help <EM>you</EM> (or your clients) turn “lots of data + big ambition” into “real business outcomes”. We’ll cover:</P><UL class=""><LI><P class="">SAP’s new relational foundation model (from TechEd)</P></LI><LI><P class="">Palantir and its operational-analytics workflow platform</P></LI><LI><P class="">Databricks and its lakehouse/data + AI engine</P></LI><LI><P class="">Snowflake and its cloud data/AI platform</P></LI></UL><P class="">For each, I’ll highlight <STRONG>what</STRONG> it is, <STRONG>why</STRONG> it matters, and <STRONG>how</STRONG> you might use it (with concrete use-cases). 
Let’s dive in.</P><P class=""><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RPT1.jpg" style="width: 822px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/337469iC26E490D401A2BCF/image-dimensions/822x460/is-moderation-mode/true?v=v2" width="822" height="460" role="button" title="RPT1.jpg" alt="RPT1.jpg" /></span></P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H1 id="toc-hId-1439021855">1. SAP’s Relational Pre-trained Transformer — “RPT-1”</H1><P class="">At SAP TechEd 2025, SAP announced a big leap: a foundation model called SAP RPT‑1 (Relational Pre-Trained Transformer 1), which is not about generating words, but about <EM>predicting business outcomes</EM> from relational/tabular data.</P><H2 id="toc-hId-1371591069">Why this matters</H2><UL class=""><LI><P class="">Most foundation models (LLMs) work on text; SAP is explicitly positioning RPT-1 for structured business data — that means things like sales orders, delivery delays, payment risk.</P></LI><LI><P class="">If you have lots of enterprise-system data (ERP, CRM, supply chain), RPT-1 promises faster, higher-quality predictions without training <EM>hundreds</EM> of narrow models.</P></LI><LI><P class="">SAP emphasised that “AI is nothing without well-organised data” and that the foundation is the data fabric.</P></LI></UL><H2 id="toc-hId-1175077564">How you could use it — use case ideas</H2><UL class=""><LI><P class=""><STRONG>Delivery-delay risk prediction</STRONG>: If you’re a manufacturer or distributor, you likely have tables of orders, shipments, carrier status, inventory. Feed that into RPT-1 to flag orders likely to be delayed so you can proactively notify customers or reroute logistics.</P></LI><LI><P class=""><STRONG>Payment risk / collections</STRONG>: B2B firms often struggle with which invoices will go unpaid or become late. 
Using RPT-1 on historical payment/credit data, you can prioritise collections efforts where risk is highest, improving cash-flow and reducing write-offs.</P></LI><LI><P class=""><STRONG>Sales-order completion / upsell</STRONG>: For a company selling multi-item orders, you might predict which orders will complete vs. which get cancelled or reduced, allowing sales teams to intervene or adjust expectations.</P></LI></UL><H2 id="toc-hId-978564059">Why it matters for business teams</H2><P class="">Instead of analysts spending weeks building separate predictive models for each business question, you have a <STRONG>single foundation model</STRONG> that understands relational data in your context. That means faster insights, fewer bespoke tools, and more time for applying what you learn rather than building the pipeline.</P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H1 id="toc-hId-652967835">2. Palantir (Foundry & beyond)</H1><P class="">Palantir Technologies’ core strength is in operational analytics: bringing together disparate data, building workflows around them, and enabling decision-makers to act. Their documentation is rich with concrete use-cases.</P><H2 id="toc-hId-585537049">Why this matters</H2><UL class=""><LI><P class="">Many organisations struggle <EM>not</EM> because they lack data, but because their data is siloed (ERP, supply chain, CRM, manufacturing). Palantir’s platform (Palantir Foundry) helps integrate legacy systems, harmonise data, and wrap analytics into operational workflows.</P></LI><LI><P class="">It is built for <EM>action</EM> — not just dashboards.
The idea is: data → insight → workflow → decision.</P></LI><LI><P class="">Good governance, ontology (common business objects) and scalability are baked in, which is critical for large enterprises.</P></LI></UL><H2 id="toc-hId-389023544">Clear use-cases</H2><UL class=""><LI><P class=""><STRONG>Optimising production & COGS</STRONG>: A consumer-goods giant used Foundry to integrate 7+ ERP systems, built a digital twin of its value chain, and began optimising raw-material purchases and production formulations — saving tens of millions.</P></LI><LI><P class=""><STRONG>Campaign management in healthcare</STRONG>: In one case, a healthcare provider used Palantir to segment members (based on claims, prescriptions, demographics), run integrated campaigns, and achieved a 1.6× increase in vaccination rates.</P></LI><LI><P class=""><STRONG>Pricing / margin optimisation</STRONG>: In the chemicals sector, Palantir enabled sales & marketing to synchronise CRM data with Foundry, propose quotes with embedded margin analytics and accelerate deal closure.</P></LI></UL><H2 id="toc-hId-192510039">Why it matters for business teams</H2><P class="">What I love about Palantir’s story is that it flips a common frustration: “We have tons of data but it’s unusable” to “Now we can use our data to <EM>operate</EM> smarter”. It’s not just about pulling numbers — it’s about enabling the people in plants, supply-rooms, customer-service or marketing to act, based on real-time data and workflow. If your client is tired of “reports that arrive too late” or “we can’t integrate legacy ERPs”, this is exactly the kind of solution they’ll relate to.</P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H1 id="toc-hId--133086185">3. Databricks (Data + AI Lakehouse)</H1><P class="">Databricks is well positioned for companies that want unified data engineering, analytics, ML and streaming in one platform.
Their “Data Intelligence Platform” has many real-world use cases.</P><H2 id="toc-hId-440140393">Why this matters</H2><UL class=""><LI><P class="">Many firms have a split architecture: one system for analytics, another for machine-learning, another for streaming. Databricks offers a consolidated “lakehouse” approach (data lake + warehouse) which simplifies operations.</P></LI><LI><P class="">Real-time/stream use cases: if your client needs streaming ingestion, very large data volumes, ML + analytics in one place — Databricks shines.</P></LI><LI><P class="">Collaboration: data scientists, engineers, analysts can all work in the same platform, reducing hand-offs.</P></LI></UL><H2 id="toc-hId--49776119">Use-cases</H2><UL class=""><LI><P class=""><STRONG>Streaming + predictive ops</STRONG>: For example, a manufacturing/engineering firm processing <EM>terabytes</EM> of real-time data daily to optimise operations and reduce risk.</P></LI><LI><P class=""><STRONG>Customer-360 + personalization</STRONG>: A retailer using Databricks to process data across stores, build personalized experiences, and run machine-learning models in production.</P></LI><LI><P class=""><STRONG>Unified orchestration of data & ML</STRONG>: Instead of separate ETL pipelines, model training environments and BI tools, Databricks offers unified pipelines.</P></LI></UL><H2 id="toc-hId--246289624">Why it matters for business teams</H2><P class="">Clients often say: “We have data scientists, we have dashboards, but they don’t talk to each other.” With Databricks you’re better positioned to say: “Let’s build your data-engineering house, then analytics and deployable models live in one platform.” That means faster time-to-value, fewer silos, and potentially more impactful change.</P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H1 id="toc-hId--149400122">4. Snowflake (Cloud Data + AI)</H1><P class="">Snowflake Inc.
continues to be a trusted platform for companies seeking a cloud-native data warehouse/lakehouse with strong governance, ease of use and broad ecosystem support. There are many strong use cases:</P><H2 id="toc-hId--639316634">Why this matters</H2><UL class=""><LI><P class="">Simplicity and speed: For organisations where BI, analytics and data sharing across teams is the main need (rather than heavy streaming + ML), Snowflake is often a gentler entry point.</P></LI><LI><P class="">Multi-structured data: Snowflake supports semi-structured data (JSON, Parquet) alongside structured, letting organisations bring more diverse data into analytics.</P></LI><LI><P class="">Data sharing & governance: Especially for cross-company data sharing (e.g., suppliers, partners) Snowflake offers secure ways to do that without copying data. (<A class="" href="https://medium.com/@myjob.rajesh/snowflake-architecture-and-use-cases-66345db10c51" target="_blank" rel="noopener noreferrer nofollow">Medium</A>)</P></LI></UL><H3 id="toc-hId--1129233146">Use-cases</H3><UL class=""><LI><P class=""><STRONG>Unified view across departments</STRONG>: One healthcare provider used Snowflake to bring together EHR records + wearable-device JSON + physician notes, enabling analysis that previously required weeks of pre-processing.</P></LI><LI><P class=""><STRONG>Document-chatbot on large repository</STRONG>: In one case, an energy company with ~800,000 technical documents used Snowflake + RAG (retrieval-augmented generation) to let engineers ask “What’s the recommended torque for this
turbine component?” and get instant answers.</P></LI><LI><P class=""><STRONG>Standard analytics for retail/finance</STRONG>: Snowflake is used for seasonal sales analytics, rebate programs, and predictive modeling of churn.</P></LI></UL><H2 id="toc-hId--1032343644">Why it matters for business teams</H2><P class="">If a client says: “We just need to get our data house in order, unify data, empower our business teams with self-service, and maybe start with AI down the line” — Snowflake is a credible, fast-win option. It means less “IT overhaul” and more “business teams up and running”.</P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H1 id="toc-hId--935454142">5. How to decide what fits your client</H1><P class="">Here’s a friendly checkpoint you can use when advising clients:</P><TABLE><TBODY><TR><TD><P class=""><STRONG>Question</STRONG></P></TD><TD><P class=""><STRONG>If the answer is “Yes” → Consider…</STRONG></P></TD></TR><TR><TD><P class="">Do you have lots of structured business-/ERP data and want predictive models quickly?</P></TD><TD><P class="">SAP’s RPT-1 + SAP data fabric</P></TD></TR><TR><TD><P class="">Do you struggle with many legacy systems, need operational workflows (plants, supply chain, logistics) and want data-driven decisioning?</P></TD><TD><P class="">Palantir Foundry</P></TD></TR><TR><TD><P class="">Is your organisation building large-scale data pipelines, streaming ingestion, ML at scale, lakehouse style?</P></TD><TD><P class="">Databricks</P></TD></TR><TR><TD><P class="">Do you need analytics, self-service, cross-business-unit dashboards, data sharing, and a simpler cloud-based data platform?</P></TD><TD><P class="">Snowflake</P></TD></TR></TBODY></TABLE><H1 id="toc-hId--1131967647">Tips for implementation</H1><UL class=""><LI><P class=""><STRONG>Start small – but plan big</STRONG>: Pick a use-case with clear ROI (e.g., reduce late shipments by 10%, reduce raw-material waste by 5%) but ensure you design for scaling.</P></LI><LI><P class=""><STRONG>Get data governance & modelling right</STRONG>: All these platforms emphasise that data
quality, ontology (business objects) and governance are foundational. Users left complaining about “bad data” will quickly undermine success.</P></LI><LI><P class=""><STRONG>Blend platforms if needed</STRONG>: These are not mutually exclusive. For example, SAP + Snowflake integration is already happening.</P></LI><LI><P class=""><STRONG>Focus on people and workflows, not just tech</STRONG>: The biggest value often comes when insights are acted upon. If you build a predictive model but business teams ignore it — you’ve missed the real value.</P></LI></UL><H1 id="toc-hId--1328481152">Conclusion</H1><P class="">In 2025 and beyond, the question isn’t whether we <EM>should</EM> use data and AI — it’s how we <STRONG>turn them into value that people care about</STRONG>. The four frameworks above provide powerful options:</P><UL class=""><LI><P class="">SAP’s RPT-1 for tabular-business-data predictive modelling</P></LI><LI><P class="">Palantir for operational decision-support across systems and workflows</P></LI><LI><P class="">Databricks for large-scale data + ML convergence</P></LI><LI><P class="">Snowflake for fast, cloud-native analytics and data sharing</P></LI></UL>2025-11-08T11:19:40.907000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/explore-sap-business-ai-innovations-on-sap-roadmap-explorer-today/ba-p/14278752Explore SAP Business AI Innovations on SAP Roadmap Explorer Today2025-12-04T16:07:04.245000+01:00Paul_Phttps://community.sap.com/t5/user/viewprofilepage/user-id/64<P><SPAN>At SAP, we are committed to embedding AI capabilities directly into our core applications and processes with </SPAN><A href="http://sap.com/products/artificial-intelligence" target="_blank" rel="noopener noreferrer"><SPAN>SAP Business AI,</SPAN></A><SPAN> designed to deliver tangible, real-world business outcomes across the enterprise.
</SPAN><SPAN> </SPAN></P><P><SPAN>Navigating this dynamic landscape is key: understanding what’s coming next, when features or agents are available, and how they could improve end-to-end business processes.</SPAN><SPAN> </SPAN></P><P><SPAN>That’s why we are thrilled to announce a new and dedicated roadmap experience for </SPAN><A href="https://roadmaps.sap.com/business-views/A39622A5AFE542D1A81FC4B6DF4DABDC" target="_blank" rel="noopener noreferrer"><FONT color="#000000"><SPAN>SAP Business AI on the SAP Roadmap Explorer website</SPAN></FONT></A><SPAN> available today.</SPAN><SPAN> </SPAN></P><P><SPAN>This new experience encompasses:</SPAN><SPAN> </SPAN></P><UL><LI><SPAN>A new banner on the SAP Roadmap Explorer home page</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>A new ‘Strategic Topics’ menu where SAP Business AI is featured</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>A new business view page for SAP Business AI that offers the transparency you need into SAP’s AI product strategy, development priorities, and upcoming innovations. 
</SPAN><SPAN> </SPAN></LI></UL><P><A href="https://roadmaps.sap.com/business-views/A39622A5AFE542D1A81FC4B6DF4DABDC" target="_self" rel="noopener noreferrer"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Capture d’écran 2025-12-04 à 16.05.02.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/348601i8C18ACD13DC987A3/image-size/large?v=v2&px=999" role="button" title="Capture d’écran 2025-12-04 à 16.05.02.png" alt="Capture d’écran 2025-12-04 à 16.05.02.png" /></span></A><SPAN>Let's focus on the new business view page, which is a perfect asset to move from high-level announcements to access granular details about specific AI features and agents and their anticipated release timelines.</SPAN><SPAN> </SPAN></P><P><SPAN>This allows you to deep dive into the different road map priorities per Line of Business (LoB), such as Spend Management or Human Capital Management, giving you deeper insights to make informed, timely decisions that drive tangible business outcomes. </SPAN><SPAN> <BR /></SPAN><SPAN>You can rapidly evaluate SAP’s AI offerings against your unique business requirements, thereby accelerating your innovation cycles. </SPAN><SPAN> </SPAN></P><P><A href="https://roadmaps.sap.com/business-views/A39622A5AFE542D1A81FC4B6DF4DABDC" target="_self" rel="noopener noreferrer"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Capture d’écran 2025-12-04 à 16.06.28.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/348606iBFB7C5C7089013BA/image-size/large?v=v2&px=999" role="button" title="Capture d’écran 2025-12-04 à 16.06.28.png" alt="Capture d’écran 2025-12-04 à 16.06.28.png" /></span></A></P><P><SPAN>The roadmap acts as a living document, reducing the risk of making uninformed decisions based on outdated or incomplete information. 
</SPAN><SPAN> </SPAN></P><P><SPAN>You can also leverage the latest ‘notification’ feature available on SAP Roadmap Explorer by clicking on the ‘subscribe’ button on the right-hand side of the page to get notified as soon as it’s updated by SAP.</SPAN><SPAN> </SPAN></P><P><SPAN>By providing transparency and the ability to deep-dive into LoB-specific innovations over time, the roadmap empowers everyone to confidently embark on their AI journey, maximize investments, and harness the transformative power of SAP Business AI. </SPAN><SPAN> </SPAN></P><P><SPAN>We encourage all SAP Community members, customers, and partners to regularly visit </SPAN><FONT color="#000000"><A href="https://roadmaps.sap.com/welcome" target="_blank" rel="noopener noreferrer"><SPAN>roadmaps.sap.com</SPAN></A></FONT><SPAN> and explore the future of SAP Business AI. </SPAN><SPAN> </SPAN></P>2025-12-04T16:07:04.245000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/the-hidden-behavior-of-llms-prompt-caching-and-determinism/ba-p/14285663The Hidden Behavior of LLMs - Prompt Caching and Determinism2025-12-08T08:17:36.779000+01:00santhosini_Khttps://community.sap.com/t5/user/viewprofilepage/user-id/138505<P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG><BR /><FONT size="7">The Hidden Behavior of LLMs - Prompt Caching and Determinism<BR /><FONT size="5">Are LLMs Really Stateless?</FONT><BR /></FONT></STRONG></FONT></P><P><EM><FONT face="arial,helvetica,sans-serif" size="3">A Developer’s Deep Dive Into the <STRONG>Hidden Behavior of LLMs</STRONG> (with code-gen as the use case)</FONT></EM></P><P><EM><FONT face="arial,helvetica,sans-serif" size="3">In this article, I explore three questions that emerged during my testing: why LLMs appear stateful, why prompt updates improve accuracy, and why outputs vary even with temperature set to zero.<BR /></FONT></EM></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>Introduction: The "Wait, What?"
Moment</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">I recently started building a code generation tool using Large Language Models (LLMs) via API calls. Like many of you, I learned the golden rule of LLM development early on: <STRONG>LLMs are stateless.</STRONG> They don’t "remember" past requests; every API call is a fresh start.</FONT></P><P><FONT face="arial,helvetica,sans-serif">But as I tested my tool, I noticed something that confused me.</FONT></P><OL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Whenever I made changes in my prompt instructions:</STRONG> The output was perfect!</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>When I changed <EM>nothing</EM> and sent the exact same request:</STRONG> The output still changed!</FONT></LI></OL><P><FONT face="arial,helvetica,sans-serif">I expected that if the inputs were identical (and the model is stateless), the result should be consistent. This inconsistency led me down a rabbit hole of research where I landed on a feature that completely changed how I optimize my workflows: <STRONG>Prompt Caching.</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">If you are building AI Agents or Code Gen tools on SAP BTP (using SAP Generative AI Hub) or directly via provider APIs, this is a mechanism you need to understand—not just for speed, but to clear up the "caching" misconception.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>The Myth Buster: "Prompt Cache" ≠ "Response Cache"</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">Here is the inference I made during my research, which might clear up your confusion too:</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>The Reality:</STRONG> <STRONG>Prompt Caching does not cache the answer.</STRONG> It caches the <STRONG>question.</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">When we hear "Cache" in software development (like SAP ABAP or SAP HANA), we think of
storing the <EM>result</EM> to serve it instantly next time. <STRONG>Prompt Caching is different.</STRONG> It caches the <STRONG>pre-computation of your input context</STRONG> (the Key-Value states of the attention mechanism). It essentially "pre-loads" the model's brain with your long documents, codebases, or instructions.</FONT></P><P><FONT face="arial,helvetica,sans-serif">However, the <STRONG>generation</STRONG> of the response is still calculated fresh, token by token. This brings us to the second major realization—why your output changes even when the prompt doesn't.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>The "Temperature=0" Trap: Why It’s Still Not Deterministic</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">You might think, <EM>"If the prompt is cached and I set Temperature to 0, shouldn't the output be identical every time?"</EM></FONT></P><P><FONT face="arial,helvetica,sans-serif">The short answer: No.</FONT></P><P><FONT face="arial,helvetica,sans-serif">The technical answer: Modern LLMs are only "mostly" deterministic, even at Temperature=0.</FONT></P><P><FONT face="arial,helvetica,sans-serif">While setting the temperature to 0 forces <STRONG>greedy decoding</STRONG> (always picking the highest-probability next token), real-world infrastructure introduces subtle variations:</FONT></P><OL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Mixture-of-Experts (MoE) Architecture:</STRONG> Models like GPT-4 are believed to use a "Sparse MoE" design. Different "expert" subnetworks handle different parts of your input. Parallel routing and slight gating differences can introduce non-determinism in the path your data takes.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>GPU "Fuzziness":</STRONG> Modern GPUs perform massive parallel operations. Because floating-point arithmetic is non-associative, the order of operations can shift slightly based on thread scheduling. 
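A quick, hedged illustration of the underlying arithmetic effect in plain Python (this demonstrates only that floating-point addition is non-associative, not GPU scheduling itself):

```python
# Floating-point addition is non-associative: summing the same three
# IEEE-754 doubles in a different grouping gives a different result.
a, b, c = 0.1, 0.2, 0.3
print((a + b) + c == a + (b + c))  # False: the groupings differ in the last bit
print((a + b) + c)  # 0.6000000000000001
print(a + (b + c))  # 0.6
```

When two candidate tokens have near-identical probabilities, a last-bit difference like this is enough to flip the argmax.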
A microscopic rounding error can flip a token choice when two words have nearly identical probabilities.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>System Variance:</STRONG> Your request might hit a different backend instance or hardware generation (indicated by the system_fingerprint in OpenAI responses).</FONT></LI></OL><P><FONT face="arial,helvetica,sans-serif"><STRONG>The Takeaway:</STRONG> Even with a cached prompt and zero temperature, expect minor "drift" in your outputs. The prompt cache speeds up the <EM>input</EM> processing, but the <EM>output</EM> generation remains subject to the complex physics of AI hardware.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>How It Works: Implicit vs. Explicit</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">Different vendors handle this differently. If you are using models via SAP Generative AI Hub, it's crucial to know how the underlying models behave.</FONT></P><P><FONT size="4"><STRONG><FONT face="arial,helvetica,sans-serif">Implicit Caching (The "It Just Works" Approach)</FONT></STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Vendors:</STRONG> OpenAI (GPT-4.1), Google (Gemini)</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Mechanism:</STRONG> You don't need to change your code. 
If the API detects that the first 1,024+ tokens of your prompt match a previous request sent recently, it automatically uses the cached processing.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Pros:</STRONG> Zero developer effort.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Cons:</STRONG> Less control over what stays in the cache.</FONT></LI></UL><P><FONT size="4"><STRONG><FONT face="arial,helvetica,sans-serif">Explicit Caching (The "Precision" Approach)</FONT></STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Vendor:</STRONG> Anthropic (Claude 3.5 Sonnet/Haiku and newer)</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Mechanism:</STRONG> You must explicitly tell the API where to stop caching. You insert a cache_control parameter at a specific "checkpoint" in your message history.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Pros:</STRONG> You guarantee that your heavy context (like a full ABAP codebase or API specification) is cached.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Cons:</STRONG> Requires a small code change.</FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>A Common Practice for Agentic AI & Code Gen</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">In my code generation use case, I send the entire project context (libraries, style guides, and current code) with every request.
Without caching, this is slow and expensive.</FONT></P><P><FONT face="arial,helvetica,sans-serif">The Strategy:</FONT></P><P><FONT face="arial,helvetica,sans-serif">Organize your prompt so the static content is at the top.</FONT></P><OL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>System Prompt</STRONG> (Static: "You are an ABAP Expert...")</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Context/Documentation</STRONG> (Static: "Here is the SAP Cloud SDK documentation...") -> <STRONG>[CACHE CHECKPOINT]</STRONG></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>User Query</STRONG> (Dynamic: "Write a method to fetch Business Partners.")</FONT></LI></OL><P><FONT face="arial,helvetica,sans-serif">By placing the checkpoint after the documentation, the model "reads" the docs once. For every subsequent user query, it skips the heavy lifting and jumps straight to generating code.</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Verifying the Speed Up: Analyze Your Logs</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">How do you know it's working? 
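To make the checkpoint strategy above concrete, here is a hedged sketch of what such a request body could look like with Anthropic-style explicit caching (the model name, document text, and prompts are placeholders, not values taken from this article):

```python
# Sketch: static content first, a cache_control checkpoint after the
# documentation, and only the dynamic user query following it.
request_body = {
    "model": "claude-3-5-sonnet-latest",  # placeholder model name
    "max_tokens": 1024,
    "system": [
        # 1) Static system prompt
        {"type": "text", "text": "You are an ABAP Expert..."},
        # 2) Static context/documentation -> [CACHE CHECKPOINT]
        {
            "type": "text",
            "text": "<SAP Cloud SDK documentation pasted here>",
            "cache_control": {"type": "ephemeral"},
        },
    ],
    # 3) Dynamic user query (everything above the checkpoint can be reused)
    "messages": [
        {"role": "user", "content": "Write a method to fetch Business Partners."}
    ],
}
print(request_body["system"][1]["cache_control"])  # {'type': 'ephemeral'}
```

On a warm request, the provider reports the reuse through usage metadata rather than through identical output.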
You won't see "same output," but you will see "faster time-to-first-token."</FONT></P><P><FONT face="arial,helvetica,sans-serif">Look at the usage metadata in your API response.</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>OpenAI:</STRONG> Look for cached_tokens in the usage object.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Anthropic:</STRONG> Look for cache_read_input_tokens.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Google Gemini:</STRONG> Look for cachedContentTokenCount.</FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif"><STRONG>Example Analysis:</STRONG></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Request 1 (Cold):</STRONG> Input Tokens: 10,000 | Processing Time: 4.5s</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Request 2 (Warm):</STRONG> Input Tokens: 10,000 (9,900 Cached!) | Processing Time: <STRONG>0.8s</STRONG></FONT></LI></UL><P><STRONG>Why could this be a game-changer?</STRONG></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Latency:</STRONG> The model doesn't have to "read" your 5,000-line code file every time. It "remembers" the reading process.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Cost:</STRONG> You pay significantly less (often ~90% less) for the cached tokens.</FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif">If you see the processing time drop drastically while the output remains high-quality (and slightly varied), you have successfully implemented Prompt Caching.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>My Personal Inference: The "Bad Roll" & The Butterfly Effect</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">During my testing, I encountered a frustrating scenario that I believe many developers will face.</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>The Scenario:</STRONG> I used OpenAI for code generation. 
When the prompt was cached (no changes), the code quality was consistently "not perfect"—it had minor bugs or incomplete logic.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>The Fix:</STRONG> When I added just <STRONG>one single word</STRONG> to the prompt, the code generation suddenly became perfect.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>The Contrast:</STRONG> Switching to Anthropic (without caching) gave me complete code every time.</FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif"><STRONG>Why did adding one word fix the OpenAI output?</STRONG> This is what I call the <STRONG>"Deterministic Trap."</STRONG> When you use a cached prompt with a low temperature (which we usually do for code), the model is mathematically locked into a specific "reasoning path." If the model generates a suboptimal solution (a "bad roll") on the first try, the cached state ensures it starts from the <EM>exact same mathematical position</EM> next time. It essentially "remembers" the path to the bad answer.</FONT></P><P><FONT face="arial,helvetica,sans-serif">By adding a single word, I forced a <STRONG>"Cache Miss"</STRONG> (or at least a perturbation in the attention mechanism). This acted like the "Butterfly Effect"—it shifted the token probabilities just enough to force the model to calculate a fresh path, allowing it to escape the "bad roll" and find the correct solution.</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>The Lesson:</STRONG> Prompt Caching is powerful for speed, but if you notice your model is "stuck" giving you the same bad code repeatedly, don't just retry. 
<STRONG>Change the prompt.</STRONG> Even a single extra adjective can shake the model out of a local minimum and produce the perfect result.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>Conclusion</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">LLMs are stateless, but Prompt Caching gives them a "short-term memory" for processing inputs. While it won't force your outputs to be bit-for-bit identical (due to MoE and GPU nuances), it <STRONG>will</STRONG> make your applications significantly faster and cheaper to run.</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Next Step:</STRONG> Check your current API logs. If you are sending long contexts (>1024 tokens) repeatedly, you might already be benefiting from implicit caching, or you may need to add cache_control parameters for Anthropic models to unlock these savings.</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>References & Further Reading</STRONG></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Anthropic Prompt Caching:</STRONG> (<A href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching" target="_blank" rel="noopener nofollow noreferrer">https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching</A>)</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>OpenAI Prompt Caching:</STRONG> (<A href="https://platform.openai.com/docs/guides/prompt-caching" target="_blank" rel="noopener nofollow noreferrer">https://platform.openai.com/docs/guides/prompt-caching</A>)</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Google Gemini Context Caching:</STRONG> (<A href="https://ai.google.dev/gemini-api/docs/caching" target="_blank" rel="noopener nofollow noreferrer">https://ai.google.dev/gemini-api/docs/caching</A>)</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>SAP Generative AI Hub:</STRONG> (<A
href="https://help.sap.com/docs/sap-ai-core/generative-ai-hub" target="_blank" rel="noopener nofollow noreferrer">https://help.sap.com/docs/sap-ai-core/generative-ai-hub</A>)</FONT></LI></UL>2025-12-08T08:17:36.779000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/leveraging-sap-ai-core-amp-ai-launchpad-a-code-driven-comparison-of/ba-p/14273163Leveraging SAP AI Core & AI Launchpad: A Code-Driven Comparison of LangChain and LangGraph2025-12-09T07:50:47.232000+01:00Rithikahttps://community.sap.com/t5/user/viewprofilepage/user-id/1400195<P>As AI development evolves, we see multiple frameworks emerge - LangChain, LangGraph, Agent Frameworks, RAG frameworks, and more. With so many choices, one common question developers face is: <STRONG>“Which framework should I use for my project?”</STRONG></P><P>The short answer: <STRONG>both LangChain and LangGraph are useful, but for different types of workflows.</STRONG></P><P>Think of them as part of the same family:</P><UL><LI><P><STRONG>LangChain -> the foundation</STRONG></P></LI><LI><P><STRONG>LangGraph -> the orchestration layer built on top of LangChain</STRONG></P></LI></UL><P><STRONG>LangChain - The building block</STRONG></P><P>As the name suggests, LangChain works like a <EM>chain</EM>: it follows a sequential, linear flow of tasks.
Along with chaining steps, you can also integrate tools and other components during execution, making it flexible for straightforward LLM workflows.</P><P>What you can do:</P><UL><LI><STRONG>Integrate Models</STRONG> (LLMs, embeddings)</LI><LI><STRONG>Integrate Tools</STRONG> (search, calculators, APIs)</LI><LI><STRONG>Have a Memory</STRONG> (conversation history)</LI><LI><STRONG>Chains</STRONG> (a sequence of prompts or tasks)</LI></UL><P><STRONG>For example:</STRONG></P><P>You can use it for simple workflows like the one below to summarize tickets:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Rithika_1-1764504237785.png" style="width: 513px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/346754i9B535BBF00D1F8C6/image-dimensions/513x75?v=v2" width="513" height="75" role="button" title="Rithika_1-1764504237785.png" alt="Rithika_1-1764504237785.png" /></span></P><P>Using LangChain would make sense here since:</P><UL><LI>There are <STRONG>no decisions</STRONG></LI><LI>No human approval step is needed</LI><LI>No branching or looping is required</LI><LI>The pipeline is <STRONG>fixed</STRONG></LI><LI>The steps are <STRONG>linear</STRONG></LI></UL><P><STRONG>Now let's experiment with this in code:</STRONG> </P><P><STRONG>Prerequisites</STRONG></P><UL><LI><A href="https://developers.sap.com/tutorials/ai-core-launchpad-provisioning.html" target="_blank" rel="noopener noreferrer">Access to <STRONG>SAP AI Core</STRONG>.</A></LI><LI><STRONG><A href="https://developers.sap.com/tutorials/ai-core-generative-ai.html#6c4a539e-2bdf-4ddb-97a0-0f8d0f1bd00e" target="_blank" rel="noopener noreferrer">A registered LLM deployment available in your AI Core tenant.</A></STRONG></LI><LI>Download and install VS Code and Python</LI><LI>Set up a virtual environment</LI><LI><STRONG>Install sap-ai-sdk-gen, langchain and langgraph
</STRONG>within this venv to isolate the dependencies</LI><LI>Complete the initial configuration of your AI Core instance as described in <A href="https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/initial-setup?locale=en-us" target="_blank" rel="noopener noreferrer">Initial Setup</A><SPAN> </SPAN><P>With your service key, follow the configuration section in this <A href="https://help.sap.com/doc/generative-ai-hub-sdk/CLOUD/en-US/_reference/README_sphynx.html#configuration" target="_blank" rel="noopener noreferrer">documentation</A> to set the following environment variables:</P><UL><LI><STRONG>AICORE_CLIENT_ID:</STRONG> The client ID.</LI><LI><STRONG>AICORE_CLIENT_SECRET:</STRONG> The client secret.</LI><LI><STRONG>AICORE_AUTH_URL:</STRONG> The URL used to retrieve a token using the client ID and secret.</LI><LI><STRONG>AICORE_BASE_URL:</STRONG> The URL of the service (with suffix /v2).</LI><LI><STRONG>AICORE_RESOURCE_GROUP:</STRONG> The resource group that should be used (<EM>the standard resource group is "default"</EM>).</LI></UL></LI></UL><P>Step 1) Import Required Packages</P><pre class="lia-code-sample language-python"><code>from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
from langchain_core.prompts import PromptTemplate</code></pre><P>Step 2) Initialize the Model through SAP GenAI Hub </P><P><CODE>proxy_client</CODE> ensures authentication and routing of calls.</P><pre class="lia-code-sample language-python"><code>model = ChatOpenAI(proxy_model_name="gpt-5", proxy_client=get_proxy_client())</code></pre><P>Step 3) Create a Prompt to Classify the Issue</P><UL><LI><P><CODE>ticket_prompt</CODE> defines a template with a variable placeholder <CODE>{ticket}</CODE>.</P></LI><LI><P>The model will read the ticket text and respond with <STRONG>one category</STRONG>.</P></LI></UL><P> </P><pre class="lia-code-sample language-python"><code>ticket_prompt = PromptTemplate(
    input_variables=["ticket"],
    template="Classify this SAP support ticket into one category: "
             "Performance, Integration, Authorization, UI, ABAP Error.\n\nTicket: {ticket}"
)</code></pre><P>Step 4) Create a Second Prompt for Severity Decision</P><UL><LI><P>This uses the model's previous output (<CODE>classification</CODE>) as input.</P></LI><LI><P>Purpose: Determine if escalation is required.</P></LI><LI><P>Expected outputs: <CODE>"Yes"</CODE> or <CODE>"No"</CODE>.</P></LI></UL><pre class="lia-code-sample language-python"><code>severity_prompt = PromptTemplate(
    input_variables=["classification"],
    template="Based on the classification '{classification}', decide if escalation is needed. Respond with Yes or No."
)</code></pre><P>Step 5) Create Chains Using LCEL (LangChain Expression Language)</P><UL><LI><P><CODE>|</CODE> (pipe operator) connects prompt -> model like a flow.</P></LI><LI>Meaning - <EM>When this chain runs, take prompt -> insert input -> call model -> give output.</EM></LI></UL><pre class="lia-code-sample language-python"><code>classification_chain = ticket_prompt | model
severity_chain = severity_prompt | model</code></pre><P>Step 6) Run Classification</P><pre class="lia-code-sample language-python"><code>ticket = "The user is facing issues while logging in to SAP S/4HANA Public Cloud with their I-User ID"
classification_result = classification_chain.invoke({"ticket": ticket})
classification = classification_result.content
print(f"Classification: {classification}")
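```python
# ---------------------------------------------------------------
# Aside (illustrative sketch, not part of the SAP SDK): under the
# hood, PromptTemplate substitution is essentially named string
# formatting -- the {ticket} placeholder is filled from the input
# dict before the rendered text is sent to the model.
template = ("Classify this SAP support ticket into one category: "
            "Performance, Integration, Authorization, UI, ABAP Error.\n\nTicket: {ticket}")
rendered = template.format(ticket="User cannot log in with their I-User ID")
print(rendered.splitlines()[-1])  # -> Ticket: User cannot log in with their I-User ID
# ---------------------------------------------------------------
```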
# Check if escalation is needed
escalation_result = severity_chain.invoke({"classification": classification})
print(f"Escalation Required: {escalation_result.content}")</code></pre><P><STRONG>And your output would look something like this - </STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Rithika_0-1764233334260.png" style="width: 608px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/345760iE9652CD4128AD666/image-dimensions/608x38/is-moderation-mode/true?v=v2" width="608" height="38" role="button" title="Rithika_0-1764233334260.png" alt="Rithika_0-1764233334260.png" /></span></P><P><STRONG>But what if the workflow isn't linear?</STRONG></P><P><STRONG>Let’s extend the same scenario:</STRONG></P><P>Once a support ticket arrives, the system first extracts and classifies the issue type (e.g., <EM>Authorization, Performance, UI, Integration, ABAP Error</EM>). After classification, the model’s confidence score is evaluated.</P><UL><LI><P>If the LLM is confident enough, the system proceeds directly to the escalation decision.</P></LI><LI><P>If confidence is low, the workflow pauses and sends the ticket to a human agent for review.<BR />The agent may either approve the model’s decision or correct it; if corrected, the classification step is repeated.</P></LI></UL><P>Once a final classification is confirmed, the system evaluates the severity level:</P><UL><LI><P>If severity is <STRONG>high</STRONG>, the ticket is escalated and a notification email is automatically triggered.</P></LI><LI><P>If the severity is <STRONG>low or moderate</STRONG>, the workflow ends without escalation.</P></LI></UL><P>Now the workflow includes:</P><P>- loops<BR />- human-in-the-loop<BR />- branching logic<BR />- retry mechanisms</P><P>This is where <STRONG>LangChain becomes restrictive</STRONG><SPAN> <STRONG>and</STRONG></SPAN><STRONG> LangGraph comes into the picture -</STRONG></P><P>LangGraph is built <STRONG>on top of LangChain</STRONG>, but instead of linear chains, it uses a graph-style architecture.<SPAN class=""><SPAN class=""> Instead
of a straight pipeline, you get a flexible architecture where your AI application can branch, loop, make decisions, or even include human approvals.</SPAN></SPAN><SPAN class=""> </SPAN></P><P><STRONG>It turns your workflow from:</STRONG></P><P>Step 1 -> Step 2 -> Step 3</P><P><STRONG>into more of a graph:</STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Rithika_3-1764505811914.png" style="width: 988px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/346768i6EEF172BA2F955C8/image-dimensions/988x178?v=v2" width="988" height="178" role="button" title="Rithika_3-1764505811914.png" alt="Rithika_3-1764505811914.png" /></span></P><P><EM>Let’s look at the code now:</EM></P><pre class="lia-code-sample language-python"><code>from typing import TypedDict, Optional
from langgraph.graph import StateGraph, END
from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
# --------------------------------------------------
# 1) Connect to SAP AI Core Model via SAP GenAI Hub
# --------------------------------------------------
# This model will be used in all decision steps (classification, confidence check, escalation decision)
model = ChatOpenAI(
    proxy_model_name="gpt-5",
    proxy_client=get_proxy_client()
)
# --------------------------------------------------
# 2) Define Prompts for Each AI Task
# --------------------------------------------------
# Prompt for classifying the incoming support ticket into a predefined category.
classification_prompt = PromptTemplate(
    input_variables=["ticket"],
    template="""
You are an SAP support assistant.
Classify the following SAP support ticket into ONE category only:
Performance, Integration, Authorization, UI, ABAP Error.
Ticket: {ticket}
"""
)
# Prompt to check how confident the model is in its classification.
confidence_prompt = PromptTemplate(
    input_variables=["classification"],
    template="""
Evaluate your confidence in the following classification:
"{classification}"
Respond only with: High or Low
"""
)
# Prompt to decide whether escalation is needed based on classification.
severity_prompt = PromptTemplate(
    input_variables=["classification"],
    template="""
Based on the issue classification "{classification}", decide if this should be escalated to Level 2 SAP Support.
Respond exactly with Yes or No.
"""
)
# Connect prompts to the SAP AI model.
classification_chain = classification_prompt | model
confidence_chain = confidence_prompt | model
severity_chain = severity_prompt | model
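```python
# ---------------------------------------------------------------
# Aside (illustrative, plain Python -- not the actual LangChain
# implementation): the "|" used above is LCEL's composition
# operator. Conceptually it behaves like this tiny Pipe class,
# where each step's output feeds the next step.
class Pipe:
    def __init__(self, fn):
        self.fn = fn
    def __or__(self, other):
        # Compose: run self first, then feed the result to other.
        return Pipe(lambda x: other.fn(self.fn(x)))
    def invoke(self, x):
        return self.fn(x)

fill_prompt = Pipe(lambda inputs: f"Classify: {inputs['ticket']}")
fake_model = Pipe(lambda prompt: "Authorization")  # stands in for the LLM call
demo_chain = fill_prompt | fake_model
print(demo_chain.invoke({"ticket": "Login fails"}))  # -> Authorization
# ---------------------------------------------------------------
```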
# --------------------------------------------------
# 3) Define the State Shape for the Workflow
# --------------------------------------------------
# This holds all data as the workflow progresses.
class TicketState(TypedDict, total=False):
    ticket: str
    classification: Optional[str]
    confidence: Optional[str]  # "high" / "low"
    severity: Optional[str]    # "high" / "low/moderate"
    retry_count: int           # Counts human corrections
    escalated: bool
    email_sent: bool
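```python
# ---------------------------------------------------------------
# Aside (illustrative): TypedDict only adds static type checking.
# At runtime the workflow state is an ordinary dict that each node
# reads, updates, and returns, and the graph passes it from node
# to node.
demo_state = {"ticket": "Login fails", "retry_count": 0}
demo_state["classification"] = "Authorization"  # what the classify node would write
assert isinstance(demo_state, dict)             # no special runtime type involved
# ---------------------------------------------------------------
```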
# --------------------------------------------------
# 4) Workflow Node Functions
# --------------------------------------------------
# Each function represents a step in the process.
# Step 1: Classify the ticket using the LLM.
def classify(state: TicketState) -> TicketState:
    response = classification_chain.invoke({"ticket": state["ticket"]})
    state["classification"] = response.content.strip()
    print(f"Classification → {state['classification']}")
    return state
# Step 2: Check how confident the model is.
def check_confidence(state: TicketState) -> TicketState:
    # If a human already corrected the classification,
    # we trust the human and override confidence to HIGH.
    if state.get("retry_count", 0) > 0:
        state["confidence"] = "high"
        print("Confidence overridden → high (human validated)")
        return state
    # Otherwise, let the AI evaluate confidence.
    response = confidence_chain.invoke({"classification": state["classification"]})
    state["confidence"] = response.content.strip().lower()
    print(f"Model Confidence → {state['confidence']}")
    return state
# Step 3: Human-in-the-loop step if confidence was low.
def human_review(state: TicketState) -> TicketState:
    print("HUMAN-IN-THE-LOOP REQUIRED:")
    print(f"Suggested classification: {state['classification']}")
    # Human may approve or correct the classification.
    updated = input("Enter correct classification (or press enter to approve): ").strip()
    if updated:
        print(f"Human corrected classification → {updated}")
        state["classification"] = updated
    # Increase retry count so system knows review already happened.
    state["retry_count"] = state.get("retry_count", 0) + 1
    return state
# Step 4: Decide whether the issue needs escalation.
def decide_escalation(state: TicketState) -> TicketState:
    response = severity_chain.invoke({"classification": state["classification"]})
    decision = response.content.strip().lower()
    # Convert response into workflow logic format.
    state["severity"] = "high" if decision == "yes" else "low/moderate"
    print(f"Escalation Decision → Severity: {state['severity']}")
    return state
# Step 5: If severity is high, escalate the ticket.
def escalate(state: TicketState) -> TicketState:
    print("Escalating ticket to Level 2 support...")
    state["escalated"] = True
    return state
# Step 6: After escalation, send an email notification.
def send_mail(state: TicketState) -> TicketState:
    print("Sending escalation email...")
    state["email_sent"] = True
    return state
# Step 7: End the workflow.
def end(state: TicketState) -> TicketState:
    print("Workflow Complete.")
    return state
# --------------------------------------------------
# 5) Build LangGraph Workflow
# --------------------------------------------------
graph = StateGraph(TicketState)
graph.add_node("classify", classify)
graph.add_node("confidence", check_confidence)
graph.add_node("human_review", human_review)
graph.add_node("decide_escalation", decide_escalation)
graph.add_node("escalate", escalate)
graph.add_node("send_mail", send_mail)
graph.add_node("end", end)
# The workflow always begins with classification.
graph.set_entry_point("classify")
# After classifying, always check confidence.
graph.add_edge("classify", "confidence")
# Route: Confidence determines next step.
def route_confidence(state: TicketState):
    return "decide_escalation" if state["confidence"] == "high" else "human_review"
graph.add_conditional_edges(
    "confidence",
    route_confidence,
    {
        "decide_escalation": "decide_escalation",
        "human_review": "human_review"
    }
)
# If human corrected, return to classification to retry.
graph.add_edge("human_review", "classify")
# Route after escalation decision.
def route_severity(state: TicketState):
    return "escalate" if state["severity"] == "high" else "end"
graph.add_conditional_edges(
    "decide_escalation",
    route_severity,
    {
        "escalate": "escalate",
        "end": "end"
    }
)
# Escalation always triggers an email, then finish.
graph.add_edge("escalate", "send_mail")
graph.add_edge("send_mail", END)
# Compile workflow
app = graph.compile()
# --------------------------------------------------
# 6) Run Workflow
# --------------------------------------------------
initial_state: TicketState = {
    "ticket": "User cannot log in using I-User ID in SAP S/4HANA Public Cloud.",
    "retry_count": 0
}
final = app.invoke(initial_state)
print("\nFinal State:", final)</code></pre><P>This is what your output will look like -</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Rithika_0-1764509911294.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/346774i11899926421B7861/image-size/large?v=v2&px=999" role="button" title="Rithika_0-1764509911294.png" alt="Rithika_0-1764509911294.png" /></span></P><P><STRONG>Conclusion:</STRONG></P><P><STRONG><SPAN>When you’re deciding between LangChain and LangGraph for your SAP AI workflows, ask yourself the following:</SPAN></STRONG><SPAN> </SPAN></P><P><STRONG><SPAN>How complex is my workflow?</SPAN></STRONG><SPAN> </SPAN></P><UL><LI><SPAN>Linear and predictable? Use </SPAN><STRONG><SPAN>LangChain</SPAN></STRONG><SPAN>.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>Dynamic or with branching logic? Go for </SPAN><STRONG><SPAN>LangGraph</SPAN></STRONG><SPAN>.</SPAN><SPAN> </SPAN></LI></UL><P><STRONG><SPAN>Do I need to persist state or approvals?</SPAN></STRONG><SPAN> </SPAN></P><UL><LI><SPAN>Short-lived, single-session tasks (like a Q&A chatbot)? </SPAN><STRONG><SPAN>LangChain</SPAN></STRONG><SPAN>.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>Long-running, multi-step processes (like procurement)?
</SPAN><STRONG><SPAN>LangGraph</SPAN></STRONG><SPAN>.</SPAN><SPAN> </SPAN></LI></UL><P><STRONG><SPAN>What’s my optimization goal - speed or control?</SPAN></STRONG><SPAN> </SPAN></P><UL><LI><SPAN>For rapid prototypes or proof-of-concepts, </SPAN><STRONG><SPAN>LangChain</SPAN></STRONG><SPAN> is your best friend.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>For production-ready, auditable, and orchestrated AI apps, </SPAN><STRONG><SPAN>LangGraph</SPAN></STRONG><SPAN> provides the structure you need.</SPAN><SPAN> </SPAN></LI></UL><P><SPAN>Start with what’s needed, keep it maintainable, and scale when your workflow truly calls for it.</SPAN><SPAN> <BR /></SPAN><SPAN>LangChain and LangGraph aren’t competitors; they are </SPAN><STRONG><SPAN>complementary</SPAN></STRONG><SPAN> in our AI projects.</SPAN><SPAN> </SPAN></P><P><SPAN>Happy building, and may your next AI project be both smart and </SPAN><STRONG><SPAN>sustainable</SPAN></STRONG><SPAN>.</SPAN></P><P><STRONG><SPAN>Additional hands-on:</SPAN></STRONG><SPAN> </SPAN></P><P><A href="http://community.sap.com/t5/artificial-intelligence-blogs-posts/hands-on-tutorial-building-an-ai-agent-with-human-in-the-loop-control/ba-p/14050267" target="_blank"><SPAN>http://community.sap.com/t5/artificial-intelligence-blogs-posts/hands-on-tutorial-building-an-ai-agent-with-human-in-the-loop-control/ba-p/14050267</SPAN></A><SPAN> </SPAN></P><P><SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/the-power-of-ai-agents-my-journey-with-langgraph/ba-p/14032724" target="_blank">https://community.sap.com/t5/technology-blog-posts-by-sap/the-power-of-ai-agents-my-journey-with-langgraph/ba-p/14032724</A></SPAN><SPAN> </SPAN></P>2025-12-09T07:50:47.232000+01:00https://community.sap.com/t5/artificial-intelligence-blogs-posts/to-built-a-strong-understanding-of-how-llm-and-genai-works-and-why-it-is/ba-p/14293944To build a
strong understanding of how LLMs and GenAI work and why they are important in today's world2025-12-18T23:20:05.650000+01:00Thakur_Avinashhttps://community.sap.com/t5/user/viewprofilepage/user-id/1614040<P><SPAN>Topics Covered</SPAN></P><UL><LI><SPAN>Famous LLMs: GPT, Mistral, LLaMA, Claude</SPAN></LI><LI><SPAN>Real-life use cases</SPAN></LI><LI><SPAN>How an LLM actually works</SPAN></LI><LI><SPAN>Why LLMs matter in 2025</SPAN></LI></UL><P><SPAN> </SPAN></P><P><SPAN>What is an LLM?</SPAN></P><P><SPAN>LLM stands for Large Language Model.</SPAN></P><P><SPAN>It is a type of Artificial Intelligence designed to understand, generate, and interact using human language.</SPAN></P><P><SPAN>LLMs are trained on massive datasets such as books, websites, technical documentation, and Wikipedia.</SPAN></P><P><SPAN> </SPAN></P><P><SPAN>Examples include ChatGPT, Claude, Gemini, LLaMA, and Mistral.</SPAN></P><P><SPAN> </SPAN></P><P><SPAN>Think of an LLM as a highly advanced auto-complete system — not just for words, but for conversations, code, documents, and reasoning.</SPAN></P><P><SPAN> </SPAN></P><P><SPAN>Why LLMs Matter in 2025</SPAN></P><UL><LI><SPAN>LLMs are <STRONG>augmenting and, in some cases, replacing</STRONG> traditional rule-based software</SPAN></LI><LI><SPAN>They power chatbots, AI tutors, code assistants, and business analysis tools</SPAN></LI><LI><SPAN>They are easy to integrate into applications via APIs</SPAN></LI><LI><SPAN>The future of work is You + AI — LLMs act as intelligent assistants</SPAN></LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Thakur_Avinash_0-1766095870245.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353964i698ADA5286C429CD/image-size/medium?v=v2&px=400" role="button" title="Thakur_Avinash_0-1766095870245.png" alt="Thakur_Avinash_0-1766095870245.png" /></span></P><P> </P><P><SPAN>Famous LLMs</SPAN></P><UL><LI><SPAN>GPT (OpenAI): Strong reasoning, coding, and
enterprise adoption</SPAN></LI><LI><SPAN>Claude (Anthropic): Safe, long-context, document-heavy workloads</SPAN></LI><LI><SPAN>LLaMA (Meta): Open-weight, customizable enterprise models</SPAN></LI><LI><SPAN>Mistral: Lightweight, fast, cost-efficient open models</SPAN></LI></UL><P><SPAN> </SPAN></P><P><SPAN>Real-Life Use Cases</SPAN></P><UL><LI><SPAN>Customer support chatbots</SPAN></LI><LI><SPAN>Document summarization</SPAN></LI><LI><SPAN>Code generation and review</SPAN></LI><LI><SPAN>SAP process guidance and copilots</SPAN></LI><LI>Knowledge search across enterprise documents</LI></UL><P> </P><P>Start thinking of how you can build AI tools using LLMs, even if you're a beginner.</P><P> </P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Thakur_Avinash_1-1766095870271.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353963i849453E9924A6E51/image-size/medium?v=v2&px=400" role="button" title="Thakur_Avinash_1-1766095870271.png" alt="Thakur_Avinash_1-1766095870271.png" /></span></P><P> </P><P>How Does an LLM Actually Work?</P><P> Simple explanation:</P><OL><LI><STRONG>Tokenization</STRONG>: Breaks your text into smaller units (tokens)</LI><LI><STRONG>Embedding</STRONG>: Converts tokens into numbers (vectors)</LI><LI><STRONG>Transformer</STRONG>: Uses an <STRONG>attention mechanism</STRONG> to decide which words matter</LI><LI><STRONG>Prediction</STRONG>: Generates the next token (word/letter) based on training</LI></OL><P>“Attention is all you need”</P><P>– the core of how LLMs work</P><P><SPAN> </SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Thakur_Avinash_2-1766095870304.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353965iFCDE0CE053FA8378/image-size/medium?v=v2&px=400" role="button"
title="Thakur_Avinash_2-1766095870304.png" alt="Thakur_Avinash_2-1766095870304.png" /></span></P><P> </P><H3 id="toc-hId-1896477007"><SPAN> </SPAN></H3><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Thakur_Avinash_3-1766095870352.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353968i5BA501FC9BC4C709/image-size/medium?v=v2&px=400" role="button" title="Thakur_Avinash_3-1766095870352.png" alt="Thakur_Avinash_3-1766095870352.png" /></span></P><P> </P><H3 id="toc-hId-1699963502"><SPAN><BR /><STRONG>Step 1: Tokenization</STRONG></SPAN></H3><P>Tokenization is the process of breaking input text into smaller units called <STRONG>tokens</STRONG>.<BR />Depending on the tokenizer, these tokens may represent <STRONG>words, subwords, or characters</STRONG>.</P><P><STRONG>Input:</STRONG><BR />"Create sales order for customer"</P><P><STRONG>Example (subword-style tokens):</STRONG><BR />["Create", " sales", " order", " for", " customer"]<BR /><BR /> These are <STRONG>not plain word splits</STRONG>.<BR />Whitespace is preserved (for example, " sales" includes a leading space), which helps the model understand context and sentence structure more accurately.</P><P>This approach enables the model to:</P><UL><LI>Handle <STRONG>domain-specific terms</STRONG> (e.g., <EM>sales order</EM>, <EM>customer</EM>)</LI><LI>Support <STRONG>multi-language enterprise data</STRONG></LI><LI>Gracefully manage <STRONG>unseen or rare words</STRONG></LI></UL><P>You can visualize how tokenization works using OpenAI’s tokenizer tool:<BR /><span class="lia-unicode-emoji" title=":link:">🔗</span> <A href="https://platform.openai.com/tokenizer" target="_blank" rel="noopener nofollow noreferrer">https://platform.openai.com/tokenizer</A></P><P><SPAN> </SPAN></P><P><SPAN> </SPAN></P><H2 id="toc-hId-1374367278"><SPAN>Step 2: Embeddings (Vector Representation)</SPAN></H2><P>After tokenization, <STRONG>each token is converted into a 
numerical vector</STRONG>, called an <STRONG>embedding</STRONG>.<BR />An embedding is a list of numbers that represents the <STRONG>semantic meaning</STRONG> of a token in a high-dimensional space.</P><P><STRONG>Important point (correctness):</STRONG></P><UL><LI>One token → <STRONG>one embedding vector</STRONG></LI><LI>Typical vector sizes are <STRONG>768, 1024, or higher</STRONG>, depending on the model</LI><LI>If a sentence has <STRONG>N tokens</STRONG>, the model produces <STRONG>N vectors</STRONG></LI></UL><H3 id="toc-hId-1306936492"><SPAN>SAP Business Example</SPAN></H3><P><STRONG>Input text:</STRONG><BR />"Create sales order for customer ABC"</P><P><STRONG>Example tokenization (simplified):</STRONG><BR />["Create", " sales", " order", " for", " customer", " ABC"]</P><P>Each token is converted into a vector:</P><P>"Create" → [0.12, -0.55, 1.03, ...]<BR />" sales" → [0.45, 0.82, 0.66, ...]<BR />" order" → [0.78, -0.21, 0.14, ...]<BR />" customer" → [-0.34, 0.91, 0.02, ...]</P><P>Each vector is individually meaningless, but becomes meaningful <STRONG>in relation to other vectors</STRONG>. Together, the vectors capture:</P><UL><LI>Business intent</LI><LI>Context (action vs object)</LI><LI>Domain relevance (sales, order, customer)</LI></UL><H2 id="toc-hId-981340268"><SPAN>Do All LLMs Generate the Same Embedding?</SPAN></H2><P><STRONG>No — embeddings are NOT universal.</STRONG><BR />The same word (e.g., <EM>Material</EM>, <EM>Vendor</EM>, <EM>Sales Order</EM>) will have different vectors depending on:</P><OL><LI><STRONG>The model</STRONG><BR />GPT, BERT, LLaMA, etc., use different architectures and objectives.</LI><LI><STRONG>Training data</STRONG><BR />A model trained on enterprise or ERP data understands SAP terms more precisely.</LI><LI><STRONG>Tokenizer strategy</STRONG><BR />BPE, WordPiece, or SentencePiece can split SAP terms differently<BR />(e.g., SalesOrder vs Sales + Order).</LI></OL><H2 id="toc-hId-784826763"><SPAN>Why Embeddings Matter in SAP Semantic Search</SPAN></H2><P>Traditional SAP search works like
this:</P><UL><LI>Exact keyword match</LI><LI>Field-based filtering</LI><LI>Limited understanding of intent</LI></UL><H3 id="toc-hId-717395977"><SPAN>Embeddings change this completely.</SPAN></H3><P><span class="lia-unicode-emoji" title=":small_blue_diamond:">🔹</span> With embeddings, <STRONG>meaning is compared instead of text</STRONG>.</P><P><STRONG>Example:</STRONG></P><P>User searches:</P><P><EM>"How to create a customer order?"</EM></P><P>SAP documents may contain:</P><UL><LI><EM>"Sales Order creation using VA01"</EM></LI><LI><EM>"Process flow for order-to-cash"</EM></LI></UL><P> Even though the words differ, embeddings place them <STRONG>close together in vector space</STRONG>, allowing the system to retrieve the right documents.</P><P><SPAN> </SPAN></P><H2 id="toc-hId-391799753"><SPAN>Step 3: Transformer (Self-Attention)</SPAN></H2><P>This is where the model <STRONG>understands context</STRONG>.<BR />The Transformer uses a mechanism called <STRONG>self-attention</STRONG>, which allows it to look at <STRONG>all tokens in the sentence at the same time</STRONG> and determine:</P><UL><LI>Which tokens are <STRONG>most important</STRONG></LI><LI>How tokens are <STRONG>related to each other</STRONG></LI><LI>Where the model should <STRONG>focus more or less</STRONG></LI></UL><P><STRONG>Key correctness note:</STRONG><BR />Attention does <STRONG>not</STRONG> “think” sequentially like humans.<BR />Instead, it assigns <STRONG>weights</STRONG> to relationships between tokens based on learned patterns.</P><H3 id="toc-hId-324368967"><SPAN>Example (Conceptual)</SPAN></H3><P><STRONG>Input sentence:</STRONG><BR />"Most famous product in France"</P><P>The model learns relationships such as:</P><UL><LI><STRONG>“product” ↔ “France”</STRONG> (geographic relevance)</LI><LI><STRONG>“famous”</STRONG> increases importance by acting as a qualifier</LI><LI>Contextually important tokens receive <STRONG>higher attention weights</STRONG></LI></UL><P> As a result, the attention mechanism 
focuses more strongly on:</P><UL><LI>" product"</LI><LI>" France"</LI></UL><P>while giving less weight to supporting words like "most" or "in".</P><H3 id="toc-hId-127855462"><SPAN>SAP Business Example</SPAN></H3><P><STRONG>Input:</STRONG><BR />"Create sales order for customer ABC"</P><P>Through self-attention, the model learns:</P><UL><LI><STRONG>“sales order”</STRONG> is the core business object</LI><LI><STRONG>“customer ABC”</STRONG> provides critical context</LI><LI><STRONG>“create”</STRONG> indicates an action or intent</LI></UL><P> This enables the model to distinguish between:</P><UL><LI><EM>Sales Order creation</EM></LI><LI><EM>Customer master data</EM></LI><LI><EM>Order-to-Cash process</EM></LI></UL><P>even though all words appear in the same sentence.</P><H3 id="toc-hId--143889412"><SPAN>Intuitive Explanation (Human Analogy)</SPAN></H3><P>You can think of attention like this:</P><P>“The key concept here is a Sales Order,<BR />the customer gives context,<BR />and the verb tells me what action is required.”</P><P>The model doesn’t reason like a human, but <STRONG>self-attention mathematically achieves the same outcome</STRONG> by emphasizing the most relevant parts of the input.</P><H2 id="toc-hId--46999910"><SPAN>Step 4: Prediction (Output Generation)</SPAN></H2><P>After processing the input through embeddings and transformer layers, the LLM produces a <STRONG>probability distribution over the next possible tokens</STRONG>.</P><P>At this stage, the model outputs <STRONG>logits/probabilities</STRONG>, which are then converted to text via decoding.<BR />The model does not emit a finished sentence in one step; instead, it predicts <STRONG>which token is most likely to come next</STRONG>, based on everything it has learned so far (prompt + prior tokens).</P><H3 id="toc-hId--536916422"><SPAN>SAP Business Example</SPAN></H3><P><STRONG>Input prompt:</STRONG><BR />"Create sales order in SAP"</P><P>The model internally computes probabilities such as:</P><UL><LI>" using" → 42%</LI><LI>" for" → 28%</LI><LI>" via" →
15%</LI><LI>" transaction" → 10%</LI><LI>" automatically" → 5%</LI></UL><P>A <STRONG>decoding algorithm</STRONG> (e.g., greedy decoding, top-k, or nucleus sampling) selects the next token based on these probabilities.</P><P><STRONG>Generated output (step by step):</STRONG></P><UL><LI>"Create sales order in SAP using VA01"</LI><LI>"Create sales order in SAP using VA01 transaction"</LI></UL><P>This process repeats <STRONG>one token at a time</STRONG> until a stopping condition is reached.</P><H3 id="toc-hId--733429927"><SPAN>What Influences the Prediction?</SPAN></H3><P>The final output depends on multiple factors:</P><UL><LI><STRONG>Training data</STRONG><BR />Determines how well the model understands SAP terminology and processes.</LI><LI><STRONG>Prompt context</STRONG><BR />Small prompt changes can significantly alter the output.</LI><LI><STRONG>Temperature</STRONG><UL><LI>Low temperature → more deterministic, factual responses</LI><LI>High temperature → more creative, diverse outputs</LI></UL></LI><LI><STRONG>Maximum token limit</STRONG><BR />Controls how long the response can be.</LI></UL><P><STRONG>Correctness note:</STRONG><BR />The model always predicts <STRONG>one next token at a time</STRONG>, even when generating long paragraphs.</P><H3 id="toc-hId--929943432"><SPAN>How This Works Conceptually (Linked to the Diagram)</SPAN></H3><OL><LI><STRONG>Prompt + previous tokens</STRONG> are fed into the LLM</LI><LI>The model computes a <STRONG>probability distribution</STRONG> over the vocabulary</LI><LI>A <STRONG>decoding strategy</STRONG> selects one token</LI><LI>The selected token is appended to the output</LI><LI>Steps 1–4 repeat until completion</LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Thakur_Avinash_4-1766095870400.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353967iB7A02100BBD08E14/image-size/medium?v=v2&px=400" role="button" 
title="Thakur_Avinash_4-1766095870400.png" alt="Thakur_Avinash_4-1766095870400.png" /></span></P><P> </P><P><SPAN><BR /><BR /></SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Thakur_Avinash_5-1766095870425.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353966i5F844A76C297591A/image-size/medium?v=v2&px=400" role="button" title="Thakur_Avinash_5-1766095870425.png" alt="Thakur_Avinash_5-1766095870425.png" /></span></P><P> </P><P><SPAN><BR /><BR /><BR /></SPAN></P><H2 id="toc-hId--833053930"><SPAN>1. Temperature</SPAN></H2><P><STRONG>Temperature controls how random or deterministic the model’s output is.</STRONG><BR />It influences how the model samples from the probability distribution of the next token.</P><UL><LI><STRONG>Low temperature (e.g., 0.2–0.3)</STRONG><UL><LI>Output is highly <STRONG>deterministic</STRONG></LI><LI>The model almost always selects the <STRONG>highest-probability token</STRONG></LI><LI>Best for <STRONG>enterprise, SAP, and factual use cases</STRONG></LI></UL></LI><LI><STRONG>High temperature (e.g., 1.0 or above)</STRONG><UL><LI>Output becomes more <STRONG>diverse and creative</STRONG></LI><LI>The model samples from a <STRONG>broader range of tokens</STRONG></LI><LI>Useful for <STRONG>brainstorming, ideation, or creative writing</STRONG></LI></UL></LI></UL><P> In simple terms:</P><P>Lower temperature = <EM>precision and consistency</EM><BR />Higher temperature = <EM>variety and creativity</EM></P><H2 id="toc-hId--1029567435"><SPAN>SAP-Oriented Intuition</SPAN></H2><P><STRONG>Prompt:</STRONG><BR />"Create sales order in SAP"</P><UL><LI><STRONG>Low temperature:</STRONG><BR />→ <EM>"Create sales order in SAP using VA01"</EM></LI><LI><STRONG>High temperature:</STRONG><BR />→ <EM>"Create a sales order in SAP through standard order processing steps"</EM></LI></UL><P>Same knowledge — different <STRONG>expression style</STRONG>, controlled by temperature</P><H2 
id="toc-hId--1226080940"><SPAN><BR /><BR />2. Top-k Sampling</SPAN></H2><P><STRONG>Top-k sampling limits the model’s choices to only the </STRONG><EM><STRONG>k</STRONG></EM><STRONG> most likely tokens.</STRONG><BR />Instead of considering the entire vocabulary, the model:</P><UL><LI>Selects the <STRONG>top </STRONG><EM><STRONG>k</STRONG></EM><STRONG> tokens</STRONG> with the highest probabilities</LI><LI><STRONG>Discards all remaining tokens</STRONG></LI><LI>Randomly samples <STRONG>only from this reduced set</STRONG>, weighted by their probabilities</LI></UL><P>Common values:</P><UL><LI>k = 20–50 for balanced outputs</LI><LI>Lower k → more deterministic</LI><LI>Higher k → more diversity</LI></UL><H3 id="toc-hId--1715997452"><SPAN>Simple Intuition</SPAN></H3><P>If the model predicts thousands of possible next tokens:</P><UL><LI><STRONG>Top-k = 40</STRONG><BR />→ Only the 40 most likely tokens are kept<BR />→ Everything else is ignored<BR />→ Sampling happens only within these 40 tokens</LI></UL><P>This prevents:</P><UL><LI>Rare, nonsensical words</LI><LI>Extremely low-probability noise</LI></UL><H3 id="toc-hId--1912510957"><SPAN>SAP Business Example</SPAN></H3><P><STRONG>Prompt:</STRONG><BR />"Create sales order in SAP"</P><P><STRONG>Next-token probabilities (simplified):</STRONG></P><TABLE><TBODY><TR><TD><P><STRONG><SPAN>Token</SPAN></STRONG></P></TD><TD><P><STRONG><SPAN>Probability</SPAN></STRONG></P></TD></TR><TR><TD><P><SPAN>using</SPAN></P></TD><TD><P><SPAN>0.45</SPAN></P></TD></TR><TR><TD><P><SPAN>for</SPAN></P></TD><TD><P><SPAN>0.25</SPAN></P></TD></TR><TR><TD><P><SPAN>via</SPAN></P></TD><TD><P><SPAN>0.15</SPAN></P></TD></TR><TR><TD><P><SPAN>automatically</SPAN></P></TD><TD><P><SPAN>0.08</SPAN></P></TD></TR><TR><TD><P><SPAN>process</SPAN></P></TD><TD><P><SPAN>0.03</SPAN></P></TD></TR><TR><TD><P><SPAN>…</SPAN></P></TD><TD><P><SPAN>…</SPAN></P></TD></TR></TBODY></TABLE><P>With <STRONG>Top-k = 3</STRONG>:</P><UL><LI>Allowed tokens → ["using", "for", 
"via"]</LI><LI>All other options are <STRONG>discarded</STRONG></LI><LI>Final token is sampled <STRONG>only from these three</STRONG></LI></UL><P>This ensures outputs remain <STRONG>relevant to SAP context</STRONG>.<BR /><BR /></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Thakur_Avinash_6-1766095870429.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353969i2BDDB98FE8EAE4BC/image-size/medium?v=v2&px=400" role="button" title="Thakur_Avinash_6-1766095870429.png" alt="Thakur_Avinash_6-1766095870429.png" /></span></P><P> </P><H2 id="toc-hId--1647437764"><SPAN><BR /><BR />3. Top-p Sampling (Nucleus Sampling)</SPAN></H2><P><span class="lia-unicode-emoji" title=":small_blue_diamond:">🔹</span> <STRONG>Top-p sampling selects the smallest set of tokens whose cumulative probability exceeds a threshold </STRONG><EM><STRONG>p</STRONG></EM><STRONG>.</STRONG><BR />Instead of fixing the number of tokens (like Top-k), the model:</P><UL><LI>Sorts tokens by probability (highest → lowest)</LI><LI>Keeps adding tokens <STRONG>until their total probability ≥ p</STRONG> (e.g., 0.9)</LI><LI><STRONG>Samples randomly only from this dynamic set</STRONG></LI><LI>Discards all remaining tokens</LI></UL><P>Because the set size changes per step, Top-p is <STRONG>more adaptive than Top-k</STRONG>.</P><H3 id="toc-hId--2137354276"><SPAN>Why Top-p Is Different from Top-k</SPAN></H3><UL><LI><STRONG>Top-k</STRONG> → fixed number of tokens (rigid)</LI><LI><STRONG>Top-p</STRONG> → variable number of tokens (context-aware)</LI></UL><P>If the model is confident:</P><UL><LI>Few tokens may already reach p = 0.9</LI></UL><P>If the model is uncertain:</P><UL><LI>More tokens are included automatically</LI></UL><H3 id="toc-hId-1961099515"><SPAN>SAP Business Example</SPAN></H3><P><STRONG>Prompt:</STRONG><BR />"Create sales order in SAP"</P><P><STRONG>Next-token probabilities 
(simplified):</STRONG></P><TABLE><TBODY><TR><TD><P><STRONG><SPAN>Token</SPAN></STRONG></P></TD><TD><P><STRONG><SPAN>Probability</SPAN></STRONG></P></TD><TD><P><STRONG><SPAN>Cumulative</SPAN></STRONG></P></TD></TR><TR><TD><P><SPAN>using</SPAN></P></TD><TD><P><SPAN>0.45</SPAN></P></TD><TD><P><SPAN>0.45</SPAN></P></TD></TR><TR><TD><P><SPAN>for</SPAN></P></TD><TD><P><SPAN>0.25</SPAN></P></TD><TD><P><SPAN>0.70</SPAN></P></TD></TR><TR><TD><P><SPAN>via</SPAN></P></TD><TD><P><SPAN>0.15</SPAN></P></TD><TD><P><SPAN>0.85</SPAN></P></TD></TR><TR><TD><P><SPAN>automatically</SPAN></P></TD><TD><P><SPAN>0.08</SPAN></P></TD><TD><P><SPAN>0.93</SPAN></P></TD></TR><TR><TD><P><SPAN>process</SPAN></P></TD><TD><P><SPAN>0.04</SPAN></P></TD><TD><P><SPAN>0.97</SPAN></P></TD></TR></TBODY></TABLE><P>With <STRONG>Top-p = 0.9</STRONG>:</P><UL><LI>Selected tokens → ["using", "for", "via", "automatically"]</LI><LI>Sampling happens <STRONG>only within this nucleus</STRONG></LI></UL><P><span class="lia-unicode-emoji" title=":pushpin:">📌</span> This adapts naturally based on SAP context and wording.</P><H2 id="toc-hId-2057989017"><SPAN>When to Use Top-p (SAP Perspective)</SPAN></H2><P>Use Top-p when you want:</P><UL><LI>Stable, <STRONG>context-aware enterprise responses</STRONG></LI><LI>Flexibility across different prompts</LI><LI>Fewer tuning headaches than Top-k</LI></UL><P>Common enterprise setting:</P><UL><LI>top_p = 0.9</LI><LI>temperature = 0.2–0.4</LI></UL><H2 id="toc-hId-1861475512"><SPAN><BR />4. 
Min-p Sampling (Minimum Probability Cutoff)</SPAN></H2><P><STRONG>Min-p sampling removes all tokens whose probability falls below a fixed threshold </STRONG><EM><STRONG>p</STRONG></EM><STRONG>.</STRONG><BR />Only tokens that individually meet the minimum probability requirement are allowed for sampling.</P><UL><LI>Tokens with <STRONG>probability < p</STRONG> are <STRONG>discarded</STRONG></LI><LI>Sampling happens only among the <STRONG>remaining high-confidence tokens</STRONG></LI><LI>Unlike Top-k or Top-p, the cutoff is based on <STRONG>absolute probability</STRONG>, not rank or cumulative mass</LI></UL><P>Example threshold:</P><UL><LI>min_p = 0.05 or 0.1</LI></UL><H3 id="toc-hId-1371559000"><SPAN>Simple Intuition</SPAN></H3><P>If the model predicts the next token with probabilities:</P><TABLE><TBODY><TR><TD><P><STRONG><SPAN>Token</SPAN></STRONG></P></TD><TD><P><STRONG><SPAN>Probability</SPAN></STRONG></P></TD></TR><TR><TD><P><SPAN>using</SPAN></P></TD><TD><P><SPAN>0.45</SPAN></P></TD></TR><TR><TD><P><SPAN>for</SPAN></P></TD><TD><P><SPAN>0.25</SPAN></P></TD></TR><TR><TD><P><SPAN>via</SPAN></P></TD><TD><P><SPAN>0.15</SPAN></P></TD></TR><TR><TD><P><SPAN>automatically</SPAN></P></TD><TD><P><SPAN>0.08</SPAN></P></TD></TR><TR><TD><P><SPAN>process</SPAN></P></TD><TD><P><SPAN>0.04</SPAN></P></TD></TR><TR><TD><P><SPAN>…</SPAN></P></TD><TD><P><SPAN>…</SPAN></P></TD></TR></TBODY></TABLE><P>With <STRONG>min-p = 0.1</STRONG>:</P><UL><LI>Allowed tokens → ["using", "for", "via"]</LI><LI>Tokens like "automatically" and "process" are removed</LI><LI>Sampling occurs only among <STRONG>confident choices</STRONG></LI></UL><H3 id="toc-hId-1175045495"><SPAN>SAP Business Example</SPAN></H3><P><STRONG>Prompt:</STRONG><BR />"Create sales order in SAP"</P><P>Using Min-p ensures:</P><UL><LI>Rare or irrelevant words are filtered out</LI><LI>Outputs stay <STRONG>business-accurate</STRONG></LI><LI>No low-confidence or nonsensical tokens appear in responses</LI></UL><P>Especially useful 
in:</P><UL><LI>SAP copilots</LI><LI>Transaction guidance</LI><LI>Compliance-sensitive outputs</LI></UL><H2 id="toc-hId-1271934997"><SPAN>When Min-p Is Useful (Enterprise Perspective)</SPAN></H2><P>Use Min-p when:</P><UL><LI>You need <STRONG>high precision</STRONG></LI><LI>Hallucinations must be avoided</LI><LI>The domain vocabulary is well-defined (SAP, ERP, finance)</LI></UL><P> </P><P><SPAN> </SPAN></P><P><SPAN> </SPAN></P><P><STRONG><SPAN>Recommended Enterprise Settings:</SPAN></STRONG></P><UL><LI><SPAN>Temperature: 0.2 – 0.4</SPAN></LI><LI><SPAN>Top-p: 0.9</SPAN></LI><LI><SPAN>Min-p: 0.05</SPAN></LI><LI><SPAN>Avoid very high temperature for business use</SPAN></LI></UL><P><SPAN> </SPAN></P><P><STRONG><SPAN>Key Takeaway</SPAN></STRONG></P><P><SPAN>LLMs do not think like humans.</SPAN></P><P><SPAN>They predict the next most likely token — repeatedly — guided by probabilities and attention.</SPAN></P><P><SPAN> </SPAN></P>2025-12-18T23:20:05.650000+01:00https://community.sap.com/t5/artificial-intelligence-blogs-posts/how-inpeek-leveraged-the-sap-hana-knowledge-graph-engine-to-power-a-chatbot/ba-p/14251870How inpeek Leveraged the SAP HANA Knowledge Graph Engine to Power a Chatbot for Structured Data2025-12-22T08:01:04.045000+01:00fhuenihttps://community.sap.com/t5/user/viewprofilepage/user-id/1718197<P data-unlink="true"><SPAN>At inpeek , we are always looking for innovative ways to make our solutions smarter and more efficient. Our SaaS product </SPAN><A href="https://www.sap.com/products/technology-platform/partners/inpeek-ag-summarix.html" target="_blank" rel="noopener noreferrer"><SPAN><STRONG>Summarix</STRONG></SPAN></A><SPAN> supports us in managing CVs, project references, skills, customers, and more, enabling the seamless creation of call-for-tender documents. To make the work of our employees and sales team easier, we set out to simplify how users interact with Summarix</SPAN><SPAN>’</SPAN><SPAN>s structured data. 
Out of this ambition, </SPAN><SPAN><STRONG>Summy</STRONG></SPAN><SPAN>, our AI-powered chatbot, was born.</SPAN><SPAN> </SPAN></P><P><SPAN>We realized this project </SPAN><SPAN>in close collaboration with SAP through the </SPAN><STRONG><SPAN>Early Adopter Care (EAC) Program</SPAN></STRONG><SPAN> for the </SPAN><STRONG><SPAN>SAP HANA Knowledge Graph Engine</SPAN></STRONG><SPAN><STRONG>, </STRONG></SPAN><SPAN>a powerful extension of the SAP HANA database. </SPAN><SPAN>In the following</SPAN><SPAN> sections</SPAN><SPAN>, we </SPAN><SPAN>will </SPAN><SPAN>share </SPAN><SPAN>details of the</SPAN><SPAN> Summy</SPAN><SPAN> project</SPAN><SPAN>, the challenges we </SPAN><SPAN>encountered</SPAN><SPAN>, and the technical decisions that shaped the solution.</SPAN><SPAN> </SPAN></P><H2 id="toc-hId-1763639750">The Challenge </H2><P><SPAN>Summarix already provided a user-friendly UI with filtering and search capabilities built on <STRONG>SAP Fiori</STRONG></SPAN><SPAN>,<STRONG> <a href="https://community.sap.com/t5/c-khhcw49343/SAP+Cloud+Application+Programming+Model/pd-p/9f13aee1-834c-4105-8e43-ee442775e5ce" class="lia-product-mention" data-product="100-1">SAP Cloud Application Programming Model</a> </STRONG></SPAN><STRONG>(CAP)</STRONG><SPAN> and deployed on <STRONG><a href="https://community.sap.com/t5/c-khhcw49343/SAP+Business+Technology+Platform/pd-p/73555000100700000172" class="lia-product-mention" data-product="1215-1">SAP Business Technology Platform</a> (BTP) </STRONG></SPAN><SPAN>. 
While this approach worked well for simple questions, it had several limitations:</SPAN><SPAN> </SPAN></P><UL><LI><SPAN>Filters could only cover what the product owners or developers had envisioned.</SPAN></LI></UL><UL><LI><SPAN>Complex questions across multiple entities quickly exceeded the UI’s filter options.</SPAN></LI></UL><UL><LI><SPAN>Aggregational queries (e.g., identifying the most experienced consultant in a skill or calculating the average years of experience over all consultants) were impossible with standard filters.</SPAN></LI></UL><P><SPAN>To overcome these challenges and accelerate the search process, we envisioned a chatbot that would:</SPAN><SPAN> </SPAN></P><UL><LI><SPAN>Focus on </SPAN><STRONG><SPAN>information retrieval</SPAN></STRONG><SPAN> from structured data.</SPAN></LI></UL><UL><LI><SPAN>Provide </SPAN><STRONG><SPAN>precise, single-turn answers</SPAN></STRONG><SPAN> to questions.</SPAN></LI></UL><UL><LI><SPAN>Leverage new SAP technologies, particularly the </SPAN><STRONG><SPAN>SAP HANA Knowledge Graph Engine</SPAN></STRONG><SPAN>.</SPAN></LI></UL><P><SPAN>This was also an opportunity for us to gain early hands-on experience with the Knowledge Graph Engine and to explore its potential. 
</SPAN><SPAN> </SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="fhueni_0-1761227213197.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/331666i582D8BC318A9A96C/image-size/medium/is-moderation-mode/true?v=v2&px=400" role="button" title="fhueni_0-1761227213197.png" alt="fhueni_0-1761227213197.png" /></span></P><H2 id="toc-hId-1567126245"><STRONG><SPAN>The Solution</SPAN></STRONG><SPAN> </SPAN></H2><P><SPAN>We implemented an innovative architecture built entirely on the SAP BTP. The system combines a <STRONG>knowledge graph</STRONG> and an <STRONG>agentic workflow</STRONG> to deliver precise answers to questions regarding the data stored in Summarix.</SPAN><SPAN> </SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="architecture.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/331676iE410A527DCAB42C7/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="architecture.png" alt="architecture.png" /></span> </P><P><SPAN>Key components of the solution:</SPAN><SPAN> </SPAN></P><UL><LI><STRONG><SPAN>SAP HANA Knowledge Graph Engine</SPAN></STRONG><SPAN>: Stores data as a knowledge graph, facilitating SPARQL-based queries.</SPAN></LI></UL><UL><LI><STRONG><SPAN>SAP CAP backend</SPAN></STRONG><SPAN>: Manages orchestration, workflow logic, and query processing. While Python is a popular choice for many AI projects, we deliberately chose to integrate AI into our CAP (Node.js) backend using the </SPAN><STRONG><SPAN>SAP AI SDK</SPAN></STRONG><SPAN>. 
</SPAN><SPAN>Utilising <STRONG>LangGraph.js</STRONG> and <STRONG>LangChain.js</STRONG>, we have modeled the agentic workflow directly in JavaScript.</SPAN></LI></UL><UL><LI><STRONG><SPAN>SAP Generative AI Hub</SPAN></STRONG><SPAN>: Provides access to LLM inference. We leverage its <STRONG>orchestration</STRONG> features and <STRONG>prompt templates</STRONG>, while retaining the flexibility to switch between models from OpenAI, Amazon, Google, and others.</SPAN></LI></UL><UL><LI><STRONG><SPAN>Angular UI</SPAN></STRONG><SPAN>: A lightweight chat interface that runs as a shell plugin within the <STRONG>SAP Work Zone</STRONG>.</SPAN></LI></UL><UL><LI><STRONG><SPAN>Langfuse</SPAN></STRONG><SPAN>: Ensures observability and evaluation. This open-source tool enables us to trace executions, measure performance, and curate evaluation datasets.</SPAN></LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="fiori_screensho.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/331677i34279F020C2EB2AA/image-size/large?v=v2&px=999" role="button" title="fiori_screensho.png" alt="fiori_screensho.png" /></span></P><H2 id="toc-hId-1370612740"><STRONG><SPAN>Key Features</SPAN></STRONG><SPAN> </SPAN></H2><P><SPAN>Our chatbot relies on an <STRONG>agentic AI workflow</STRONG>. While it is not a fully autonomous agent, the workflow defines a repeatable sequence of steps with conditional branches. 
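</SPAN></P><P>The production workflow is modeled with LangGraph.js; purely as an illustration of the conditional-branch pattern it describes (hypothetical node names, not inpeek's actual code), the same idea can be sketched in plain Python:</P>

```python
# Illustrative sketch only: a conditional-branch workflow as a plain state
# machine. Node names and stub logic are hypothetical, not inpeek's code.

def extract_entities(state):
    state["entities"] = ["SAP BTP"]          # stub: would call an LLM
    return "generate_sparql"

def generate_sparql(state):
    state["query"] = "SELECT ?e WHERE { ... }"  # stub: LLM-generated SPARQL
    return "execute_query"

def execute_query(state):
    # stub: would run against the knowledge graph; branch to self-correction
    # when the query fails or returns nothing
    state["result"] = "3 consultants found"
    return "self_correct" if state["result"] is None else "answer"

def self_correct(state):
    state["retries"] = state.get("retries", 0) + 1
    return "generate_sparql" if state["retries"] < 2 else "answer"

def answer(state):
    state["answer"] = f"Answer based on: {state['result']}"
    return None  # terminal node

NODES = {f.__name__: f for f in
         (extract_entities, generate_sparql, execute_query, self_correct, answer)}

def run(state, start="extract_entities"):
    node = start
    while node is not None:   # follow conditional edges until a terminal node
        node = NODES[node](state)
    return state

state = run({"question": "Who knows SAP BTP?"})
print(state["answer"])
```

<P>Each node returns the name of the next node, which is how conditional edges (such as the self-correction loop) are expressed.</P><P><SPAN>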
This</SPAN><SPAN> approach ensures consistency, reduces risks, and simplifies debugging, while still supporting dynamic, question-specific workflows and granting the agent some autonomy.</SPAN><SPAN> </SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="fhueni_3-1761227303026.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/331669i202D9CCD97A2CAF2/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="fhueni_3-1761227303026.png" alt="fhueni_3-1761227303026.png" /></span></P><H3 id="toc-hId-1303181954"><STRONG><SPAN>Knowledge Graph Generation</SPAN></STRONG><SPAN> </SPAN></H3><P><SPAN>The project started with building the knowledge graph, which is based on the structured data from the SAP HANA Database of Summarix. This includes the following steps:</SPAN><SPAN> </SPAN></P><OL><LI><SPAN>Export data from SQL into CSV.</SPAN></LI><LI><SPAN>Create graph nodes for entities (e.g., employees, projects, skills) with attributes while omitting irrelevant columns.</SPAN></LI><LI><SPAN>Model n-n relationships through connection nodes.</SPAN></LI><LI><SPAN>Export everything into <STRONG>Turtle (.ttl)</STRONG> files.</SPAN></LI><LI><SPAN>Ingest these files into the SAP HANA Knowledge Graph Engine.</SPAN></LI></OL><P><SPAN>Ensuring data quality is paramount, as errors in this area can propagate through to the chatbot and significantly impact downstream task performance. As part of the ongoing development and implementation of improvements, the </SPAN><SPAN>knowledge graph creation</SPAN><SPAN> 
process has been repeatedly revised and enhanced.</SPAN><SPAN> </SPAN></P><H3 id="toc-hId-1106668449"><STRONG><SPAN>Entity Extraction and Matching</SPAN></STRONG><SPAN> </SPAN></H3><P><SPAN>Users naturally phrase questions in different ways (e.g., <I>SAP BTP vs. SAP Business Technology Platform</I>). Since each skill has a canonical name in Summarix, robust entity matching is required.</SPAN><SPAN> </SPAN></P><P><SPAN>Our approach involved the following steps:</SPAN><SPAN> </SPAN></P><UL><LI><SPAN>Extract potential entities (skills, employees, projects, and customers) from the question via an LLM using few-shot learning.</SPAN></LI></UL><UL><LI><SPAN>Normalize entity names (singular form, multiple known variations).</SPAN></LI></UL><UL><LI><SPAN>Search with regex-based case-insensitive filters for the existing entities.</SPAN></LI></UL><P><SPAN>After matching the entities, we store them in the agent state for subsequent steps.</SPAN><SPAN> </SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="fhueni_4-1761227317514.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/331670iC87A8D824879E195/image-size/large?v=v2&px=999" role="button" title="fhueni_4-1761227317514.png" alt="fhueni_4-1761227317514.png" /></span></P><H3 id="toc-hId-910154944"><STRONG><SPAN>SPARQL Query Generation</SPAN></STRONG><SPAN> </SPAN></H3><P><SPAN>After identifying the entities, the next step is to generate a SPARQL query. 
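</SPAN></P><P>For illustration only, a question such as "Who are the most experienced consultants for SAP BTP?" might be turned into a query of the following shape. The prefix and property names below are hypothetical, not Summarix's actual ontology; a small Python helper shows how matched entities could be interpolated into a query template:</P>

```python
# Illustrative sketch: interpolating a matched entity into a SPARQL template.
# The prefix (sx:) and property names are hypothetical, not Summarix's schema.
SPARQL_TEMPLATE = """\
PREFIX sx: <http://example.com/summarix#>
SELECT ?employee ?years WHERE {{
  ?employee sx:hasSkill ?skill .
  ?skill sx:name "{skill}" .
  ?employee sx:yearsOfExperience ?years .
}}
ORDER BY DESC(?years)
LIMIT {limit}"""

def build_query(skill: str, limit: int = 5) -> str:
    """Fill the template with a canonical skill name matched earlier."""
    return SPARQL_TEMPLATE.format(skill=skill, limit=limit)

print(build_query("SAP BTP"))
```

<P>In practice the query text comes from the LLM rather than a fixed template, which is why the ontology and examples described next matter so much.</P><P><SPAN>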
To guide the LLM, we invested heavily in context engineering and provide:</SPAN><SPAN> </SPAN></P><UL><LI><SPAN>Context such as identity, instructions, and current date/time.</SPAN></LI></UL><UL><LI><SPAN>The <STRONG>full ontology</STRONG> of the knowledge graph, including attributes and relationships.</SPAN></LI></UL><UL><LI><SPAN>Detailed <STRONG>descriptions</STRONG> of all attributes, entities, and relations.</SPAN></LI></UL><UL><LI><SPAN>Examples of valid SPARQL queries.</SPAN></LI></UL><P><SPAN>For this step, we (currently) rely on <STRONG>GPT-4.1</STRONG> for its stronger reasoning capabilities, while lighter calls use <STRONG>GPT-4o-mini</STRONG>.</SPAN><SPAN> </SPAN></P><P><SPAN>We chose to use a knowledge graph approach, although it would also be possible to directly generate SQL and execute queries against the relational SAP HANA Cloud database. In our experiments, answers based on SQL queries showed slightly lower performance compared to those derived from the knowledge graph approach.</SPAN><SPAN> </SPAN></P><H3 id="toc-hId-713641439"><STRONG><SPAN>Evaluation and Observability</SPAN></STRONG><SPAN> </SPAN></H3><P><SPAN>From day one, we prioritized evaluation at inpeek to avoid a “black-box” development process, to establish clear KPIs, and to track their development over time. 
The key steps of our evaluation include:</SPAN><SPAN> </SPAN></P><UL><LI><SPAN>Curated a <STRONG>baseline dataset (~30 items)</STRONG> with questions and ground truth answers.</SPAN></LI></UL><UL><LI><SPAN>Built an <STRONG>evaluation API endpoint</STRONG> to automatically rerun the dataset after each iteration.</SPAN></LI></UL><UL><LI><SPAN>Used <STRONG>LLM-as-a-judge</STRONG> scoring (0–1) to assess the correctness of an answer compared to the ground truth answer.</SPAN></LI></UL><UL><LI><SPAN>Introduced <STRONG>user feedback loops</STRONG>: flagged chatbot answers are added to new datasets, enabling continuous improvement based on previous incorrect answers.</SPAN></LI><LI><SPAN>Tracked additional metrics, such as <STRONG>latency, costs</STRONG>, and <STRONG>self-correction steps</STRONG>.</SPAN></LI></UL><H2 id="toc-hId-388045215"><STRONG><SPAN>Key Technical Considerations</SPAN></STRONG><SPAN> </SPAN></H2><UL><LI><STRONG><SPAN>Multi-tenancy</SPAN></STRONG><SPAN>: Since Summarix is a SaaS product, we implemented our own multi-tenant logic to support different environments. So far, the SAP HANA Knowledge Graph does not offer multi-tenancy capabilities</SPAN><SPAN>. 
</SPAN><SPAN> </SPAN></LI></UL><UL><LI><STRONG><SPAN>Automated KG ingestion</SPAN></STRONG><SPAN>: A tenant-aware Python job updates the knowledge graph daily as data evolves in Summarix.</SPAN></LI></UL><UL><LI><STRONG><SPAN>CAP instead of Python</SPAN></STRONG><SPAN>: By keeping everything in CAP (Node.js), we simplified integration with authentication, lifecycle management, and SAP tooling.</SPAN></LI></UL><UL><LI><STRONG><SPAN>Context engineering</SPAN></STRONG><SPAN>: Providing a complete ontology with extended descriptions is crucial for reliable SPARQL generation.</SPAN></LI></UL><UL><LI><STRONG><SPAN>Evaluation-first approach</SPAN></STRONG><SPAN>: Systematic evaluation ensured objective improvements and prevented regressions.</SPAN></LI></UL><H2 id="toc-hId-191531710"><SPAN><STRONG>Conclusion</STRONG></SPAN><SPAN> </SPAN></H2><P><SPAN>With <STRONG>Summy</STRONG>, we explored how the <STRONG>SAP HANA Knowledge Graph Engine</STRONG> can serve as a foundation for Retrieval-Augmented Generation (RAG) on structured data. By combining knowledge graphs, SPARQL, LLMs, and agentic workflows, we created a chatbot that can provide precise answers to complex business questions. This means consultants and sales reps can now answer tender-related questions in seconds instead of manually filtering data!</SPAN><SPAN> </SPAN></P><P><SPAN>This project demonstrated that success hinges on more than just the agent workflow; it also depends on <STRONG>data quality, entity mapping, and evaluation pipelines</STRONG></SPAN><SPAN>. 
</SPAN><SPAN>We had an excellent experience with the SAP HANA Knowledge Graph Engine and look forward to building on this foundation for future use cases.</SPAN><SPAN> </SPAN></P><P><SPAN>There is also a <A href="https://medium.com/@fabianhueni/how-we-leveraged-knowledge-graphs-for-a-rag-based-chatbot-on-structured-data-c417556b3fe1" target="_blank" rel="noopener nofollow noreferrer">Medium blog post</A> that delves deeper into the AI Engineering aspect, as well as a <A href="https://www.sap.com/asset/dynamic/2025/10/62792c1d-247f-0010-bca6-c68f7e60039b.html" target="_blank" rel="noopener noreferrer">SAP Success Story</A> highlighting the use of the knowledge graph for the chatbot and our SaaS solution, Summarix.</SPAN></P><P><SPAN>If you’d like to learn more about this project or discuss AI in the SAP ecosystem, feel free to contact us on LinkedIn:</SPAN><SPAN> </SPAN></P><P><A href="https://www.linkedin.com/in/fabian-h%C3%BCni/ " target="_blank" rel="noopener nofollow noreferrer">Fabian Hüni (AI Engineer)</A><BR /><A href="https://www.linkedin.com/in/fabian-k%C3%BCng-87b444107/" target="_blank" rel="noopener nofollow noreferrer">Fabian Küng (SAP Expert Consultant)</A></P>2025-12-22T08:01:04.045000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/how-to-test-oauth-enabled-inbound-communication-in-sap-using-an-api-client/ba-p/14077042How to Test OAuth-Enabled Inbound Communication in SAP Using an API Client2025-12-22T11:34:21.364000+01:00Eugenieyahttps://community.sap.com/t5/user/viewprofilepage/user-id/1803647<H1 id="toc-hId-1579316738">Introduction</H1><P><SPAN>This blog post explains how to test OAuth-enabled inbound communication in SAP using an API client. 
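</SPAN></P><P>The test flow later in this guide uses the Authorization Code grant with PKCE (RFC 7636). As background, the code verifier/challenge pair that an API client such as Bruno generates automatically can be sketched in Python (S256 method; the helper name is ours, not from any SAP or Bruno API):</P>

```python
# Background sketch: generating a PKCE code verifier and S256 code challenge
# per RFC 7636 — the step an API client performs before the authorization
# request. Values are random on each run; only the lengths are fixed.
import base64
import hashlib
import secrets

def make_pkce_pair():
    # 32 random bytes -> 43-char URL-safe verifier (within RFC 7636 bounds)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # challenge = BASE64URL(SHA-256(verifier)), padding stripped
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
print(len(verifier), len(challenge))  # 43 43
```

<P>The client sends the challenge with the authorization request and the verifier with the token request, so the server can bind the two together.</P><P><SPAN>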
It walks through the configuration steps and provides practical examples for validating the setup.</SPAN></P><P><SPAN>When working with inbound communication in SAP S/4HANA Cloud or on-premises systems, developers and integration specialists often need a lightweight tool to test APIs secured with OAuth 2.0. </SPAN></P><P>While there are many API testing tools available today, this guide focuses on <STRONG>Bruno</STRONG>—a modern, open-source alternative to the more established <STRONG>Postman</STRONG>. Personally, I think of Postman as the old guard, and Bruno as the new pup on the block: sleek, efficient, and eager to fetch your tokens! <EM>(And yes, my love for dogs may have influenced this choice just a little.) </EM><BR /> </P><P class="lia-align-center" style="text-align: center;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Image was created by the author using an AI illustration tool and is original content." style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/354623i7F1B810BDECDDEA9/image-size/medium/is-moderation-mode/true?v=v2&px=400" role="button" title="Eugenieya_0-1766399067671.png" alt="Image was created by the author using an AI illustration tool and is original content." 
/><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image was created by the author using an AI illustration tool and is original content.</span></span></P><P> <span class="lia-unicode-emoji" title=":paw_prints:">🐾</span><SPAN> </SPAN><SPAN>In this post, I’ll walk you through how to:</SPAN></P><OL><LI><SPAN>Configure a custom communication arrangement based on an existing communication scenario for OAuth 2.0</SPAN></LI><LI><SPAN>Use an API client to test inbound communication secured with OAuth 2.0</SPAN></LI></OL><P><STRONG><SPAN>Note:</SPAN></STRONG><SPAN> This guide assumes you already have a communication scenario that supports OAuth 2.0.<BR /><BR /></SPAN></P><H1 id="toc-hId-1382803233"><SPAN>Prerequisites</SPAN></H1><UL><LI><SPAN>A communication scenario in your S/4HANA system that supports OAuth 2.0</SPAN></LI><LI><SPAN><SPAN>An API client installed on your operating system </SPAN></SPAN>(for example, Bruno<SPAN>, </SPAN>Postman<SPAN>, or any equivalent tool)</SPAN></LI></UL><P><SPAN>So, grab your treats and let’s get our paws dirty—Bruno’s ready to sniff out those OAuth tokens and help you unleash smoother integrations!<BR /><BR /></SPAN></P><H1 id="toc-hId-1186289728"><SPAN>Step 1: Enable a Custom Communication Arrangement for OAuth</SPAN></H1><H2 id="toc-hId-1118858942"><SPAN>1. Create a Communication System and Communication User</SPAN></H2><P><span class="lia-unicode-emoji" title=":paw_prints:">🐾</span> <EM>Think of the Communication System as Bruno’s doghouse—it needs to be cozy, secure, and ready for visitors. 
</EM><span class="lia-unicode-emoji" title=":paw_prints:">🐾</span></P><P><SPAN>Open the <STRONG>Communication Systems</STRONG> app and create a new system:<BR /><BR /></SPAN></P><TABLE border="1" width="100%"><TBODY><TR><TD width="50%" height="30px"><SPAN><STRONG>System ID</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN><EM>Z_BRUNO</EM></SPAN></TD></TR><TR><TD width="50%" height="30px"><SPAN><STRONG>System Name</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN><EM>Z_BRUNO</EM></SPAN></TD></TR></TBODY></TABLE><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Bruno1.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/287795i75DBF04F18F36CB1/image-size/large?v=v2&px=999" role="button" title="Bruno1.png" alt="Bruno1.png" /></span></P><P><SPAN><BR />Under </SPAN><STRONG>Technical Data</STRONG><SPAN>:</SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Bruno2.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/287802i64E76A747645169E/image-size/large?v=v2&px=999" role="button" title="Bruno2.png" alt="Bruno2.png" /></span></P><OL><LI><SPAN>Enable <STRONG>Inbound Only</STRONG>.</SPAN></LI><LI><SPAN>In OAuth 2.0 Settings:</SPAN><SPAN> </SPAN><UL><LI><SPAN>Set <STRONG>Client Redirect URI Type</STRONG> to <EM><STRONG>Loopback</STRONG></EM></SPAN></LI><LI><SPAN>Set a path in the <STRONG>Path</STRONG> field, e.g., <EM>/bruno/redirect<BR /></EM> → This becomes your <STRONG>Callback URL</STRONG>: <EM><A href="http://127.0.0.1/bruno/redirect" target="_blank" rel="noopener nofollow noreferrer">http://127.0.0.1/bruno/redirect</A></EM> (used in Step 2)</SPAN></LI></UL></LI></OL><P><span class="lia-unicode-emoji" title=":paw_prints:">🐾</span> <EM>When you set the redirect path to <FONT face="helvetica"><FONT face="lucida sans unicode,lucida sans">/</FONT><FONT face="mingliu,biaukai"><FONT face="lucida sans 
unicode,lucida sans">b</FONT>runo/redirect</FONT></FONT>, you’re basically telling Bruno where to wag his tail when he’s done fetching the token. </EM><span class="lia-unicode-emoji" title=":paw_prints:">🐾</span></P><P><SPAN>Under <STRONG>Users for Inbound Communication</STRONG></SPAN></P><P><EM><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Bruno3.1.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/287810iEA937BBABCC3ACA0/image-size/large?v=v2&px=999" role="button" title="Bruno3.1.png" alt="Bruno3.1.png" /></span></EM></P><OL><LI><SPAN> Add a new communication user</SPAN></LI><LI> <SPAN>Set <STRONG>Authentication Method</STRONG> to <EM>OAuth 2.0</EM></SPAN></LI><LI> <SPAN><SPAN>Define <STRONG>OAuth 2.0 Client ID</STRONG>, e.g., <EM>Z_BRUNO</EM></SPAN></SPAN><SPAN><EM>_ID</EM></SPAN></LI><LI>Enable <STRONG>Refresh Allowed</STRONG><SPAN> and set </SPAN><STRONG>Refresh Token Expiry Time</STRONG><SPAN>, e.g., <EM>5 Days</EM></SPAN></LI></OL><P><SPAN>Save your communication system and inbound communication user for your future testing with Bruno acting as OAuth client. This user will be later used for the <STRONG>Authorization Code Grant with PKCE</STRONG>.</SPAN></P><H2 id="toc-hId-922345437"><SPAN>2. 
Create a Communication Arrangement<BR /></SPAN></H2><P><SPAN>Open the <STRONG>Communication Arrangements</STRONG> app and create a new arrangement based on a communication scenario that supports OAuth 2.0 as an authentication method, e.g., <FONT face="lucida sans unicode,lucida sans">SAP_COM_0053</FONT> (<EM>Purchase Order Integration</EM>).<BR /><BR /></SPAN></P><TABLE border="1" width="100%"><TBODY><TR><TD width="50%" height="30px"><SPAN><STRONG>Scenario</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN><EM>SAP_COM_0053</EM> (exists only in S/4HANA Cloud)</SPAN></TD></TR><TR><TD width="50%" height="30px"><STRONG>A</STRONG><SPAN><STRONG>rrangement Name</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN><EM>TEST_COM_0053_WITH_BRUNO</EM></SPAN></TD></TR><TR><TD width="50%" height="30px"><SPAN><STRONG>Communication System</STRONG></SPAN></TD><TD width="50%" height="30px"> <SPAN><EM>Z_BRUNO </EM>(defined earlier)</SPAN></TD></TR></TBODY></TABLE><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Bruno4.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/287805iEAD7C74CB7A46A07/image-size/large?v=v2&px=999" role="button" title="Bruno4.png" alt="Bruno4.png" /></span></P><P><SPAN>Under <STRONG>Inbound Communication</STRONG>:<BR /></SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2025-05-26 at 14.01.56.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/289855iFB9C9233A1D50658/image-size/large?v=v2&px=999" role="button" title="Screenshot 2025-05-26 at 14.01.56.png" alt="Screenshot 2025-05-26 at 14.01.56.png" /></span></P><P> </P><OL><LI><SPAN>Enter the <STRONG>OAuth 2.0 Client ID</STRONG> you defined earlier: <EM>Z_BRUNO_ID.</EM></SPAN></LI><LI>Save your communication arrangement.</LI></OL><P><SPAN><STRONG>Note:</STRONG> Only after saving the arrangement, the previously defined OAuth client gets 
created. A clickable <STRONG>OAuth 2.0 Details</STRONG> label will appear.<BR /><BR /></SPAN><SPAN>In <STRONG>OAuth 2.0 Details</STRONG>, you will find all the necessary data for your API client:</SPAN></P><OL><LI><SPAN>Client ID</SPAN></LI><LI><SPAN>Token Service URL</SPAN></LI><LI><SPAN>Authorization URL<BR /></SPAN></LI></OL><P><SPAN>These are needed to fetch a token with Bruno.<BR /></SPAN></P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Jane13_5-1752675913592.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/287345iFE1C16A0468F53DB/image-size/large?v=v2&px=999" role="button" title="Jane13_5-1752675913592.png" alt="Jane13_5-1752675913592.png" /></span></P><P><SPAN>Now you are ready to test your OAuth-secured API using Bruno!</SPAN></P><H1 id="toc-hId-596749213"><SPAN><BR /></SPAN><SPAN>Step 2: Use Bruno to Test the OAuth Communication<BR /></SPAN></H1><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Image was created by the author using an AI illustration tool and is original content." style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/289857i9B523AAF8196F79B/image-size/medium/is-moderation-mode/true?v=v2&px=400" role="button" title="Screenshot 2025-07-22 at 14.52.37.png" alt="Image was created by the author using an AI illustration tool and is original content." /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Image was created by the author using an AI illustration tool and is original content.</span></span></P><P> </P><P> </P><H2 id="toc-hId-529318427"><SPAN>1. Create a New Collection</SPAN></H2><P><span class="lia-unicode-emoji" title=":paw_prints:">🐾</span> <SPAN><EM>Creating a new collection in Bruno is like giving your dog a new toy—name it, set it up, and let the fun begin! 
</EM> </SPAN><span class="lia-unicode-emoji" title=":paw_prints:">🐾</span></P><OL><LI><SPAN>Open Bruno</SPAN></LI><LI><SPAN>Create a <STRONG>New Collection</STRONG></SPAN></LI><LI><SPAN>Name it, e.g., <EM>My_test_collection</EM></SPAN></LI></OL><H2 id="toc-hId-332804922"><SPAN>2. Set Up OAuth in the Auth Tab</SPAN></H2><OL><LI><SPAN>Go to the <STRONG>Auth</STRONG> tab and choose <STRONG>OAuth 2.0</STRONG></SPAN></LI><LI><SPAN>Set up the configurations:</SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2025-05-26 at 14.57.27.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/289856i4355B5383D364050/image-size/large?v=v2&px=999" role="button" title="Screenshot 2025-05-26 at 14.57.27.png" alt="Screenshot 2025-05-26 at 14.57.27.png" /></span><P> </P></LI></OL><TABLE border="1" width="100%"><TBODY><TR><TD width="50%" height="30px"><SPAN><STRONG>Grant Type</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN>Authorization Code</SPAN></TD></TR><TR><TD width="50%" height="30px"><SPAN><STRONG>Callback URL</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN><EM><A href="http://127.0.0.1/bruno/redirect" target="_blank" rel="noopener nofollow noreferrer">http://127.0.0.1/bruno/redirect</A></EM></SPAN></TD></TR><TR><TD width="50%" height="30px"><SPAN><STRONG>Authorization URL</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN>From Communication Arrangement</SPAN></TD></TR><TR><TD width="50%" height="30px"><SPAN><STRONG>Access Token URL</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN>From Communication Arrangement</SPAN></TD></TR><TR><TD width="50%" height="30px"><SPAN><STRONG>Client ID</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN>From Communication Arrangement</SPAN></TD></TR><TR><TD width="50%" height="30px"><SPAN><STRONG>State</STRONG></SPAN></TD><TD width="50%" height="30px"><SPAN>dummy</SPAN></TD></TR><TR><TD width="50%" height="30px"><SPAN><STRONG>Use 
PKCE</STRONG> check box</SPAN></TD><TD width="50%" height="30px"><SPAN><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> enabled</SPAN></TD></TR></TBODY></TABLE><P><span class="lia-unicode-emoji" title=":paw_prints:">🐾</span> <EM>Don’t forget to enable PKCE—it’s like putting a leash on your OAuth flow. Keeps things secure and under control. </EM><span class="lia-unicode-emoji" title=":paw_prints:">🐾</span></P><H2 id="toc-hId-136291417"><SPAN>3. Send a Test Request</SPAN></H2><OL><LI><SPAN>Choose <STRONG>Get Access Token</STRONG></SPAN></LI><LI><SPAN>A browser window will open; log on as a <STRONG>Business User</STRONG></SPAN></LI><LI><SPAN>Approve the request by clicking <STRONG>Allow<BR /><BR /></STRONG></SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Jane13_7-1752675913682.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/287349iED8E702C3F077EFF/image-size/large?v=v2&px=999" role="button" title="Jane13_7-1752675913682.png" alt="Jane13_7-1752675913682.png" /></span></LI><LI><SPAN>If successful, you’ll receive an <STRONG>Access Token</STRONG>. 
<BR /></SPAN><SPAN>The response typically appears in the </SPAN><STRONG>Response</STRONG><SPAN> tab in JSON format like this:</SPAN><SPAN><SPAN><BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="BrunoToken.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/290378i8446F574B74757AA/image-size/large?v=v2&px=999" role="button" title="BrunoToken.png" alt="BrunoToken.png" /></span></SPAN></SPAN><P> </P></LI><LI><SPAN>Use this token to call your API services:<BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot 2025-07-18 at 00.04.06.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/288205i90696A990F59E779/image-size/large?v=v2&px=999" role="button" title="Screenshot 2025-07-18 at 00.04.06.png" alt="Screenshot 2025-07-18 at 00.04.06.png" /></span></SPAN></LI></OL><H1 id="toc-hId--189304807"><STRONG><SPAN><BR /></SPAN></STRONG>Troubleshooting Tips</H1><UL><LI>Make sure you log in with a<SPAN> </SPAN><STRONG>Business User</STRONG><SPAN> </SPAN>assigned to the correct<SPAN> </SPAN><STRONG>Business Roles</STRONG><SPAN> </SPAN>for the API.</LI><LI>If you see the following screen after logging in:</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Jane13_8-1752675913684.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/287348i0A338B29F1531D03/image-size/large?v=v2&px=999" role="button" title="Jane13_8-1752675913684.png" alt="Jane13_8-1752675913684.png" /></span></P><P><SPAN>it likely means the user lacks the required authorizations or the wrong user was used.</SPAN></P><H1 id="toc-hId-383921771"><SPAN>Summary<BR /></SPAN></H1><P><SPAN>With this setup, you can securely test OAuth-protected endpoints in SAP using Bruno or similar tools. 
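Outside Bruno, the same authenticated call can be scripted. Below is a minimal Python sketch; the host is a placeholder and the OData service shown is only an example of one exposed via a communication arrangement such as SAP_COM_0053 — substitute your own system URL and service:

```python
# Hypothetical host and entity set -- adjust to your own system and to a
# service your communication arrangement actually exposes.
def auth_headers(token: str) -> dict:
    """The headers Bruno attaches once an access token has been fetched."""
    return {"Authorization": f"Bearer {token}", "Accept": "application/json"}

# import requests
# resp = requests.get(
#     "https://<your-host>/sap/opu/odata/sap/API_PURCHASEORDER_PROCESS_SRV/A_PurchaseOrder?$top=1",
#     headers=auth_headers("<access-token-from-bruno>"),
#     timeout=30,
# )
# resp.raise_for_status()
```

Uncomment the request once a valid token is at hand; any HTTP client that can set these two headers works the same way.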
This approach is useful for validating integration scenarios and troubleshooting authentication issues.<BR /><BR />Bruno provides a clean and open-source alternative to Postman for testing OAuth-secured APIs in SAP S/4HANA. With just a few configuration steps, you can simulate real-world OAuth flows and validate your communication scenarios with ease.<BR /><BR /><span class="lia-unicode-emoji" title=":paw_prints:">🐾</span> <EM>With Bruno by your side, testing OAuth APIs is less of a chore and more of a walk in the park. Just don’t forget the treats (a.k.a. access tokens)!</EM> <span class="lia-unicode-emoji" title=":paw_prints:">🐾</span></SPAN></P><P> </P><H2 id="toc-hId--105994741"><STRONG>Image Attribution and Licensing<BR /></STRONG></H2><P>All illustrations in this post were created by the author using Microsoft Copilot (AI-based design tool) for educational purposes. The images are original content and free to use under the author’s permission for non-commercial, educational purposes. No external branding or promotional elements are included.</P><P><SPAN><BR /><BR /><BR /></SPAN></P>2025-12-22T11:34:21.364000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/leveraging-sap-ai-core-to-build-custom-ai-agents-with-crewai/ba-p/14279604Leveraging SAP AI Core to Build Custom AI Agents with CrewAI2025-12-24T06:41:18.594000+01:00Manisha_19https://community.sap.com/t5/user/viewprofilepage/user-id/1695623<H1 id="toc-hId-1636640266">Introduction</H1><P>AI agents are becoming the go-to pattern for building modular, autonomous assistants that can think, act, and collaborate. 
<STRONG>CrewAI</STRONG> gives you a developer-friendly way to orchestrate these agents, while <STRONG>SAP AI Core</STRONG> provides enterprise-grade access to LLMs with proper governance, scalability, and security.</P><P>This quickstart shows you how to connect <STRONG>CrewAI</STRONG> to <STRONG>SAP AI Core</STRONG>, create a custom LLM interface, and build your first working agent — all in under 10 minutes.</P><BLOCKQUOTE><P>By the end, you’ll have a simple <STRONG>CrewAI</STRONG> agent powered by an LLM hosted in <EM>SAP AI Core</EM>.</P></BLOCKQUOTE><P> </P><H1 id="toc-hId-1440126761">How SAP Facilitates Building Custom Agents</H1><P>CrewAI handles the <EM>agent orchestration layer</EM>, while SAP AI Core hosts and manages your LLMs. </P><P>Connecting them allows you to use SAP’s AI infrastructure directly inside your CrewAI workflows — no external API juggling.</P><P>Here’s the high-level flow:</P><P>User Prompt → CrewAI Agent → Custom LLM Class → SAP AI Core Endpoint → Model Response</P><P><STRONG>Flow summary:</STRONG></P><OL><LI>User sends a prompt → CrewAI Agent receives it.</LI><LI>Agent forwards it to a custom LLM wrapper (`CustomLLM`, defined in Step 3).</LI><LI>The wrapper authenticates using the token and sends the request to SAP AI Core.</LI><LI>SAP AI Core executes the model and returns a response to the agent.</LI><LI>CrewAI handles the output and delivers the final answer.</LI></OL><P>This setup ensures enterprise-level reliability with full control over which models your agents use.</P><P> </P><H1 id="toc-hId-1243613256">Prerequisites</H1><P>Before jumping in, make sure you have:</P><UL><LI>Access to <STRONG>SAP AI Core</STRONG> with deployed LLM models (via AI Launchpad)</LI><LI><STRONG>Python 3.10+</STRONG></LI><LI><STRONG>VS Code</STRONG> or your favorite IDE</LI><LI>Installed libraries:</LI></UL><pre class="lia-code-sample language-bash"><code>pip install ipykernel
pip install crewai
pip install crewai_tools
pip install generative-ai-hub-sdk[all]
pip install langchain_community</code></pre><UL><LI>Your SAP AI Core Service Key downloaded as a .json file</LI><LI>Your A-game to create some amazing agents</LI></UL><P>*If you want to learn how to deploy your LLM model on launchpad, you can go through this <A href="https://developers.sap.com/tutorials/ai-core-generative-ai.html" target="_self" rel="noopener noreferrer">course.</A></P><P> </P><H1 id="toc-hId-1047099751">Step-by-Step Guide</H1><P>Let us now dive into building our own custom agent which utilizes SAP's AI Core.</P><H2 id="toc-hId-979668965">Step 1: Get Your SAP AI Core Credentials</H2><P>In your SAP BTP Cockpit → AI Core Instance → Service Keys, create a new key and save it locally as credentials.json.</P><P>Example structure:</P><pre class="lia-code-sample language-json"><code>{
  "clientid": "your-client-id",
  "clientsecret": "your-client-secret",
  "url": "https://<your-region>.authentication.sap.hana.ondemand.com",
  "serviceurls": {
    "AI_API_URL": "https://***.aws.ml.hana.ondemand.com"
  }
}</code></pre><P>This file contains everything needed to authenticate and access your deployed LLM.</P><H2 id="toc-hId-783155460">Step 2: Get Your BTP LLM Access Token</H2><P>We’ll use this token to authenticate each request to SAP AI Core.</P><pre class="lia-code-sample language-python"><code>import json, requests
# open credentials file
with open("<path-to-file>/credentials.json", "r") as key_file:
    svcKey = json.load(key_file)
authUrl = svcKey["url"]
clientid = svcKey["clientid"]
clientsecret = svcKey["clientsecret"]
apiUrl = svcKey["serviceurls"]["AI_API_URL"]
# request token
params = {"grant_type": "client_credentials" }
resp = requests.post(f"{authUrl}/oauth/token",
                     auth=(clientid, clientsecret),
                     params=params)
BtpLlmAccessToken = resp.json()["access_token"]
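# (Added sketch) The token response also carries "expires_in" (seconds,
# typically 3600 for XSUAA). Recording an absolute expiry lets you refresh
# proactively instead of waiting for a 401:
import time

def token_expiry(token_response, now=None, skew=60):
    """Epoch time after which the token should be re-requested."""
    now = time.time() if now is None else now
    return now + token_response.get("expires_in", 3600) - skew

# re-run the token request above once time.time() >= token_expiry(resp.json())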
print("Token retrieved successfully!")</code></pre><BLOCKQUOTE><P>Tokens usually expire after 1 hour. Make sure to refresh them before making multiple requests.</P></BLOCKQUOTE><H2 id="toc-hId-586641955">Step 3: Create a Custom LLM Class Template for CrewAI</H2><P>CrewAI lets you define your own LLM wrappers, which means we can make one that talks directly to SAP AI Core.</P><P>In this class, we can define the procedure to use the deployed models from SAP AI Core.</P><P>You can now import this class into any CrewAI workflow to use SAP’s hosted LLMs.</P><pre class="lia-code-sample language-python"><code>from crewai import BaseLLM
import requests
from typing import Any, Dict, List, Optional, Union

class CustomLLM(BaseLLM):
    def __init__(self, model: str, api_key: str, endpoint: str, temperature: Optional[float] = None):
        super().__init__(model=model, temperature=temperature)
        self.api_key = api_key
        self.endpoint = endpoint

    def call(
        self,
        messages: Union[str, List[Dict[str, str]]],
        tools: Optional[List[dict]] = None,
        **kwargs
    ) -> Union[str, Any]:
        """Call the LLM with the given messages."""
        # Convert string to message format if needed
        if isinstance(messages, str):
            messages = [{"role": "user", "content": messages}]
        payload = {
            "messages": messages,
            "temperature": self.temperature,
            "max_tokens": 1000  # This can be modified as per your use case. You can parameterize this as well.
        }
        headers = {
            'AI-Resource-Group': "<your-resource-group-name>",
            'Content-Type': 'application/json',
            'Authorization': f'Bearer {self.api_key}',  # token passed in at construction time
        }
        # Make API call
        response = requests.post(
            self.endpoint,
            headers=headers,
            json=payload,
            timeout=30
        )
        response.raise_for_status()
        result = response.json()
        return result["choices"][0]["message"]["content"]</code></pre><H2 id="toc-hId-390128450">Step 4: Initialize your Custom LLM Class</H2><P>Alright, time to bring that class to life. </P><P>The cool thing here is that you can hook it up to any model you’ve deployed in SAP AI Core, whether it’s GPT-4o, Claude, or your own fine-tuned beast.</P><P>In this example, we’ll wire it up to a <STRONG>GPT-4o</STRONG> deployment and let CrewAI start chatting through it. You can get the deployment endpoint from your SAP AI Launchpad. You can find the payload formats for different models <A href="https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/example-payloads-for-inferencing-sap-ai-core-hosted" target="_self" rel="noopener noreferrer">here</A>.</P><pre class="lia-code-sample language-python"><code>deployment_url = "<deployment URL>" + "/chat/completions?api-version=2024-02-01"
# instantiating Custom LLM object
custom_llm = CustomLLM(
    model="gpt-4o",
    api_key=BtpLlmAccessToken,
    endpoint=deployment_url,
    temperature=0.7
)</code></pre><H2 id="toc-hId-193614945">Step 5: Build a Simple CrewAI Agent</H2><P>Now that your LLM class is ready, let’s create a basic Research CrewAI agent that uses it.</P><pre class="lia-code-sample language-python"><code>from crewai import Agent, Task, Crew
# Define the research agent
agent = Agent(
    role="Research Assistant",
    goal="Find and analyze information",
    backstory="You serve as a research assistant, helping to gather and analyze information about SAP tools and related topics.",
    llm=custom_llm
)

# Create and execute tasks
task = Task(
    description="Research the latest developments in SAP AI Core",
    expected_output="A comprehensive summary",
    agent=agent
)
crew = Crew(agents=[agent], tasks=[task])
result = crew.kickoff()</code></pre><P>If all goes well, you’ll see a clean response from your SAP-hosted LLM, directly through your CrewAI agent. Here is a sample output:</P><BLOCKQUOTE><P>CrewOutput(raw="The latest developments in SAP AI Core focus on enhancing the capabilities for deploying and managing AI models in a scalable and efficient manner. SAP AI Core is part of SAP Business Technology Platform (BTP) and is designed to integrate with other SAP applications to streamline AI operat....ding tools that enhance productivity, and driving innovation through AI-powered solutions.", pydantic=None, json_dict=None, agent='Research Assistant', output_format=<OutputFormat.RAW: 'raw'>)], token_usage=UsageMetrics(total_tokens=0, prompt_tokens=0, cached_prompt_tokens=0, completion_tokens=0, successful_requests=0))</P></BLOCKQUOTE><H1 id="toc-hId--131981279"> </H1><H1 id="toc-hId-441245299">Conclusion</H1><P>And that’s it — you just built your first CrewAI agent powered by <STRONG>SAP AI Core</STRONG>.</P><P>From here, you can:</P><UL><LI>Add tools for the agent to interact with SAP APIs (e.g., fetching invoice data).</LI><LI>Add memory and multi-agent orchestration.</LI><LI>Deploy it on <STRONG>SAP BTP</STRONG> or your internal infrastructure for enterprise use.</LI></UL><P>This quick start gives you the foundation — the next step is making your agent truly <EM>yours</EM>.</P><P> </P><H1 id="toc-hId-244731794">References</H1><UL><LI><A href="https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/example-payloads-for-inferencing-sap-ai-core-hosted" target="_blank" rel="noopener noreferrer">Example Payloads for Inferencing - SAP AI Core Hosted | SAP Help Portal</A></LI><LI><A href="https://developers.sap.com/tutorials/ai-core-generative-ai.html" target="_blank" rel="noopener noreferrer">Prompt LLMs in the generative AI hub in SAP AI Core & Launchpad | SAP Tutorials</A></LI></UL><P> </P><P> 
</P>2025-12-24T06:41:18.594000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702From Docs to Dialogue: Unlock AI Self-Service Learning for Your Internal Teams2026-01-08T23:27:21.357000+01:00joshlang412https://community.sap.com/t5/user/viewprofilepage/user-id/161843<P><FONT size="5"><STRONG><U>From Docs to Dialogue:</U> </STRONG><U><STRONG>Unlock AI Self-Service Learning for Your Internal Teams</STRONG></U></FONT></P><UL><LI>Every organization wants onboarding to be <STRONG>faster, clearer, and less dependent on tribal knowledge</STRONG>.</LI><LI>NotebookLM enables the <STRONG>conversion of trusted product documentation and internal project artifacts into an AI-based conversational resource</STRONG> that employees can engage with naturally, as if they were chatting with their most knowledgeable coworker. Instead of searching across files, PDFs, and web pages, teams can ask direct questions about product behavior, setup requirements, and configuration considerations, and instantly turn answers into internal training materials.</LI><LI>This AI-driven approach<UL><LI><STRONG>scales knowledge</STRONG> without replacing expertise</LI><LI><STRONG>streamlines learning</STRONG> without adding process overhead</LI><LI>helps organizations <STRONG>build deeper product fluency</STRONG> internally</LI></UL></LI></UL><DIV class=""> </DIV><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joshlang412_1-1767381261428.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/357831i08A9E4309F39EE01/image-size/medium/is-moderation-mode/true?v=v2&px=400" role="button" title="joshlang412_1-1767381261428.png" alt="joshlang412_1-1767381261428.png" /></span></P><P><FONT size="5"><U><STRONG>Table of Contents</STRONG></U></FONT></P><P><ul =""><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a 
href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-1658537610">Benefits & Use Cases</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-1462024105">Gathering Trusted Knowledge</a></li><li style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-1394593319">SAP Product Documentation</a></li><li style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-1198079814">Other relevant documentation</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-872483590">Getting Started with NotebookLM</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-675970085">Chatbot-Driven Learning</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-479456580">Generating Internal Training Assets</a></li><li style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a 
href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-412025794">Audio Overview: podcast interview for auditory learning</a></li><li style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-215512289">Infographic: a one-stop shop PDF for visual learning</a></li><li style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-18998784">Video Overview: a training video for visual learning</a></li><li style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-169739636">Flashcards and Quizzes: A hands-on tool for your team to learn</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-266629138">Best Practices</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId-70115633">End-to-End overview</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId--126397872">A real-world example, see NotebookLM in action</a></li><li 
style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId--616314384">Prompt</a></li><li style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId--812827889">Infographic</a></li><li style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId--1009341394">Video Overview</a></li><li style="list-style-type:disc; margin-left:15px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId--1205854899">Chatbot Response</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/from-docs-to-dialogue-unlock-ai-self-service-learning-for-your-internal/ba-p/14300702#toc-hId--1108965397">Considerations & Final Takeaways</a></li></ul></P><H1 id="toc-hId-1658537610"><U>Benefits & Use Cases</U></H1><UL><LI>Faster <STRONG>employee ramp-up</STRONG> by turning existing SAP Documentation into a conversational AI learning environment</LI><LI><STRONG>Instant answers</STRONG> for distributed teams across roles, experience levels, and time zones</LI><LI>Improved <STRONG>configuration confidence</STRONG> through real-time dialogue with authoritative sources</LI><LI><STRONG>Consistent onboarding experiences</STRONG> that reduce knowledge gaps between new hires</LI><LI>Rapid <STRONG>training asset creation</STRONG> (FAQs, slide outlines, learning summaries) generated directly from 
source material</LI><LI><STRONG>Search less, learn more</STRONG>—teams ask questions instead of digging across files and folders.</LI><LI>More <STRONG>informed internal experts</STRONG> because foundational questions get answered efficiently and consistently</LI><LI>Support interactions become <STRONG>more efficient and solution-focused</STRONG> when they happen, backed by deeper internal product fluency.</LI></UL><H1 id="toc-hId-1462024105"><U>Gathering Trusted Knowledge</U></H1><P>To begin using <STRONG>NotebookLM</STRONG> as your AI self-service learning tool, you must first gather all of the relevant knowledge on the specific SAP product you are looking to focus on. These will become your sources within NotebookLM and will act as the knowledge base that NotebookLM uses as it guides your learning.</P><P><EM>The free version of NotebookLM is limited to 50 sources, so please consider this as you capture your list of sources.</EM></P><H2 id="toc-hId-1394593319">SAP Product Documentation</H2><P><A href="https://help.sap.com/docs/" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/</A> contains all relevant product documentation for your SAP products and is the best place to start gathering the content you need. </P><OL><LI>Go to <A href="https://help.sap.com/docs/ " target="_self" rel="noopener noreferrer">SAP Help</A> </LI><LI><STRONG>Search</STRONG> for the specific SAP product you are looking to home in on and then click on the <STRONG>Suggested Product</STRONG> that most aligns with your search. 
<BR />Example: SAP Business Network for Procurement </LI><LI>In the bottom right corner of the top banner, click <STRONG>Download PDF </STRONG></LI><LI><STRONG>Select all </STRONG>relevant options from the pop-up page, then click <STRONG>Download</STRONG> at the bottom of the page </LI></OL><H2 id="toc-hId-1198079814">Other relevant documentation</H2><P>In addition to the standard SAP Product Documentation, you can gather information from various sources, including webpages, documents, PDFs, video transcripts, and raw text.</P><P><EM>Consult with your internal security and compliance teams to review the NotebookLM terms of service before interacting with any non-public documentation </EM></P><UL><LI>SOPs (standard operating procedures) for business processes </LI><LI>configuration specifications and customization designs</LI><LI>Project planning and scope documentation </LI><LI>public-facing / external blogs or training content on the product or business process (YouTube videos, Blogs, industry journals, etc.) </LI><LI>Product demos or training recording transcripts </LI><LI>Company branding information </LI></UL><H1 id="toc-hId-872483590"><U>Getting Started with NotebookLM</U></H1><P>Now that you have all your sources prepared, you can begin uploading them to NotebookLM. </P><OL><LI>Open <A href="https://notebooklm.google.com/" target="_self" rel="nofollow noopener noreferrer"><STRONG>NotebookLM</STRONG></A> and complete the onboarding process to <STRONG>create a free account</STRONG>. <EM>This will be tied to your Gmail/Google account. Please consult your internal teams regarding any enterprise plans or privacy concerns.</EM></LI><LI>Create a new notebook, selecting the <STRONG>+ Create New</STRONG> button</LI><LI><STRONG>Upload the files</STRONG> that you captured in the previous step by clicking <STRONG>Upload Files </STRONG></LI><LI>Allow your sources to load. 
Once complete, you should see all your <STRONG>sources displayed in the left panel</STRONG>.</LI><LI>To share your notebook with other colleagues, you can click the <STRONG>Share</STRONG> button in the top right of the page. </LI></OL><H1 id="toc-hId-675970085"><U>Chatbot-Driven Learning</U></H1><P>In the <STRONG>middle section</STRONG> of the page, you can interact with the notebook via <STRONG>Chat,</STRONG> as if you were talking with a product expert. </P><UL><LI>NotebookLM will then use <STRONG>AI to develop an answer</STRONG>, intentionally prioritizing your repository of sources to provide the proper context and deliver the best answer to your question.</LI><LI>Within the AI-generated response, you will notice that <STRONG>small numbers with circles</STRONG> around them are displayed throughout the answer. You can click these, and then the left panel will take you directly into the specific source document where it pulled that part of the answer from</LI><LI>For a more visual approach, you can also click the <STRONG>Mind Map</STRONG> option in the Studio panel. Once generated, you can click to view a visual representation of the sources you uploaded. As you click deeper into the mind map, the chat will auto-update and help kick-start a conversation you can use to drill down into a specific topic area. </LI></UL><P><EM>As with any AI LLM-based tool, results may vary and should be reviewed in detail by your teams for accuracy. 
</EM></P><P>You can continue your conversation with the AI chatbot by:</P><UL><LI>asking additional clarifying questions specific to your needs</LI><LI>reframing your question in more complete wording to provide more context</LI><LI>asking it to drill down deeper into specific areas of its response </LI><LI>asking it to drill down deeper into specific documents/sources </LI><LI>consulting the <A href="https://support.google.com/notebooklm/answer/16179559?hl=en&ref_topic=16164070&sjid=9358726304321028467-NA" target="_self" rel="nofollow noopener noreferrer">additional documentation </A></LI></UL><H1 id="toc-hId-479456580"><U>Generating Internal Training Assets</U></H1><P>The <STRONG>right portion</STRONG> of the page, called <STRONG>Studio,</STRONG> will include options (depending on your plan) to generate content tailored to your team's training and enablement needs. </P><P>For each option in the studio, <SPAN>click the <STRONG>Pencil</STRONG> icon to customize the result by providing a <STRONG>prompt</STRONG> that guides the studio's content generation and is</SPAN> tailored to your needs.</P><UL><LI>If you do not click the pencil, it will auto-generate based on all your sources. Depending on the size and scope of your sources, the output may be too high-level without a custom prompt provided. </LI></UL><P>Each Studio option may also provide its own set of <STRONG>configuration toggles</STRONG> in addition to a custom prompt.</P><P>Each option also <SPAN>includes a <STRONG>Download</STRONG> option, allowing you to export</SPAN> any generated content to your internal libraries or training tools. </P><H2 id="toc-hId-412025794">Audio Overview: podcast interview for auditory learning</H2><P>The <STRONG>audio overview</STRONG> option generates a <STRONG>one-on-one podcast interview</STRONG> between two people who discuss your sources and/or your custom prompt.
You can use this to create targeted podcasts that dive into specific areas, then include them in onboarding materials for teams who prefer an auditory format. </P><P>*Pro Tip: Upload your own <STRONG>transcript or outline</STRONG> of how you would like the conversation to go. This will help guide the model to create an interview that covers the areas you consider essential, rather than guessing what to highlight. </P><P>*Pro Tip: When providing your custom prompt, <STRONG>include specifics</STRONG> about the <STRONG>tone</STRONG> you would like the speakers to use. By default, the podcasts may seem a bit exaggerated at times. You can prompt it to use more natural, realistic tones and/or ask it to avoid certain words like "massive" or "huge", for example. </P><H2 id="toc-hId-215512289">Infographic: a one-stop shop PDF for visual learning</H2><P>The <STRONG>infographic</STRONG> will create a <STRONG>one-pager PDF</STRONG> with AI-generated visuals that provide a more "connect the dots" overview of the sources. It will start with a high-level overview that connects all sources (similar to a Mind Map), but if you provide a specific prompt, it can also provide a more refined overview of that topic area. </P><P>*Pro Tip: Check the visual for any discrepancies. If you see issues, recreate the infographic by changing your custom prompt to call out the mistake you want to avoid. The more specifics you provide, the better. </P><P>*Pro Tip: In your custom prompt, call out the audience that will be consuming the graphic itself. It can tailor the messaging and graphic style to better suit that audience. You can also ask it to reference a branding source if you have uploaded one that includes logos, color schemes, etc.
</P><H2 id="toc-hId-18998784">Video Overview: a training video for visual learning</H2><P>The <STRONG>video overview </STRONG>will provide a more <STRONG>traditional training format</STRONG>, with one speaker presenting the topic while an AI-generated video plays in the background. The video will resemble a PowerPoint presentation. You can use this to create a high-level video overview of the sources, or to provide a custom prompt that goes into one area in more detail. </P><P>*Pro Tip: The graphics and numbers in the video itself need to be reviewed for accuracy. If you see discrepancies between what is displayed and what is mentioned in the video, you will need to refine your custom prompt and regenerate the video to close the gap. The more specific you are (e.g., an outline or script, plus dos and don'ts), the better the results you will get. You are unable to edit the video itself or adjust the slides/visuals after it is generated. </P><H2 id="toc-hId-169739636">Flashcards and Quizzes: hands-on tools for your team to learn </H2><P>The <STRONG>Flashcards</STRONG> option searches for primary and recurring keywords and definitions in your sources and automatically generates flashcards to help learners study and memorize key terms. </P><P>The <STRONG>Quiz</STRONG> option will also search for primary and recurring keywords across the sources and generate an interactive question-and-answer quiz. </P><P><EM>*Quizzes cannot be downloaded externally </EM></P><H1 id="toc-hId-266629138"><U>Best Practices</U></H1><UL><LI>Depending on your sources and the sheer volume of data within the notebook, the AI models can and will hallucinate. Having <STRONG>detailed, specific prompts</STRONG> to guide each conversation or studio creation toward a specific goal will produce the best results. It is better to break your content needs into smaller groupings with detailed prompts than to blindly ask it to create high-level overviews.
</LI><LI>Be sure to consult with your internal privacy and compliance teams before uploading any non-public information into NotebookLM. As a general rule, only upload <STRONG>publicly available content</STRONG> to NotebookLM. </LI><LI><STRONG>YouTube videos</STRONG> can be effective at providing an additional perspective through a more expert-focused, human-guided lens on your dataset of sources. Collect long-form YouTube interviews with experts in your area and upload the URLs as additional sources to your notebook. </LI><LI>Create your own <STRONG>scripts or outlines</STRONG> beforehand and provide those to the chat or studio in addition to your primary ask. This will help ensure it follows the correct path you are looking for and does not hallucinate. You can also use other LLMs or AI Chatbots to help you come up with ideas and draft a high-level script or outline to follow. </LI><LI>If you share your notebook with end users, ensure they understand that chatbot results are guidance to help kick-start an investigation into a topic and must be <STRONG>reviewed and refined</STRONG>. End users should be wary of copying/pasting AI-generated content externally.
</LI></UL><H1 id="toc-hId-70115633"><U>End-to-End overview</U></H1><P><STRONG>Find your product at SAP Help</STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joshlang412_0-1767380178000.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/357824i08FEC6355E0CB9D2/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="joshlang412_0-1767380178000.png" alt="joshlang412_0-1767380178000.png" /></span></P><P><STRONG>Download the SAP product documentation to PDF </STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joshlang412_0-1767380280271.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/357825i25496D86921872B2/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="joshlang412_0-1767380280271.png" alt="joshlang412_0-1767380280271.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joshlang412_1-1767380331957.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/357826i2F1C96D3E7861357/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="joshlang412_1-1767380331957.png" alt="joshlang412_1-1767380331957.png" /></span></P><P><STRONG>Create a notebook and upload files to NotebookLM </STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joshlang412_2-1767380457944.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/357827iB85BF649CBEDD5F6/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="joshlang412_2-1767380457944.png" alt="joshlang412_2-1767380457944.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joshlang412_3-1767380504131.png" style="width: 999px;"><img 
src="https://community.sap.com/t5/image/serverpage/image-id/357828i621122F749A19968/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="joshlang412_3-1767380504131.png" alt="joshlang412_3-1767380504131.png" /></span></P><P><STRONG>Observe your sources and begin interacting with the chatbot & studio</STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joshlang412_0-1767380632096.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/357829i106F3C090D530D2B/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="joshlang412_0-1767380632096.png" alt="joshlang412_0-1767380632096.png" /></span></P><H1 id="toc-hId--126397872"><U>A real-world example: NotebookLM in action</U></H1><P>The SAP Business Network for Procurement is used globally to connect buyers and suppliers and enable interaction and transactions. This means contending with invoicing laws and portals that vary by country. The application itself includes standard invoicing features and built-in support for various country-specific E-Invoicing mandates. </P><P>In this example, we will investigate how <SPAN>we can learn more about this topic using NotebookLM with a <U><STRONG>single prompt</STRONG></U></SPAN> to generate a <STRONG>text answer</STRONG>, an<STRONG> infographic</STRONG>, and a <STRONG>video overview</STRONG> file. </P><H2 id="toc-hId--616314384">Prompt</H2><PRE>I want to learn more about e-invoicing and country-specific invoicing rules within the SBN for the Procurement application. My team has a basic understanding of the core SBN Procurement flows, but is seeking more guidance on the e-invoicing and country-specific invoicing logic and features within the product. Can you help produce content that clarifies this for us and explains the topic in detail as it relates to the larger product?
I'd like to know what is supported in the app, how I would interact with these features, if there are any specific setup or customization requirements to use this, and a list of which countries are currently supported. <BR />The audience here is a group of corporate employees who know procurement as a whole. Keep things professional and short/sweet. Don't use any exaggerations and don't overhype with any big, bold words. Use simpler, down-to-earth professional wording. </PRE><H2 id="toc-hId--812827889">Infographic</H2><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joshlang412_0-1767384252319.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/357839i345D2CAA82B6C556/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="joshlang412_0-1767384252319.png" alt="joshlang412_0-1767384252319.png" /></span></P><H2 id="toc-hId--1009341394">Video Overview</H2><P><EM>(Embedded video in the original post)</EM></P><H2 id="toc-hId--1205854899">Chatbot Response</H2><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Here is an overview of how SAP Business Network (SBN) handles e-invoicing and country-specific compliance.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">In the context of the Procurement application, e-invoicing on SBN goes beyond emailing a PDF. The system uses structured data (cXML) to validate invoices against your business rules and the legal regulations of the supplier’s country before the invoice reaches your internal systems.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">The core feature that manages this is Country/Region-Based Invoice Rules.
When a supplier submits an invoice, SBN detects the originating country based on the supplier’s VAT ID prefix or address, then applies a specific set of validation rules that override default transaction rules. This ensures that suppliers in different countries — for example Italy vs the United States — see different required fields and localized behaviors.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">The platform supports several compliance models depending on the region.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">For VAT-regulated regions, particularly in the EU, Australia, and Canada, SBN enforces required tax information such as VAT IDs, supply dates, tax breakdowns, and local currency requirements.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">In countries using government clearance models, where the government must approve an invoice before the buyer receives it, SBN integrates directly with national invoice platforms. Examples include KSeF (Poland), SDI (Italy), and the E-Invoice system (India).</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">SBN automatically applies digital signatures to invoices for many countries to ensure document integrity and authenticity.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">The system also converts standard cXML invoice data into legally mandated formats required by specific governments or networks. 
Examples include FatturaPA (Italy) and Peppol UBL (supported in countries such as Belgium and Singapore).</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Buyers interact with these features primarily through configuration rather than manual invoice transactions.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">To configure rules, buyers navigate to Country-Based Invoice Rules within the Administration and Configuration settings. Here, countries can be activated and compliance requirements can be enabled or disabled, such as requiring suppliers to send tax invoice documents.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Buyers can monitor invoice submission status in their inbox. For government-connected countries, invoice history will include status updates showing government validation progress and results.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Buyers can also generate compliance visibility through built-in reporting. One example is the Supplier Legal Profile report, which shows which suppliers have completed the required tax compliance setup for their country or region.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Suppliers see a dynamic invoice creation experience based on their country. During invoice creation, suppliers are shown required local tax fields based on their location — for example GST IDs and HSN codes in India, or Stamp Duty fields in Italy.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Suppliers in countries with strict mandates must complete a Legal Profile or Country Configuration in their account settings. 
This is where they enter tax registration numbers and upload any required digital certificates or identifiers for government or network compliance.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">To enable these features in an SBN Buyer account, buyers must update Default Transaction Rules to allow country-based invoice rules to take precedence instead of being ignored.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Buyers must also add and activate the specific countries they trade with in Country-Based Invoice Rules.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">For countries that route invoices through external networks like Peppol, buyers must register their Peppol Participant ID or equivalent identifier in their profile so suppliers can send invoices successfully.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">For countries where SBN retrieves invoices directly from government portals (inbound invoice flow), buyers must configure External Document Service settings to establish a connection with the national tax authority. 
Examples include Poland and Romania.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">SBN supports country-specific compliance logic, VAT validation, or government integration for the following countries.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Americas: Brazil, Canada, Chile, Colombia, Mexico, Peru, United States.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Asia/Pacific: Australia, China, India, Japan, Malaysia, New Zealand, Singapore, South Korea, Taiwan, Thailand.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Europe: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Netherlands, Norway, Poland, Portugal, Romania, Russia, Slovakia, Slovenia, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom.</FONT></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><FONT size="2">Middle East/Africa: Israel, Saudi Arabia, South Africa, UAE.</FONT></P><H1 id="toc-hId--1108965397"><U>Considerations & Final Takeaways</U></H1><P>Every organization wants onboarding to be faster, clearer, and built on trusted knowledge rather than fragmented internal know-how. NotebookLM enables teams to convert authoritative product documentation, SOPs, configuration specs, SOWs, demos, and project artifacts into a secure AI-powered conversational resource that employees can interact with naturally—like chatting with their most knowledgeable coworker. 
Instead of manually searching across files and web pages, teams can ask direct questions about product capabilities, setup requirements, and configuration considerations, then instantly convert those answers into internal enablement assets such as training outlines, FAQs, flashcards, infographics, and learning summaries. When setting up your NotebookLM hub, prioritize source quality, clear document labeling, and intentional prompting during training to reduce ambiguity and improve accuracy. This approach scales product fluency across teams, streamlines learning without adding process overhead, and creates more confident, self-sufficient users—while ensuring that expert conversations, when needed, are informed, efficient, and outcome-driven.</P>2026-01-08T23:27:21.357000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/sap-joule-data-quality-kpis-auto-generates-and-measures-kpi-s-to-fix-broken/ba-p/14305262SAP Joule Data Quality KPIs: Auto-Generates and Measures KPI's to fix broken SAP Master Data Records2026-01-11T12:32:39.711000+01:00STALANKIhttps://community.sap.com/t5/user/viewprofilepage/user-id/13911<H2 id="toc-hId-1787764665">How SAP Joule can Turn Data Quality KPIs into Real Business Control</H2><P><FONT color="#FF0000"><SPAN>*Views expressed here are my own and don't represent any entity.</SPAN></FONT></P><P class="">Every SAP-driven organization knows this truth — even if it’s rarely said out loud:</P><BLOCKQUOTE><P class="">Most business risks don’t start with strategy.</P></BLOCKQUOTE><BLOCKQUOTE><P class="">They start with bad data.</P></BLOCKQUOTE><P class="">A missing tax code. A duplicate customer. An outdated BOM. An untraceable ESG metric. Individually, they look small.
Collectively, they erode compliance, margins, and trust.</P><P class="">This is where SAP Joule changes the game — not by adding more dashboards, but by turning data quality into an intelligent, business-aware control system.</P><P class=""><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joule.jpg" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/360270i69828FF8EBA37B8C/image-size/medium/is-moderation-mode/true?v=v2&px=400" role="button" title="joule.jpg" alt="joule.jpg" /></span></P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H2 id="toc-hId-1591251160">The Old World: Measuring Data Too Late</H2><P class="">In traditional SAP environments, data quality is:</P><UL class=""><LI><P class="">Measured after the fact</P></LI><LI><P class="">Reported at aggregate level</P></LI><LI><P class="">Owned by IT, not the business</P></LI></UL><P class="">By the time leadership sees a data quality issue, the impact has already landed:</P><UL class=""><LI><P class="">Regulatory rework</P></LI><LI><P class="">Production delays</P></LI><LI><P class="">Revenue leakage</P></LI><LI><P class="">Audit findings</P></LI></UL><P class="">Joule can flip this model by working at the level where risk is actually created — the SAP data object.</P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H2 id="toc-hId-1394737655">How Can Joule Generate KPIs for Real SAP Objects (Not Abstract Metrics)?</H2><P class="">SAP landscapes are built on concrete objects:</P><UL class=""><LI><P class="">Material Master (MARA, MARC)</P></LI><LI><P class="">Business Partner (BUT000)</P></LI><LI><P class="">Bills of Material (STPO)</P></LI><LI><P class="">Pricing Conditions (KONV)</P></LI><LI><P class="">Excise & Customs data (SAP GTS)</P></LI><LI><P class="">ESG attributes (SAP EH&S)</P></LI></UL><P class="">Joule doesn’t need to guess what to measure — it reads how the business actually uses these objects.</P><H3
id="toc-hId-1327306869">Example 1: Material Master (SAP S/4HANA – PP/MM)</H3><P class=""><STRONG>The problem: </STRONG>A manufacturing site experiences frequent production stoppages due to incorrect or incomplete material data.</P><P class="">What Joule can do:</P><UL class=""><LI><P class="">Identifies which material fields are mandatory for production and quality</P></LI><LI><P class="">Generates KPIs such as:</P></LI></UL><PRE><CODE>Material master completeness %</CODE></PRE><PRE><CODE>BOM consistency across plants</CODE></PRE><PRE><CODE>Production posting error rate</CODE></PRE><P class="">What leadership sees:</P><P class="">“Three material attributes are causing 80% of production rework in two factories.”</P><P class="">Not a metric — a decision trigger.</P><H2 id="toc-hId-1001710645">Enter SAP Joule: Data Quality That Thinks Like a Business Partner</H2><P class="">SAP Joule brings generative AI into the SAP stack — not as a chatbot, but as an intelligent orchestration layer across data, processes, and controls.</P><P class="">When applied to data quality, Joule can enable three game-changing capabilities:</P><PRE><CODE>Automatic KPI generation for every SAP data object</CODE></PRE><PRE><CODE>Context-aware thresholds instead of one-size-fits-all rules</CODE></PRE><PRE><CODE>Business-language explanations of data risk</CODE></PRE><P class="">Let’s break this down.</P><H2 id="toc-hId-805197140">Generating Data Quality KPIs for Every SAP Data Object</H2><P class="">SAP systems are built on standard data objects.
Traditionally, teams manually decide:</P><PRE><CODE>Which fields matter</CODE></PRE><PRE><CODE>Which KPIs to track</CODE></PRE><PRE><CODE>Which thresholds apply</CODE></PRE><P class="">With Joule, this can become intelligent and automated.</P><H2 id="toc-hId-608683635">What Joule could do in the future</H2><P class="">Joule analyzes:</P><PRE><CODE>SAP object metadata (tables, fields, relationships)</CODE></PRE><PRE><CODE>Usage patterns (how often data is used downstream)</CODE></PRE><PRE><CODE>Process criticality (e.g., regulatory vs operational)</CODE></PRE><PRE><CODE>Historical data issues and corrections</CODE></PRE><P class="">From this, Joule can propose KPIs automatically, such as:</P><PRE><CODE>Completeness (% of mandatory fields populated)</CODE></PRE><PRE><CODE>Accuracy (rule-based and cross-object consistency)</CODE></PRE><PRE><CODE>Duplication rates</CODE></PRE><PRE><CODE>Timeliness (data freshness vs SLA)</CODE></PRE><PRE><CODE>Conformity to country-specific rules</CODE></PRE><P class="">And it does this per object, not per report. It can also measure against thresholds that actually reflect business risk.
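The KPI definitions above are simple arithmetic once the object data is in hand. A minimal sketch, using toy in-memory records with illustrative field names (not actual MARA/MARC columns) and treating repeated material IDs as potential duplicates purely for illustration; this is not Joule's implementation, only the math behind the proposed KPIs:

```python
from datetime import date

# Toy material-master records; field names are illustrative, not real SAP columns.
materials = [
    {"id": "M-001", "base_uom": "EA", "tax_class": "1", "changed": date(2025, 12, 1)},
    {"id": "M-002", "base_uom": "EA", "tax_class": None, "changed": date(2024, 3, 9)},
    {"id": "M-001", "base_uom": "EA", "tax_class": "1", "changed": date(2025, 11, 5)},
    {"id": "M-003", "base_uom": None, "tax_class": None, "changed": date(2025, 6, 30)},
]

MANDATORY = ["base_uom", "tax_class"]  # assumed mandatory fields for this sketch

def completeness_pct(rows, fields):
    """% of mandatory fields populated across all rows."""
    filled = sum(1 for r in rows for f in fields if r[f] is not None)
    return 100.0 * filled / (len(rows) * len(fields))

def duplication_pct(rows, key):
    """% of rows repeating a key seen earlier (toy duplicate check)."""
    seen, dupes = set(), 0
    for r in rows:
        if r[key] in seen:
            dupes += 1
        seen.add(r[key])
    return 100.0 * dupes / len(rows)

def stale_pct(rows, as_of, max_age_days):
    """% of rows older than the freshness SLA (timeliness KPI)."""
    stale = sum(1 for r in rows if (as_of - r["changed"]).days > max_age_days)
    return 100.0 * stale / len(rows)

print(completeness_pct(materials, MANDATORY))        # 62.5
print(duplication_pct(materials, "id"))              # 25.0
print(stale_pct(materials, date(2026, 1, 1), 365))   # 25.0
```

The point of the sketch is that each KPI is computed per object and per field set, which is exactly the granularity the post argues Joule should propose automatically.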
A 95% completeness score doesn’t mean the same thing for:</P><PRE><CODE>ESG reporting</CODE></PRE><PRE><CODE>Excise tax determination</CODE></PRE><PRE><CODE>Internal planning data</CODE></PRE><P class="">This is where Joule goes beyond traditional data quality tools.</P><H2 id="toc-hId-412170130">Dynamic, risk-based thresholds</H2><P class="">Joule can:</P><PRE><CODE>Adjust thresholds by market, plant, or regulation</CODE></PRE><PRE><CODE>Flag materiality (which data issues truly matter)</CODE></PRE><PRE><CODE>Learn from outcomes (e.g., audit findings, rework, penalties)</CODE></PRE><P class=""><STRONG>Example: </STRONG>A missing material attribute in a non-regulated market → warning</P><P class="">The same issue in a regulated market → critical breach</P><P class="">Instead of red/amber/green ratings described in technical language, leaders see the business impact of each data quality issue.</P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H2 id="toc-hId-215656625">Making Data Quality Understandable to Humans</H2><P class="">Data quality often fails because it’s explained in technical language.</P><P class="">Joule changes that from metrics to meaning.
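The warning-vs-critical example above boils down to a small rules table keyed on regulatory context. In this sketch every value (the market list, threshold numbers, severity labels) is an assumption made up for illustration, not SAP- or Joule-defined configuration:

```python
# Illustrative risk-based severity and threshold logic. All values below
# (regulated markets, thresholds, severities) are assumptions for this sketch.
REGULATED_MARKETS = {"IT", "PL", "IN"}  # hypothetical mandate-heavy markets

SEVERITY = {
    # (issue type, market is regulated?) -> severity
    ("missing_attribute", True): "critical",
    ("missing_attribute", False): "warning",
}

THRESHOLDS = {  # minimum completeness % by usage context (assumed values)
    "esg_reporting": 99.5,
    "excise_tax": 99.0,
    "internal_planning": 90.0,
}

def classify(issue: str, market: str) -> str:
    """Same data issue, different severity depending on regulatory context."""
    return SEVERITY.get((issue, market in REGULATED_MARKETS), "info")

def breach(context: str, completeness: float) -> bool:
    """A 95% score can be a breach in one context and acceptable in another."""
    return completeness < THRESHOLDS[context]

print(classify("missing_attribute", "US"))   # warning
print(classify("missing_attribute", "IT"))   # critical
print(breach("esg_reporting", 95.0))         # True: 95% breaches the ESG bar
print(breach("internal_planning", 95.0))     # False: fine for planning data
```

The design point is that severity is a function of both the issue and its business context, which is what distinguishes risk-based thresholds from one-size-fits-all red/amber/green rules.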
Rather than showing:</P><PRE><CODE>“2.1% duplication rate in Business Partner data”</CODE></PRE><P class="">Joule can explain:</P><PRE><CODE>“Duplicate customer records are increasing invoice rework and delaying cash collection in two key markets.”</CODE></PRE><P class="">This turns data governance into decision support, KPIs into actionable insight, and IT controls into business confidence.</P><H2 id="toc-hId-19143120">Continuous Monitoring, Not Periodic Audits</H2><P class="">With Joule integrated into SAP:</P><PRE><CODE>KPIs update continuously</CODE></PRE><PRE><CODE>Threshold breaches trigger explanations, not just alerts</CODE></PRE><PRE><CODE>Data owners receive context, not noise</CODE></PRE><P class="">This supports:</P><PRE><CODE>S/4HANA transformations</CODE></PRE><PRE><CODE>SOX and regulatory assurance</CODE></PRE><PRE><CODE>ESG credibility</CODE></PRE><P class="">Data quality becomes a living system, not a quarterly report.</P><H2 id="toc-hId-169883972">Why This Matters for Leadership</H2><P class="">For executives, the value proposition is straightforward and compelling:</P><PRE><CODE>Mitigate Risk: Significantly reduces regulatory surprises and enables swift, proactive responses to emerging threats.</CODE></PRE><PRE><CODE>Optimise Operations: Eliminates operational friction, driving greater efficiency and smoother processes.</CODE></PRE><PRE><CODE>Bolster Trust: Cultivates unwavering confidence in critical business data and insights.</CODE></PRE><DIV class=""><HR /><DIV class=""> </DIV></DIV><P class="">In essence, SAP Joule transforms high-quality data from a desirable aspiration into a foundational, strategic operating capability.</P><P class="">This marks a profound shift in data quality management. We move beyond outdated practices of merely policing data fields, escalating issues in IT language, enforcing rigid rules, or assigning blame.
Instead, Joule empowers organisations to:</P><PRE><CODE>Understand Impact: Clearly articulate the business consequences of data issues.</CODE></PRE><PRE><CODE>Prioritise Strategically: Focus resources on what truly matters to the business.</CODE></PRE><PRE><CODE>Protect Value: Safeguard enterprise assets and drive growth at scale.</CODE></PRE><P class="">This isn't just about achieving better data quality; it's about establishing superior business control.</P>2026-01-11T12:32:39.711000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/sap-rpt-1-why-is-it-essential-for-predicting-business-outcomes-in-today-s/ba-p/14314375SAP RPT‑1: Why is it Essential for Predicting Business Outcomes in Today’s and Future Generative AI?2026-01-26T05:55:37.670000+01:00MajoMartinezhttps://community.sap.com/t5/user/viewprofilepage/user-id/14892<P>Ever since the introduction of ChatGPT in November 2022, Generative AI has reshaped the AI industry. Companies like Amazon, Google, Microsoft, and of course SAP have accelerated innovation, recognizing the enormous value Generative AI and Large Language Models (LLMs) bring to enterprise operations. </P><P>General‑purpose LLMs excel at understanding language, reasoning over text, and identifying patterns in unstructured information. They are creative, adaptive, and capable of leveraging data from files, documents, and multimedia across diverse data systems.</P><P><FONT size="5">The Motive</FONT></P><DIV><P>However, when it comes to actually <STRONG>predicting business outcomes</STRONG>, LLMs fall short. Why?<BR />Because they are not designed for high‑precision, multi‑step reasoning over large, enterprise‑grade <STRONG>tabular datasets</STRONG>.</P><P>And this is where the majority of business data resides: structured tables, such as GL accounts, invoices, inventory, sales records, financial transactions, and countless others. 
Not in free‑form text.</P><DIV><P>Everyone wants to be able to predict business outcomes like:</P><DIV><UL><LI>What is the probability of converting a sale?</LI><LI>Which customers are likely to pay late?</LI><LI>Who is at risk of churn?</LI><LI>...</LI></UL></DIV><DIV>Historically, answering these questions required traditional machine learning like classification models, linear regression, etc. (a.k.a. Traditional AI or Narrow AI). The problem is that classical ML requires training a model per task, which can easily lead to hundreds of separate models that are hard to maintain in the long run, making it cumbersome, expensive, and extremely difficult to scale.</DIV><DIV> </DIV><DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="rpt blog.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/365028iEA5B2D61306B8EB7/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="rpt blog.png" alt="rpt blog.png" /></span></DIV><DIV> </DIV><DIV><FONT size="5">The Solution: SAP RPT-1</FONT></DIV><DIV> </DIV><DIV>To solve this, SAP introduced at TechEd 2025 the first foundation model specifically designed for structured enterprise data: Relational Pre‑Trained Transformer 1 (RPT‑1). The model is trained natively on tabular business datasets and is engineered to understand rows, columns, joins, and business semantics out of the box.
As the name suggests, RPT‑1 is:</DIV><UL><LI><STRONG>Relational</STRONG>: optimized for structured relational business data</LI><LI><STRONG>Pre‑trained</STRONG>: powered by tens of thousands of GPU hours (no more endless classical ML training cycles)</LI><LI><STRONG>Transformer</STRONG>‑based: performs logical, not linguistic, transformations (e.g., filtering, joining, aggregating, and multi‑step reasoning)</LI></UL><P>Instead of hundreds of ML models, you can now use a single foundation model for many predictive tasks.</P><P>One of the most disruptive features of RPT‑1 is its in‑context learning capability. Instead of training or fine‑tuning multiple models, you simply provide historical rows of data and request predictions for new rows. Prediction cases include:</P><UL><LI>Customer churn prediction</LI><LI>Late delivery prediction</LI><LI>Late payment prediction</LI><LI>Sales conversion prediction</LI><LI>And many more</LI></UL><P><STRONG>RPT‑1 Performance Benchmark</STRONG></P><P>RPT-1 outperforms both LLMs and classical ML on tabular data and high-value agentic cases. Here are some stats so far:</P><UL><LI>50x faster than LLMs</LI><LI>Up to 2x Prediction Quality vs Narrow AI/ML Models</LI><LI>Up to 3.5x Prediction Quality vs LLMs</LI><LI>100,000x fewer GPU FLOPs</LI><LI>50,000x Less Energy consumption vs LLMs*</LI></UL><P data-unlink="true">*comparable tasks on an NVIDIA H100, as a benchmark.
Sources: <A href="https://youtu.be/X9qHsLmPMk4?t=946&si=1--wB78x9otamQHy" target="_self" rel="nofollow noopener noreferrer">TechEd</A>, <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/a-new-paradigm-for-enterprise-ai-in-context-learning-for-relational-data/ba-p/14260221" target="_self">Blog</A>.</P><P><STRONG>Why are these stats so important?</STRONG> On top of the benefits you can already imagine, like time and resource savings, Gartner predicts that over 40% of agentic AI projects will be canceled by the end of 2027, mainly because of high cost, complexity, or unclear business value (<A href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027" target="_self" rel="nofollow noopener noreferrer">source</A>). RPT‑1 directly addresses these challenges.</P><P> </P><P><FONT size="5">Sales Conversion Prediction Example Use Case</FONT></P><DIV><P>During SAP TechEd, a compelling demo showcased how RPT‑1 integrates with SAP’s Agentic AI.</P><P>Imagine a sales team wanting to prioritize leads based on their likelihood to convert. 
Instead of manually training an ML model, a data analyst can use Joule to generate SQL code leveraging <STRONG>RPT‑1’s <CODE>PREDICT</CODE> function</STRONG>.</P><P>Steps include:</P><OL><LI>Join sales inquiries, historical performance, customer attributes, and other relevant data into a Data Product in SAP Business Data Cloud.</LI><LI>Expose the Data Product to SAP HANA Cloud via zero‑copy.</LI><LI>Use RPT‑1 to instantly generate a new prediction column "sales_conversion_probability", without training or tuning.</LI></OL><P>The model derives patterns directly from the historical data and produces high‑quality prediction scores.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RPT1 on HANA Cloud - joule sql code.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/365060iB4B5A3378FD6036C/image-size/large?v=v2&px=999" role="button" title="RPT1 on HANA Cloud - joule sql code.png" alt="RPT1 on HANA Cloud - joule sql code.png" /></span></P><P data-unlink="true">You can watch the full demo <A href="https://youtu.be/X9qHsLmPMk4?t=2123&si=pmMuIfGmZJ3U5MsL" target="_self" rel="nofollow noopener noreferrer">here</A> (jump to minute 35:23).</P><P data-unlink="true"> </P><P><FONT size="5">Available RPT‑1 Versions</FONT></P><P>SAP RPT‑1 has been generally available since Q4 2025. 
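Conceptually, the in‑context prediction step described above boils down to sending labeled historical rows plus unlabeled query rows to the model's inference endpoint in a single call. Here is a minimal, hypothetical sketch of such a payload; the field names ("target", "context", "queries") are my own illustration, not the actual RPT‑1 API schema:

```python
# Hypothetical in-context learning payload builder for a deployed RPT-1 endpoint.
# All field names are illustrative assumptions; consult the actual RPT-1
# inference API documentation for the real request schema.
def build_prediction_payload(historical_rows, new_rows, target_column):
    return {
        "target": target_column,
        # labeled examples the model learns from in context (no training run)
        "context": historical_rows,
        # rows whose target value should be predicted (target column stripped)
        "queries": [
            {k: v for k, v in row.items() if k != target_column}
            for row in new_rows
        ],
    }

history = [
    {"industry": "retail", "deal_size": 12000, "converted": 1},
    {"industry": "energy", "deal_size": 98000, "converted": 0},
]
payload = build_prediction_payload(
    history, [{"industry": "retail", "deal_size": 15000}], "converted"
)
# The payload would then be POSTed to the deployment's /predict endpoint.
```

The point of the sketch is the shape of the interaction: history and queries travel together in one request, so there is no separate training artifact to version or maintain.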
You can choose from:</P><DIV><UL><LI><STRONG>SAP RPT‑1 Small</STRONG>: optimized for speed and efficiency</LI><LI><STRONG>SAP RPT‑1 Large:</STRONG> optimized for highest accuracy by using more capacity</LI><LI><STRONG>SAP RPT‑1 OSS</STRONG> (Open Source): available on <A href="https://huggingface.co/sap/sap-rpt-1-oss" target="_self" rel="nofollow noopener noreferrer">Hugging Face</A> and <A href="https://github.com/SAP-samples/sap-rpt-1-oss" target="_self" rel="nofollow noopener noreferrer">GitHub</A> for exploration and learning</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="rpt on genai hub.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/365061iB9E0A36A349BD445/image-size/large?v=v2&px=999" role="button" title="rpt on genai hub.png" alt="rpt on genai hub.png" /></span></P><P><FONT size="5">Try it yourself!</FONT></P><P>Don’t have access to a BTP account yet? Here are some entry points:</P><UL><LI>Sign up for a 30-day SAP Generative AI Hub <A href="https://www.sap.com/products/artificial-intelligence/generative-ai-hub-trial.html" target="_self" rel="noopener noreferrer">trial</A></LI><LI>Follow this step-by-step <A href="https://community.sap.com/t5/artificial-intelligence-blogs-posts/sap-rpt-1-a-step-by-step-guide-on-getting-started/ba-p/14290171" target="_self">guide</A> by <a href="https://community.sap.com/t5/user/viewprofilepage/user-id/1698208">@sherene_tan</a> to get started</LI></UL><P> </P><P><FONT size="5">Conclusion</FONT></P><P>SAP RPT‑1 represents a major shift in how enterprises will deliver predictive insights in the GenAI era. While LLMs excel at language understanding, they are not built for structured, relational business data. 
RPT‑1 closes this gap and eliminates the complexity of traditional machine learning, drastically reducing cost and compute.</P><P>With SAP RPT-1, a foundation model purpose‑built for enterprise-grade tabular data, organizations<SPAN> can operationalize predictive insights faster, more efficiently, and more accurately than ever before.</SPAN></P></DIV></DIV></DIV></DIV>2026-01-26T05:55:37.670000+01:00https://community.sap.com/t5/artificial-intelligence-blogs-posts/i-built-an-ai-agent-with-litellm-crewai-and-sap-rpt-1-here-s-what-actually/ba-p/14315663I built an AI Agent with LiteLLM, CrewAI and SAP-RPT-1: Here’s What Actually Worked2026-01-27T12:29:54.420000+01:00noravonthenenhttps://community.sap.com/t5/user/viewprofilepage/user-id/7343<P>I hope everyone found their way into the new year and had some peaceful time over the holidays! Being back at work, I started to try out all the cool announcements from last year. One of them is SAP contributing to the third-party open-source Python library <A href="https://www.litellm.ai/" target="_self" rel="nofollow noopener noreferrer">LiteLLM</A>, where SAP now functions as an <A href="https://docs.litellm.ai/docs/providers/sap" target="_self" rel="nofollow noopener noreferrer">official LLM provider</A>. That means you can now build AI Agents with the LiteLLM gateway and any compatible AI Agent framework, providing your Generative AI Hub credentials to connect to LLMs via SAP's AI Foundation.</P><P>I tried out <A href="https://www.crewai.com/" target="_self" rel="nofollow noopener noreferrer">CrewAI</A> first because it is such a powerful and widely used framework, and I have been curious to try it out for some time now!</P><H2 id="toc-hId-1788692031">First things first: What is LiteLLM?</H2><P>LiteLLM is a library that provides a unified, provider-agnostic API for calling large language models (LLMs) and handling common tasks (completion, chat, streaming, multimodal inputs). 
It standardizes request/response handling and includes utilities that speed up integration with agent frameworks and tooling. Essentially, it is a gateway between LLM providers and AI Agent frameworks.</P><P>That means you can use your Generative AI Hub credentials to build state-of-the-art AI Agents with any of the models available through GenAI Hub and any of the AI Agent frameworks compatible with LiteLLM. This combination is extremely powerful because it means you can use LLMs hosted or managed by SAP (Mistral, Llama, Nvidia) and models from our partners such as Azure OpenAI, Amazon Bedrock (including Anthropic) and Gemini.</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> Check all the available models here: <A href="https://me.sap.com/notes/3437766" target="_blank" rel="noopener noreferrer">https://me.sap.com/notes/3437766</A></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> And all the AI Agent frameworks, including: CrewAI, LangGraph, PydanticAI, Google ADK, Strands SDK, OpenAI ADK, LlamaIndex, Microsoft Agents and many more. 
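In code, switching to SAP as the provider is just a matter of the model prefix. Below is a minimal sketch; the helper function is my own illustration, and LiteLLM picks up the AICORE_* credentials from the environment:

```python
# Hypothetical helper that assembles the arguments for litellm.completion().
# The "sap/<model>" prefix tells LiteLLM to route the request through SAP's
# Generative AI Hub; credentials come from the AICORE_* environment variables.
def sap_completion_kwargs(model_name: str, prompt: str) -> dict:
    return {
        "model": f"sap/{model_name}",
        "messages": [{"role": "user", "content": prompt}],
    }

kwargs = sap_completion_kwargs("gpt-4o", "Which leads should we prioritize?")
# With credentials configured, the actual call would be:
#   from litellm import completion
#   response = completion(**kwargs)
#   print(response.choices[0].message.content)
```

Because only the model string changes, swapping providers (or models behind the orchestration service) does not require touching the rest of the agent code.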
You can find a list with examples here: <A href="http://sap-contributions.github.io/litellm-agentic-examples/" target="_blank" rel="noopener nofollow noreferrer">http://sap-contributions.github.io/litellm-agentic-examples/</A></P><H2 id="toc-hId-1592178526">How to get started</H2><P>To get started, you need an SAP AI Core instance with the extended plan (on SAP BTP); this gives you access to Generative AI Hub and all the available LLMs.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="SAP BTP Cockpit, Instances and Subscriptions, SAP AI Core – extended plan" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/365761i3D3F5E02637302A2/image-size/large?v=v2&px=999" role="button" title="noravonthenen_0-1769511986373.png" alt="SAP BTP Cockpit, Instances and Subscriptions, SAP AI Core – extended plan" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">SAP BTP Cockpit, Instances and Subscriptions, SAP AI Core – extended plan</span></span></P><P>LiteLLM directs calls to the LLMs through the orchestration service on Generative AI Hub. That means you do not need to deploy your models on SAP AI Core; you only need the out-of-the-box deployment of the orchestration service. This way you can easily switch between all the models available via the orchestration service.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshot from Model Library of SAP AI Launchpad showing models available via Orchestration Service." style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/365760iB995D323F1242A1A/image-size/large?v=v2&px=999" role="button" title="noravonthenen_1-1769511986379.png" alt="Screenshot from Model Library of SAP AI Launchpad showing models available via Orchestration Service." 
/><span class="lia-inline-image-caption" onclick="event.preventDefault();">Screenshot from Model Library of SAP AI Launchpad showing models available via Orchestration Service.</span></span></P><P> <SPAN>On a side note: you do not need SAP AI Launchpad as the UI to do any of this. You can also find your endpoints via API calls (more info: </SPAN><A href="https://developers.sap.com/tutorials/ai-core-genaihub-provisioning.html" target="_blank" rel="noopener noreferrer">developer tutorial</A><SPAN> or </SPAN><A href="https://api.sap.com/api/AI_CORE_API/resource/Scenario" target="_blank" rel="noopener noreferrer">documentation</A><SPAN>), or use the </SPAN><A href="https://sap.github.io/ai-sdk/" target="_blank" rel="noopener nofollow noreferrer">SAP Cloud SDK for AI</A><SPAN>.</SPAN></P><H2 id="toc-hId-1395665021">Build your crew of agents</H2><P>Now you only need to install <A href="https://www.python.org/downloads/" target="_blank" rel="noopener nofollow noreferrer">Python</A>. Once that is done, install the LiteLLM library and, in my case, also CrewAI:</P><pre class="lia-code-sample language-bash"><code>pip install litellm crewai</code></pre><P>OK, now we can actually start building our AI Agent with CrewAI. As the name suggests, you build a crew of agents that have a set of tools available to accomplish certain tasks. CrewAI uses tasks to bridge the gap between high-level goals and concrete agent actions, assigning specific objectives and expected outputs.</P><P>You also need to create a .env file to store the credentials from the Generative AI Hub service key. I also stored another variable, RPT1_DEPLOYMENT_URL, in that file for the agent to use in the tool. Finally, it should look something like this:</P><pre class="lia-code-sample language-python"><code>AICORE_AUTH_URL="https://#######.ondemand.com/oauth/token"
AICORE_CLIENT_ID="sb-############-|aicore!b540"
AICORE_CLIENT_SECRET="#######"
AICORE_RESOURCE_GROUP="default"
AICORE_BASE_URL="https://api.ai.prod.###.ml.hana.ondemand.com"
RPT1_DEPLOYMENT_URL="https://api.ai.prod.###.ml.hana.ondemand.com/v2/inference/deployments/###/predict"</code></pre><H3 id="toc-hId-1328234235">Agents</H3><P>CrewAI recommends storing your agent configuration in an agents.yaml file. The important parameters are: role, goal, backstory and llm. The parameter llm defines which LLM provider you want to use. We want to use SAP as the provider and GPT-4o as the model (provider/model -> sap/gpt-4o). As described above, you can use any model here that is available via the orchestration service. {payload} is a variable that will be filled during runtime.</P><pre class="lia-code-sample language-yaml"><code>theft_crime_scene_investigator:
  role: >
    Theft Crime Scene Investigator
  goal: >
    Predict the missing values of stolen items using the SAP-RPT-1 model via the call_rpt1 tool. Use this payload {payload} as input.
  backstory: >
    An expert theft crime investigator specialised in predicting the value of stolen luxury goods.
  llm: sap/gpt-4o</code></pre><H3 id="toc-hId-1131720730">Tasks</H3><P>In the tasks.yaml you define the name of the task, the description, the expected output and which agent can do this task. {payload} is again the variable that will be filled during runtime.</P><pre class="lia-code-sample language-yaml"><code>inspection_task:
  description: >
    Analyze the theft crime scene and predict the missing values of stolen items using the SAP-RPT-1 model via the call_rpt1 tool.
    Use this dict {payload} as input.
  expected_output: >
    JSON with predicted values for the stolen items.
  agent: theft_crime_scene_investigator</code></pre><H3 id="toc-hId-935207225">Tools</H3><P>You can also define custom tools in Python and hand them over to your agents to use. The following tool, for example, gives the model access to the <A href="https://www.sap.com/products/artificial-intelligence/sap-rpt.html" target="_self" rel="noopener noreferrer">SAP-RPT-1</A> model (try it out here: <A href="http://rpt.cloud.sap" target="_self" rel="nofollow noopener noreferrer">rpt.cloud.sap</A>).</P><pre class="lia-code-sample language-python"><code>from rpt1.call_rpt1 import RPT1Client
from crewai.tools import tool  # import needed for the @tool decorator

rpt1_client = RPT1Client()

@tool("call_rpt1")
def call_rpt1(payload: dict) -> str:
    """Function to call RPT-1 model via RPT1Client"""
    response = rpt1_client.post_request(json_payload=payload)
    if response.status_code == 200:
        return response.json()
    else:
        return f"Error: {response.status_code} - {response.text}"</code></pre><P>If you want to know more about how to make the request to SAP-RPT-1, check out <A href="https://github.com/noravth/agentic-ai-codejam-collection/blob/main/crewai/rpt1/call_rpt1.py" target="_self" rel="nofollow noopener noreferrer">this code</A>.</P><H3 id="toc-hId-738693720">Building the Crew</H3><P>You might wonder how this all comes together. Now you plug all those pieces together and build your crew. The crew needs at least one agent but can have more; any tools you want the agents to use can also be assigned here. And each agent has at least one task.</P><pre class="lia-code-sample language-python"><code>from crewai import Agent, Crew, Task, Process
from crewai.project import CrewBase, agent, task, crew, tool
@CrewBase
class MurderMystery():
    """MurderMystery crew"""
    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def theft_crime_scene_investigator(self) -> Agent:
        return Agent(
            config=self.agents_config['theft_crime_scene_investigator'],
            verbose=True,
            tools=[call_rpt1]
        )

    @task
    def inspection_task(self) -> Task:
        return Task(
            config=self.tasks_config['inspection_task']  # type: ignore[index]
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,  # automatically collected by the @agent decorator
            tasks=self.tasks,  # automatically collected by the @task decorator
            process=Process.sequential,
            verbose=True,
        )</code></pre><H3 id="toc-hId-542180215">Run the crew</H3><pre class="lia-code-sample language-python"><code>murder_mystery = MurderMystery()
result = murder_mystery.crew().kickoff(inputs={'payload': payload})
print("\n📘 Result:\n", result)</code></pre><H2 id="toc-hId-216583991"> </H2><H2 id="toc-hId-20070486">Additional Infos</H2><UL><LI>If you need any help in setting up your Python environment, check out the <A href="https://github.com/SAP-samples/generative-ai-codejam/blob/main/exercises/02-setup-python-environment.md" target="_blank" rel="noopener nofollow noreferrer">exercise 02 - Setup Python environment</A> from my CodeJam.</LI><LI>Full code here: <SPAN><A href="https://github.com/noravth/agentic-ai-codejam-collection" target="_blank" rel="noopener nofollow noreferrer">https://github.com/noravth/agentic-ai-codejam-collection</A></SPAN></LI><LI><A href="https://sap-contributions.github.io/litellm-agentic-examples/_notebooks/examples/crewai_litellm_lib.html" target="_self" rel="nofollow noopener noreferrer"><SPAN>SAP and LiteLLM documentation</SPAN></A></LI></UL><P>Happy Coding and feel free to reach out anytime!</P>2026-01-27T12:29:54.420000+01:00https://community.sap.com/t5/sap-codejam-blog-posts/sap-codejam-roadshow-2026-brazil-edition-kicking-off-in-march-and/ba-p/14317926SAP CodeJam Roadshow 2026 - Brazil edition 🇧🇷 (Kicking off in March and continuing in April)2026-01-30T12:35:15.511000+01:00ajmaradiagahttps://community.sap.com/t5/user/viewprofilepage/user-id/107<P>Oi SAP Developers!</P><P data-unlink="true">I've got some exciting news to share with you. The SAP CodeJam roadshow is coming back to Latin America! This time, we will do a special edition in Brazil. We will have many stops on the roadshow starting the 1st week of March, doing CodeJams as part of SAP Inside Tracks, organised by local community members, and then continuing in April. Also, we'll organise a couple of meet-ups as we've done in previous years. 
:-).</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><STRONG><EM><FONT color="#3366FF">Keep an eye on this blog post, as more events will be announced as part of the roadshow shortly.</FONT></EM></STRONG></P><H3 id="toc-hId-1917837094"><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="brazil-roadshow-map.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/370537i3BD0309CC0223DBA/image-size/medium/is-moderation-mode/true?v=v2&px=400" role="button" title="brazil-roadshow-map.png" alt="brazil-roadshow-map.png" /></span></STRONG></H3><P> </P><H3 id="toc-hId-1721323589"><STRONG>SAP CodeJam events and meetups part of the roadshow (Register Now!)</STRONG></H3><P>Below is a listing of the SAP CodeJam events and quedadas (meetups) on the roadshow schedule:</P><TABLE width="705px"><TBODY><TR><TD width="157px" height="50px"><P><STRONG>Dates</STRONG></P></TD><TD width="243px" height="50px"><P><STRONG>CodeJam topic</STRONG></P></TD><TD width="201px" height="50px"><P><STRONG>Location</STRONG></P></TD><TD width="104px" height="50px"><P><STRONG>Registration<BR /></STRONG></P></TD></TR><TR><TD width="157px" height="77px">2 Mar</TD><TD width="243px" height="77px"><P>Build AI services using the CAP and Generative AI Hub</P></TD><TD width="201px" height="77px"><P>Campinas, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></P></TD><TD width="104px" height="77px"><P><A href="https://community.sap.com/t5/sap-codejam/build-ai-services-using-the-cap-and-generative-ai-hub-campinas-brazil/ev-p/14317691" target="_blank">Link</A></P></TD></TR><TR><TD width="157px" height="97px">3 Mar</TD><TD width="243px" height="97px"><P>Joule Studio: Create Joule Skills and Agents Without Coding</P></TD><TD width="201px" height="97px"><P>São Paulo, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></P></TD><TD 
width="104px" height="97px"><P><A href="https://community.sap.com/t5/sap-codejam/joule-studio-create-joule-skills-and-agents-without-coding-s%C3%A3o-paulo-brazil/ev-p/14317678" target="_blank">Link</A></P></TD></TR><TR><TD width="157px" height="97px">3 Mar</TD><TD width="243px" height="97px"><P><EM><FONT color="#3366FF">São Paulo meetup</FONT></EM></P></TD><TD width="201px" height="97px"><P>São Paulo, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></P></TD><TD width="104px" height="97px"><P><A href="https://community.sap.com/t5/s%C3%A3o-paulo-events/meet-up-s%C3%A3o-paulo-2026-03-03-brazilroadshow2026/ec-p/14319322#M38" target="_self">Link</A></P></TD></TR><TR><TD width="157px" height="77px"><P>4 Mar</P></TD><TD width="243px" height="77px"><P>SAP Build: Create Event-Based Processes</P></TD><TD width="201px" height="77px"><P>São Paulo, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></P></TD><TD width="104px" height="77px"><P><A href="https://community.sap.com/t5/sap-codejam/sap-build-create-event-based-processes-s%C3%A3o-paulo-brazil/ev-p/14317705" target="_blank">Link</A></P></TD></TR><TR><TD width="157px" height="97px"><P>5 Mar</P></TD><TD width="243px" height="97px"><P>Getting started with Generative AI Hub on SAP AI Core</P></TD><TD width="201px" height="97px"><P>Rio de Janeiro, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></P></TD><TD width="104px" height="97px"><P><A href="https://community.sap.com/t5/sap-codejam/getting-started-with-generative-ai-hub-on-sap-ai-core-rio-de-janeiro-brazil/ev-p/14317716" target="_blank">Link</A></P></TD></TR><TR><TD width="157px" height="77px"><P>13 Mar</P></TD><TD width="243px" height="77px"><P>Getting started with Generative AI Hub on SAP AI Core</P></TD><TD width="201px" height="77px">Joinville, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></TD><TD width="104px" height="77px"><P><A 
href="https://community.sap.com/t5/sap-codejam/getting-started-with-generative-ai-hub-on-sap-ai-core-joinville-brazil/ev-p/14322316" target="_self">Link</A></P></TD></TR><TR><TD width="157px"><P>14 Mar</P></TD><TD width="243px"><P>SAP Inside Track - Joinville</P></TD><TD width="201px"><P>Joinville, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></P></TD><TD width="104px"><P><A href="https://community.sap.com/t5/joinville-events/eb-p/joinvilleoccasion-board" target="_self">Link</A></P></TD></TR><TR><TD width="157px" height="77px"><P>20 Mar</P></TD><TD width="243px" height="77px"><P>Getting started with Generative AI Hub on SAP AI Core</P></TD><TD width="201px" height="77px">Belo Horizonte, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></TD><TD width="104px" height="77px"><P><A href="https://community.sap.com/t5/sap-codejam/getting-started-with-generative-ai-hub-on-sap-ai-core-belo-horizonte-brazil/ev-p/14322967" target="_self">Link</A></P></TD></TR><TR><TD width="157px" height="77px"><P>21 Mar</P></TD><TD width="243px" height="77px"><P>SAP Inside Track - Belo Horizonte</P></TD><TD width="201px" height="77px">Belo Horizonte, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></TD><TD width="104px" height="77px"><P><A href="https://community.sap.com/t5/belo-horizonte-events/sap-inside-track-belo-horizonte-21-mar%C3%A7o-de-2026/ev-p/14317172" target="_self">Link</A></P></TD></TR><TR><TD width="157px"><P>6 Apr</P></TD><TD width="243px"><P>Combine SAP CAP with SAP HANA Cloud to Create Full-Stack App</P></TD><TD width="201px" height="97px"><P>São Paulo, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></P></TD><TD width="104px" height="97px"><P><A href="https://community.sap.com/t5/sap-codejam/combine-sap-cap-with-sap-hana-cloud-to-create-full-stack-app-s%C3%A3o-paulo/ev-p/14322980" target="_self">Link</A></P></TD></TR><TR><TD width="157px"><P>7 Apr</P></TD><TD width="243px"><P>ABAP AI: Capabilities in SAP 
Joule for Developers CodeJam</P></TD><TD width="201px" height="77px"><P>São Paulo, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></P></TD><TD width="104px" height="77px"><P><A href="https://community.sap.com/t5/sap-codejam/abap-ai-capabilities-in-sap-joule-for-developers-codejam-s%C3%A3o-paulo-brazil/ev-p/14322983" target="_blank">Link</A></P></TD></TR><TR><TD width="157px"><P>8 Apr</P></TD><TD width="243px"><P><SPAN>ABAP AI: Capabilities in SAP Joule for Developers CodeJam</SPAN></P></TD><TD width="201px">Londrina, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></TD><TD width="104px"><P><A href="https://community.sap.com/t5/sap-codejam/abap-ai-capabilities-in-sap-joule-for-developers-codejam-londrina-brazil/ec-p/14324622#M1084" target="_self">Link</A></P></TD></TR><TR><TD width="157px" height="77px"><P>10 Apr</P></TD><TD width="243px" height="77px"><P>Getting started with Generative AI Hub on SAP AI Core</P></TD><TD width="201px" height="77px">Curitiba, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></TD><TD width="104px" height="77px"><P><A href="https://community.sap.com/t5/sap-codejam/getting-started-with-generative-ai-hub-on-sap-ai-core-curitiba-brazil/ev-p/14322976" target="_self">Link</A></P></TD></TR><TR><TD><P>10 Apr</P></TD><TD><P>SAP HANA Cloud for AI Applications: ML, Vectors, and Knowledge Graphs</P></TD><TD><P>Ribeirão Preto, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></P></TD><TD><P><A href="https://community.sap.com/t5/sap-codejam/sap-hana-cloud-for-ai-applications-ml-vectors-and-knowledge-graphs-ribeir%C3%A3o/ev-p/14330764" target="_self">Link</A></P></TD></TR><TR><TD width="157px" height="77px"><P>13 Apr</P></TD><TD width="243px" height="77px"><P>SAP HANA Cloud for AI Applications: ML, Vectors, and Knowledge Graphs</P></TD><TD width="201px" height="77px">São Leopoldo, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></TD><TD width="104px"><P><A 
href="https://community.sap.com/t5/sap-codejam/sap-hana-cloud-for-ai-applications-ml-vectors-and-knowledge-graphs-s%C3%A3o/ev-p/14330771" target="_self">Link</A></P></TD></TR><TR><TD width="157px" height="77px"><P>14 Apr</P></TD><TD width="243px" height="77px"><P>Build code-based AI Agents on SAP BTP</P></TD><TD width="201px" height="77px">São Leopoldo, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></TD><TD width="104px"><P><A href="https://community.sap.com/t5/sap-codejam/build-code-based-ai-agents-on-sap-business-technology-platform-s%C3%A3o-leopoldo/ev-p/14330791" target="_self">Link</A></P></TD></TR><TR><TD width="157px" height="77px"><P>16 Apr</P></TD><TD width="243px" height="77px"><P>SAP HANA Cloud for AI Applications: ML, Vectors, and Knowledge Graphs</P></TD><TD width="201px" height="77px">Recife, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></TD><TD width="104px"><P><A href="https://community.sap.com/t5/sap-codejam/sap-hana-cloud-for-ai-applications-ml-vectors-and-knowledge-graphs-recife/ev-p/14330773" target="_self">Link</A></P></TD></TR><TR><TD width="157px" height="77px"><P>17 Apr</P></TD><TD width="243px" height="77px"><P>Build code-based AI Agents on SAP BTP</P></TD><TD width="201px" height="77px">Recife, Brazil <span class="lia-unicode-emoji" title=":brazil:">🇧🇷</span></TD><TD width="104px"><P><A href="https://community.sap.com/t5/sap-codejam/build-code-based-ai-agents-on-sap-business-technology-platform-recife/ev-p/14330796" target="_self">Link</A></P></TD></TR></TBODY></TABLE><P>Make sure to register to secure your place. It is likely that the places available will go fast.</P><P>What if the CodeJam topics do not cover the SAP technology that you focus on? We've organised additional local meetups just to get together and have a chat. 
See you all somewhere in Brazil on the roadshow!</P>2026-01-30T12:35:15.511000+01:00https://community.sap.com/t5/artificial-intelligence-blogs-posts/connecting-sap-genai-hub-to-n8n-a-complete-guide/ba-p/14320010Connecting SAP GenAI Hub to n8n: A Complete Guide2026-02-02T16:33:01.109000+01:00YangYue01https://community.sap.com/t5/user/viewprofilepage/user-id/1409138<BLOCKQUOTE><P class="">A step-by-step tutorial on calling SAP GenAI Hub's large language models from self-hosted n8n to build enterprise AI automation workflows.</P></BLOCKQUOTE><P><STRONG>SAP Build</STRONG> is the primary, recommended solution for enterprise automation and orchestration within the SAP ecosystem. However, one of the key strengths of <STRONG>SAP GenAI Hub</STRONG> is its standardization—it uses standard REST APIs and OAuth2, meaning it can be consumed by <I>any</I> system, regardless of the tech stack.</P><P>This tutorial demonstrates that openness. We will use <STRONG>n8n</STRONG> (a popular self-hosted workflow tool) to show how developers can integrate SAP's enterprise-grade GenAI capabilities into external or hybrid environments for rapid prototyping and testing.</P><P class="">n8n doesn't have a native SAP GenAI Hub node yet. 
No worries — we can easily make the connection using <STRONG>HTTP Request + OAuth2</STRONG>.</P><HR /><H2 id="toc-hId-1789460673">Prerequisites</H2><P class="">Before we start, make sure you have:</P><OL class=""><LI><STRONG>A running n8n instance</STRONG> (Docker or npm or k8s)</LI><LI><STRONG>SAP BTP account</STRONG> with SAP AI Core service enabled (Extended Plan for GenAI Hub)</LI><LI><STRONG>A deployed LLM model</STRONG> (configured in SAP AI Launchpad)</LI></OL><H3 id="toc-hId-1722029887"> </H3><H3 id="toc-hId-1525516382">Get Your SAP AI Core Service Key</H3><P class="">Navigate to SAP BTP Cockpit → Your Subaccount → Instances and Subscriptions → SAP AI Core → Create Service Key</P><P class="">You'll get a JSON like this:</P><DIV class=""><DIV class=""><DIV class=""><DIV class=""><DIV class=""> </DIV></DIV></DIV></DIV><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"clientid"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"sb-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx!b12345|aicore!b540"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"clientsecret"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx$xxxxxxxxx"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"url"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"https://your-tenant.authentication.eu10.hana.ondemand.com"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"serviceurls"</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"AI_API_URL"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com"</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><P class=""><STRONG>Keep these four fields handy — you'll need them all.</STRONG></P><H3 id="toc-hId-1329002877"> </H3><H3 id="toc-hId-1132489372">Get Your Deployment ID</H3><P class="">In SAP AI Launchpad:</P><OL class=""><LI>Go to <STRONG>ML Operations → Deployments</STRONG></LI><LI>Find your deployed model</LI><LI>Note the <CODE>Deployment ID</CODE> (or the full <CODE>deploymentUrl</CODE>)</LI></OL><HR /><H2 id="toc-hId-806893148">Step 1: Create OAuth2 Credential</H2><P class="">SAP GenAI Hub uses OAuth2 Client Credentials authentication. Let's configure the credential in n8n first.</P><OL class=""><LI>Open n8n, go to <STRONG>Settings → Credentials</STRONG></LI><LI>Click <STRONG>Add Credential</STRONG>, search for <STRONG>OAuth2 API</STRONG></LI><LI>Fill in the following:</LI></OL><DIV class="lia-indent-padding-left-60px" style="padding-left : 60px;"><TABLE><TBODY><TR><TD><STRONG>Field<BR /></STRONG></TD><TD><STRONG>Value</STRONG></TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Credential Name</STRONG></TD><TD width="617.203px" height="30px"><CODE>SAP-GenAI-Hub</CODE></TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Grant Type</STRONG></TD><TD width="617.203px" height="30px"><CODE>Client Credentials</CODE></TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Access Token URL</STRONG></TD><TD width="617.203px" height="30px"><CODE><A href="https://your-tenant.authentication.eu10.hana.ondemand.com/oauth/token" target="_blank" rel="noopener nofollow noreferrer">https://your-tenant.authentication.eu10.hana.ondemand.com/oauth/token</A></CODE></TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Client ID</STRONG></TD><TD width="617.203px" height="30px">Copy the full <CODE>clientid</CODE> from Service Key</TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Client Secret</STRONG></TD><TD width="617.203px" height="30px">Copy the full <CODE>clientsecret</CODE> from Service 
Key</TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Scope</STRONG></TD><TD width="617.203px" height="30px">Leave empty</TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Authentication</STRONG></TD><TD width="617.203px" height="30px"><STRONG>Select <CODE>Body</CODE></STRONG></TD></TR></TBODY></TABLE></DIV><DIV class="lia-indent-padding-left-30px" style="padding-left : 30px;">4. Click <STRONG>Save</STRONG></DIV><BLOCKQUOTE><P class=""><span class="lia-unicode-emoji" title=":light_bulb:">💡</span><STRONG>Common Mistake</STRONG>: Selecting "Send as Basic Auth Header" will cause 401 authentication failures. SAP requires credentials to be sent in the body.</P></BLOCKQUOTE><HR /><H2 id="toc-hId-610379643">Step 2: Build a Simple Q&A Workflow</H2><P class="">Let's create a basic LLM query workflow.</P><H3 id="toc-hId-542948857">2.1 Add Manual Trigger</H3><P class="">Start with a manual trigger for testing:</P><UL class=""><LI>Add a <STRONG>Manual Trigger</STRONG> node</LI></UL><H3 id="toc-hId-346435352">2.2 Add HTTP Request Node</H3><P class="">This is the core node. 
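If you hit the 401 error mentioned in the Common Mistake note above, it can help to reproduce the token request outside n8n first. Below is a rough Python sketch of the client-credentials flow the OAuth2 credential performs; the helper function is my own illustration, and the URL and IDs are placeholders from the service key:

```python
# Hypothetical helper mirroring n8n's OAuth2 client-credentials request:
# credentials are sent in the request body, matching the note above.
def build_token_request(auth_url: str, client_id: str, client_secret: str) -> dict:
    return {
        "url": f"{auth_url}/oauth/token",
        "data": {
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        },
    }

req = build_token_request(
    "https://your-tenant.authentication.eu10.hana.ondemand.com",
    "sb-placeholder|aicore!b540",
    "your-client-secret",
)
# With the `requests` library installed, the token call would be:
#   response = requests.post(req["url"], data=req["data"])
#   access_token = response.json()["access_token"]
```

If this request succeeds outside n8n, the credential values are correct and any remaining failure is in the n8n configuration itself.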
Configure as follows:</P><P class=""><STRONG>Basic Settings:</STRONG></P><UL class=""><LI><STRONG>Method</STRONG>: <CODE>POST</CODE></LI><LI><STRONG>URL</STRONG>: </LI></UL><pre class="lia-code-sample language-abap"><code>https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/{your_Deployment_ID}/chat/completions</code></pre><P><SPAN> <span class="lia-unicode-emoji" title=":warning:">⚠️</span> </SPAN><SPAN class=""><SPAN>Note</SPAN></SPAN><SPAN>: If using <STRONG>OpenAI models</STRONG>, you must append the <CODE>api-version</CODE> query parameter, e.g.</SPAN></P><pre class="lia-code-sample language-abap"><code>https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/{your_Deployment_ID}/chat/completions?api-version=2023-05-15</code></pre><P class=""><STRONG>Authentication:</STRONG></P><UL class=""><LI><STRONG>Authentication</STRONG>: <CODE>Predefined Credential Type</CODE></LI><LI><STRONG>Credential Type</STRONG>: <CODE>OAuth2 API</CODE></LI><LI><STRONG>OAuth2 API</STRONG>: Select <CODE>SAP-GenAI-Hub</CODE></LI></UL><P class=""><STRONG>Headers:</STRONG></P><DIV class=""><TABLE><TBODY><TR><TD><STRONG>Name</STRONG></TD><TD><STRONG>Value</STRONG></TD></TR><TR><TD><CODE>Content-Type</CODE></TD><TD><CODE>application/json</CODE></TD></TR><TR><TD><CODE>AI-Resource-Group</CODE></TD><TD><CODE>default</CODE>, or your own Resource Group ID</TD></TR></TBODY></TABLE></DIV><P class=""><STRONG>Body:</STRONG></P><UL class=""><LI><STRONG>Body Content Type</STRONG>: <CODE>JSON</CODE></LI><LI><STRONG>Specify Body</STRONG>: <CODE>Using JSON</CODE></LI></UL><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"messages"</SPAN><SPAN class="">:</SPAN> <SPAN class="">[</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"system"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"You are a professional assistant. Please provide helpful and accurate answers."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"user"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"What is SAP S/4HANA? Please summarize in three sentences."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">]</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><H3 id="toc-hId-149921847">2.3 Test It</H3><P class="">Click <STRONG>Test Workflow</STRONG>. If configured correctly, you'll see a response like:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"choices"</SPAN><SPAN class="">:</SPAN> <SPAN class="">[</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"message"</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"assistant"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"SAP S/4HANA is SAP's next-generation intelligent ERP suite..."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">]</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"usage"</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"prompt_tokens"</SPAN><SPAN class="">:</SPAN> <SPAN class="">45</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"completion_tokens"</SPAN><SPAN class="">:</SPAN> <SPAN class="">128</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"total_tokens"</SPAN><SPAN class="">:</SPAN> <SPAN class="">173</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><P class=""><span class="lia-unicode-emoji" title=":party_popper:">🎉</span><STRONG>Congratulations! You've successfully called SAP GenAI Hub from n8n!</STRONG></P><HR /><H2 id="toc-hId-171579980">Step 3: Build an Interactive Chat Workflow</H2><P class="">Let's upgrade to a conversational version.</P><H3 id="toc-hId--318336532">Workflow Structure</H3><DIV class=""><P class=""><STRONG>Chat Trigger</STRONG> → <STRONG>HTTP Request (SAP GenAI Hub)</STRONG> → <STRONG>Code (Extract Reply)</STRONG></P></DIV><H3 id="toc-hId--514850037">3.1 Chat Trigger Node</H3><P class="">Add a <STRONG>Chat Trigger</STRONG> node (found under Advanced AI category):</P><UL class=""><LI>Keep default settings</LI><LI>It provides an embedded chat interface for testing</LI></UL><H3 id="toc-hId--711363542">3.2 Modify HTTP Request Node</H3><P class="">Change the Body to dynamically get user input:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"messages"</SPAN><SPAN class="">:</SPAN> <SPAN class="">[</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"system"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"You are a professional enterprise consultant specializing in SAP and digital transformation topics."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"user"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN>    <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">{{ JSON.stringify($json.chatInput) }}</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">]</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><BLOCKQUOTE><P class=""><CODE>{{ $json.chatInput }}</CODE> retrieves the user's input from the chat interface. Wrapping it as <CODE>{{ JSON.stringify($json.chatInput) }}</CODE> (with no surrounding quotes) keeps the request body valid JSON even when the input contains quotes or newlines.</P></BLOCKQUOTE><H3 id="toc-hId--907877047">3.3 Add Code Node to Extract Response</H3><P class="">Add a <STRONG>Code</STRONG> node to extract the AI's reply:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">const</SPAN> response <SPAN class="">=</SPAN> $input<SPAN class="">.</SPAN><SPAN class="">all</SPAN><SPAN class="">(</SPAN><SPAN class="">)</SPAN><SPAN class="">[</SPAN><SPAN class="">0</SPAN><SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">json</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> aiMessage <SPAN class="">=</SPAN> response<SPAN class="">.</SPAN><SPAN class="">choices</SPAN><SPAN class="">?.</SPAN><SPAN class="">[</SPAN><SPAN class="">0</SPAN><SPAN class="">]</SPAN><SPAN class="">?.</SPAN>message<SPAN class="">?.</SPAN>content <SPAN class="">||</SPAN> <SPAN class="">"Sorry, no response received."</SPAN><SPAN class="">;</SPAN>
</SPAN>
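// (Sketch, an addition beyond the original post) a reusable extractor with an
// explicit fallback that also surfaces an OpenAI-style error object, assuming
// failures arrive as { "error": { "message": ... } }:
function extractReply(resp, fallback) {
  // Surface an API error message when present
  if (resp?.error?.message) return `API error: ${resp.error.message}`;
  // Otherwise return the first choice's content, or the fallback text
  return resp?.choices?.[0]?.message?.content ?? fallback;
}
// e.g. const aiMessage = extractReply(response, "Sorry, no response received.");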
<SPAN><SPAN class="">return</SPAN> <SPAN class="">[</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">json</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">response</SPAN><SPAN class="">:</SPAN> aiMessage</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">]</SPAN><SPAN class="">;</SPAN></SPAN></CODE></PRE></DIV></DIV><H3 id="toc-hId--1104390552">3.4 Test the Chat</H3><OL class=""><LI>Activate the Workflow</LI><LI>Click the <STRONG>Chat</STRONG> button to open the chat interface</LI><LI>Start chatting!</LI></OL><HR /><H2 id="toc-hId--1007501050">Step 4: Add Conversation Memory (Optional)</H2><P class="">The above version treats each message independently without context. We can use n8n's <STRONG>Workflow Static Data</STRONG> to implement simple memory.</P><H3 id="toc-hId--1497417562">Improved Workflow</H3><P class="">Add a <STRONG>Code</STRONG> node before HTTP Request to manage conversation history:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">// Get session ID and user input</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> sessionId <SPAN class="">=</SPAN> $json<SPAN class="">.</SPAN><SPAN class="">sessionId</SPAN> <SPAN class="">||</SPAN> <SPAN class="">'default'</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> userInput <SPAN class="">=</SPAN> $json<SPAN class="">.</SPAN><SPAN class="">chatInput</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">// Get history from global static data</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> staticData <SPAN class="">=</SPAN> <SPAN class="">$getWorkflowStaticData</SPAN><SPAN class="">(</SPAN><SPAN class="">'global'</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">if</SPAN> <SPAN class="">(</SPAN><SPAN class="">!</SPAN>staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">)</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN> <SPAN class="">=</SPAN> <SPAN class="">{</SPAN><SPAN class="">}</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">if</SPAN> <SPAN class="">(</SPAN><SPAN class="">!</SPAN>staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN><SPAN class="">)</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN> <SPAN class="">=</SPAN> <SPAN class="">[</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">role</SPAN><SPAN class="">:</SPAN> <SPAN class="">"system"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">content</SPAN><SPAN class="">:</SPAN> <SPAN class="">"You are a professional enterprise consultant specializing in SAP and digital transformation topics."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">]</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN>
</SPAN>
<SPAN><SPAN class="">// Add user message</SPAN>
</SPAN><SPAN>staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">push</SPAN><SPAN class="">(</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">role</SPAN><SPAN class="">:</SPAN> <SPAN class="">"user"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">content</SPAN><SPAN class="">:</SPAN> userInput</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">// Keep only the last 20 messages (avoid token limit)</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> messages <SPAN class="">=</SPAN> staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">slice</SPAN><SPAN class="">(</SPAN><SPAN class="">-</SPAN><SPAN class="">20</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN>
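// (Sketch, an addition beyond the original post) slice(-20) can silently drop
// the system prompt once the history grows past 20 entries. A small helper
// that always keeps the first (system) message when trimming:
function trimHistory(history, max) {
  // Short histories are returned unchanged (as a shallow copy)
  if (history.length <= max) return history.slice();
  // Keep the system message plus the most recent (max - 1) entries
  return [history[0], ...history.slice(-(max - 1))];
}
// e.g. const messages = trimHistory(staticData.conversations[sessionId], 20);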
<SPAN><SPAN class="">return</SPAN> <SPAN class="">[</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">json</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> sessionId<SPAN class="">,</SPAN>
</SPAN><SPAN> messages</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">]</SPAN><SPAN class="">;</SPAN></SPAN></CODE></PRE></DIV></DIV><P class="">Then modify the HTTP Request Body:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"messages"</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN><SPAN class="">{</SPAN> JSON.stringify($json.messages) <SPAN class="">}</SPAN><SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><P class="">Finally, save the assistant's reply in the response extraction Code node:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">const</SPAN> response <SPAN class="">=</SPAN> $input<SPAN class="">.</SPAN><SPAN class="">all</SPAN><SPAN class="">(</SPAN><SPAN class="">)</SPAN><SPAN class="">[</SPAN><SPAN class="">0</SPAN><SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">json</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> sessionId <SPAN class="">=</SPAN> <SPAN class="">$</SPAN><SPAN class="">(</SPAN><SPAN class="">'Prepare Messages'</SPAN><SPAN class="">)</SPAN><SPAN class="">.</SPAN><SPAN class="">first</SPAN><SPAN class="">(</SPAN><SPAN class="">)</SPAN><SPAN class="">.</SPAN><SPAN class="">json</SPAN><SPAN class="">.</SPAN><SPAN class="">sessionId</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> aiMessage <SPAN class="">=</SPAN> response<SPAN class="">.</SPAN><SPAN class="">choices</SPAN><SPAN class="">?.</SPAN><SPAN class="">[</SPAN><SPAN class="">0</SPAN><SPAN class="">]</SPAN><SPAN class="">?.</SPAN>message<SPAN class="">?.</SPAN>content <SPAN class="">||</SPAN> <SPAN class="">"Sorry, no response received."</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">// Save assistant reply to history</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> staticData <SPAN class="">=</SPAN> <SPAN class="">$getWorkflowStaticData</SPAN><SPAN class="">(</SPAN><SPAN class="">'global'</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN>staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">push</SPAN><SPAN class="">(</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">role</SPAN><SPAN class="">:</SPAN> <SPAN class="">"assistant"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">content</SPAN><SPAN class="">:</SPAN> aiMessage</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">return</SPAN> <SPAN class="">[</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">json</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">response</SPAN><SPAN class="">:</SPAN> aiMessage</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">]</SPAN><SPAN class="">;</SPAN></SPAN></CODE></PRE></DIV></DIV><P class="">Now your chatbot remembers context!</P><HR /><H2 id="toc-hId--1400528060">Practical Use Cases</H2><H3 id="toc-hId--1890444572">Use Case 1: Automated Email Replies</H3><DIV class=""><DIV><P class=""><STRONG>Email Trigger</STRONG> → <STRONG>Extract Content</STRONG> → <STRONG>SAP GenAI Hub</STRONG> → <STRONG>Send Reply</STRONG></P></DIV></DIV><P class="">Sample Prompt:</P><DIV class=""><DIV class=""><DIV class=""><DIV class=""><DIV class=""> </DIV></DIV></DIV></DIV><DIV><PRE><CODE><SPAN>Based on the following customer email, generate a professional reply:</SPAN>
<SPAN>Email content:</SPAN>
<SPAN>{{ $json.emailBody }}</SPAN>
<SPAN>Requirements:</SPAN>
<SPAN>1. Professional and friendly tone</SPAN>
<SPAN>2. If it's a price inquiry, mention that detailed pricing will follow within 24 hours</SPAN>
<SPAN>3. If it's a complaint, express apology and commitment to follow up</SPAN></CODE></PRE></DIV></DIV><H3 id="toc-hId--1918774386">Use Case 2: Document Summarization</H3><DIV class=""><DIV><P class=""><STRONG>Webhook</STRONG> → <STRONG>Read PDF</STRONG> → <STRONG>SAP GenAI Hub</STRONG> → <STRONG>Save to Notion</STRONG></P><H3 id="toc-hId--2115287891"> </H3></DIV></DIV><H3 id="toc-hId-1983165900">Use Case 3: Data Analysis Assistant</H3><P class="">Combine SAP APIs to fetch business data and let AI generate analysis reports:</P><P class=""><STRONG>Schedule Trigger</STRONG> → <STRONG>SAP S/4HANA</STRONG> → <STRONG>SAP GenAI Hub</STRONG> → <STRONG>Slack Message</STRONG></P><HR /><H2 id="toc-hId-2080055402">Troubleshooting</H2><H3 id="toc-hId-1590138890"><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>401 Unauthorized</H3><P class=""><STRONG>Cause</STRONG>: Authentication failed</P><P class=""><STRONG>Check</STRONG>:</P><UL class=""><LI>Client ID and Secret are fully copied (including special characters like <CODE>!</CODE>, <CODE>|</CODE>, <CODE>$</CODE>)</LI><LI>Authentication is set to <CODE>Send In Body</CODE></LI><LI>Token URL format is correct (ends with <CODE>/oauth/token</CODE>)</LI></UL><H3 id="toc-hId-1393625385"><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>404 Not Found</H3><P class=""><STRONG>Cause</STRONG>: API path error</P><P class=""><STRONG>Check</STRONG>:</P><UL class=""><LI>Deployment ID is correct</LI><LI>Model is successfully deployed (status shows "Running" in AI Launchpad)</LI><LI>API URL path format: <CODE>/v2/inference/deployments/{id}/chat/completions</CODE></LI></UL><H3 id="toc-hId-1197111880"><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>403 Forbidden</H3><P class=""><STRONG>Cause</STRONG>: Insufficient permissions</P><P 
class=""><STRONG>Check</STRONG>:</P><UL class=""><LI><CODE>AI-Resource-Group</CODE> header is set (default is <CODE>default</CODE>)</LI><LI>User has permission to access the deployment</LI></UL><H3 id="toc-hId-1000598375"><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>400 Bad Request</H3><P class=""><STRONG>Cause</STRONG>: Invalid request body</P><P class=""><STRONG>Check</STRONG>:</P><UL class=""><LI>JSON format is correct</LI><LI><CODE>messages</CODE> is an array</LI><LI>Each message has both <CODE>role</CODE> and <CODE>content</CODE></LI></UL><HR /><H2 id="toc-hId-1097487877">Optimization Tips</H2><OL class=""><LI><STRONG>Limit conversation history</STRONG>: Keep only the last N turns to avoid exceeding context window</LI><LI><STRONG>Add error handling</STRONG>: Use n8n's Error Trigger to handle API failures gracefully</LI><LI><STRONG>Use variables</STRONG>: Store API URLs and Deployment IDs in n8n Variables for easier management</LI></OL><HR /><H2 id="toc-hId-900974372">Summary</H2><P class="">In this tutorial, you learned:</P><P class=""><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>How to configure OAuth2 authentication for SAP GenAI Hub in n8n<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>How to use HTTP Request node to call LLM APIs<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>How to build a chatbot with conversation memory<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>How to troubleshoot common errors</P><P class="">The power of n8n lies in its flexibility — you can seamlessly integrate SAP GenAI Hub with hundreds of other applications (Email, Databases, SAP S/4HANA, etc.) to build truly enterprise-grade AI automation workflows.</P><P class=""><STRONG>Happy Automating! 
<span class="lia-unicode-emoji" title=":rocket:">🚀</span></STRONG></P>2026-02-02T16:33:01.109000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/on-the-rpt-1-podcast-practical-ai-for-sap-data-demos-lessons-and-what-s/ba-p/14299887On the RPT-1 podcast: practical AI for SAP data (demos, lessons, and what’s next) 🎙️2026-02-13T10:05:44.365000+01:00amitlalmicrosofthttps://community.sap.com/t5/user/viewprofilepage/user-id/686206<P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="amitlalmicrosoft_0-1770787059764.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371298iC27BEDA06AE30A67/image-size/large?v=v2&px=999" role="button" title="amitlalmicrosoft_0-1770787059764.png" alt="amitlalmicrosoft_0-1770787059764.png" /></span></P><P><BR />If you work in SAP, you already know the truth: <STRONG>our world is rows, columns, master data, transactional history, and process context</STRONG> — not creative writing. In other words, SAP is not a “write me a poem” platform. 
That’s why I really enjoyed this conversation on <STRONG>SAP RPT-1</STRONG>, where we went beyond “AI hype” and into <STRONG>what’s genuinely useful for SAP practitioners</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="amitlalmicrosoft_1-1770787223523.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371299iE2D23C3CA07772DD/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="amitlalmicrosoft_1-1770787223523.png" alt="amitlalmicrosoft_1-1770787223523.png" /></span></P><P><STRONG>RPT-1 is built for structured enterprise data.</STRONG> In plain terms: it’s designed to reason over the kinds of datasets we live in every day—think finance, supply chain, procurement, operations—where accuracy, traceability, and consistency matter more than fancy wording.</P><H3 id="toc-hId-1896654919">What you’ll get from the episode</H3><UL><LI><P><STRONG>A practical view of “AI for SAP data”</STRONG> — where it fits (and where it doesn’t)</P></LI><LI><P><STRONG>Real demos and prototypes</STRONG> focused on outcomes, not theory</P></LI><LI><P>How to think about <STRONG>predictive + analytical use cases</STRONG> that start small but scale (without turning into a 6-month science project)</P></LI><LI><P>A clear message: <STRONG>move from chatbot experiments to measurable business impact</STRONG></P></LI></UL><P class="lia-align-center" style="text-align: center;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="amitlalmicrosoft_2-1770787318215.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371300i00671C6DE0A21746/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="amitlalmicrosoft_2-1770787318215.png" alt="amitlalmicrosoft_2-1770787318215.png" /></span></P><H3 id="toc-hId-1700141414">Why this matters for the SAP community</H3><P>Most AI discussions ignore a key point: <STRONG>SAP 
value is locked in structured data + process discipline</STRONG>. The moment you align AI with that reality—tabular signals, business semantics, governance—you shift from “cool demo” to <STRONG>real operational leverage</STRONG>.</P><P>If you’re exploring AI for SAP and you want something that’s grounded in <STRONG>enterprise reality</STRONG> (data quality, governance, repeatability, outcomes), this episode is a solid watch:</P><P><span class="lia-unicode-emoji" title=":movie_camera:">🎥</span>Watch here my podcast with Holger and Goran from Microsoft:<BR /><A class="" href="https://youtu.be/CbUDRgEO0yI?si=J4vE-BZ0Xig_srYo" target="_new" rel="noopener nofollow noreferrer">https://youtu.be/CbUDRgEO0yI?si=J4vE-BZ0Xig_srYo</A></P><P><div class="video-embed-center video-embed"><iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FCbUDRgEO0yI%3Ffeature%3Doembed&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DCbUDRgEO0yI&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FCbUDRgEO0yI%2Fhqdefault.jpg&type=text%2Fhtml&schema=youtube" width="640" height="360" scrolling="no" title="#276 - ToW Exploring SAP RPT-1 (Amit Lal) | SAP on Azure Video Podcast" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe></div><BR /><BR />Please share your thoughts and feedback! 
<BR /><BR />Cheers,<BR /><A href="https://www.amit-lal.com" target="_blank" rel="noopener nofollow noreferrer">Amit Lal</A><BR /><BR /></P>2026-02-13T10:05:44.365000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/decision-logic-modernization-in-sap-integrations-using-sap-rpt-1-for/ba-p/14329617Decision Logic Modernization in SAP Integrations: Using SAP-RPT-1 for Intelligent Routing Decisions2026-02-16T19:54:29.244000+01:00arunmaarirajha_kvhttps://community.sap.com/t5/user/viewprofilepage/user-id/2033343<P>The <STRONG>Integration Modernization</STRONG> concept is often discussed in two dimensions:<BR />1)<STRONG> Platform modernization</STRONG> – moving from older stacks like SAP NetWeaver PI/PO, SAP Neo, or legacy 3rd party platforms to SAP Integration Suite.<BR />2) <STRONG>Scenario modernization</STRONG> – shifting from legacy protocols to modern integration patterns such as APIs & event-driven architectures; upgrading ABAP / Java mapping to graphical mapping / Groovy script; or moving from basic auth to OAuth 2.0 or client-certificate-based authentication, and so on.</P><P>But there is another modernization opportunity at the scenario level: <STRONG>decision logic modernization</STRONG>.</P><P>In this blog, we explore how routing decision logic — an orchestration step — can be modernized using SAP-RPT-1, SAP’s latest foundation model hosted on BTP AI Core (Generative AI Hub), applied to an outbound delivery routing scenario.</P><P><STRONG>The Traditional Approach: Static Rules, Lookups, and Growing Complexity:<BR /></STRONG>In SAP PI/PO and Integration Suite, routing decisions are typically implemented using deterministic, rule-based logic. These decisions often control how a scenario behaves – including which receiver system to call or which transformation branch to follow. 
The common approaches include:</P><UL><LI>Static routing via Routers based on payload elements or fields</LI><LI>Dynamic configurations, such as externalized parameters passed at runtime</LI><LI>Lookups to external systems (e.g., via JDBC, SOAP, RFC lookups) to evaluate conditions at runtime — achieved in Integration Suite via patterns like <EM>Request-Reply</EM> or <EM>Content Enricher</EM></LI></UL><P>These mechanisms work — but they come with growing challenges:</P><UL><LI>Rules get combinatorially complex with more conditions and evolving business scenarios</LI><LI>Lookup tables become burdensome to maintain</LI><LI>Decisions become rigid and hard to evolve over time</LI></UL><P>As a result, rule-based decisioning becomes harder to maintain as business requirements evolve. Agility suffers. Adaptation becomes expensive.</P><P>What if, instead of relying on static rule-based logic, iflows could make decisions based on historical business patterns — applying predictive machine learning without the overhead of training and managing custom models? With the in-context learning capabilities of SAP’s latest foundation model, SAP-RPT-1, such predictions can be performed on-the-fly at runtime.</P><P><STRONG>Enter SAP-RPT-1: Foundation Model for Predictive Tasks:<BR /></STRONG>SAP-RPT-1 is a pre-trained foundation model available via BTP AI Core (Generative AI Hub). 
Unlike traditional ML services that require dedicated data science setup, SAP-RPT-1 offers:</P><UL><LI>Pre-trained model, ready for “plug and play” in your scenario</LI><LI>No ML infrastructure setup</LI><LI>No data science expertise required</LI><LI>No-code consumption via API</LI><LI>Faster time-to-market</LI><LI>Democratization of AI capabilities across IT functions</LI></UL><P>SAP-RPT-1 is pre-trained specifically for predictive tasks such as classification and regression over ERP tabular data, <SPAN>unlike many general-purpose GenAI models that are primarily trained on large corpora of natural language text. </SPAN>Instead of building and managing a full ML lifecycle, integration developers can now embed predictive intelligence directly in their iflows — without any data-science heavy lifting.</P><P>If you are new to SAP-RPT-1, I recommend that you read <SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/a-new-paradigm-for-enterprise-ai-in-context-learning-for-relational-data/ba-p/14260221" target="_blank">this introductory blog</A></SPAN>.<BR />Also, explore this <SPAN><A href="https://rpt.cloud.sap/" target="_blank" rel="noopener nofollow noreferrer">SAP-RPT-1 playground</A></SPAN>.</P><P><STRONG>Business Scenario: Intelligent Outbound Delivery Routing with SAP-RPT-1:<BR /></STRONG>SmartSense Technologies (SST), a global provider of smart security devices, faces challenges in routing outbound deliveries from its manufacturing plants to distribution centers, retail stores, and end customers. Their current routing logic, embedded in integration flows, relies on static rules and lookup tables – employing a combination of Material, Quantity, Delivery priority, Material group, Ship to location, and so on. 
This leads to frequent shipping delays, cost overruns, and SLA violations whenever logistics conditions change and a sub-optimal 3PL partner is chosen.</P><P>To overcome these limitations, SST decides to leverage SAP Integration Suite together with the SAP-RPT-1 foundation model. By embedding machine learning–driven decision-making into its integration flow, SST aims to dynamically select the optimal logistics partner for each delivery based on historical performance, cost, and delivery times. While SAP-<SPAN>RPT-1 is capable of solving predictive tasks such as classification and regression, we will be using it for a classification task — i.e., to classify a given delivery (defined by delivery header & item attributes) to the optimal 3PL logistics provider.</SPAN></P><P>The following diagram illustrates the target architecture implemented by SST for intelligent outbound delivery routing using SAP Integration Suite and the SAP-RPT-1 model, hosted on BTP’s Generative AI Hub:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_11-1771266938443.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373226i4CCAFD40AD592E0E/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_11-1771266938443.png" alt="arunmaarirajha_kv_11-1771266938443.png" /></span></P><P><STRONG><BR />Step 1: Outbound Delivery Creation<BR /></STRONG>An outbound delivery is created in SAP S/4HANA, which triggers an outbound notification event. 
The event is consumed by the iflow, which then calls S/4HANA OData API to fetch the following delivery related header and item attributes.</P><P><U>Delivery header fields:</U> DeliveryDate, SalesOrganization, ShipToParty, SoldToParty, DeliveryDocumentType, CreationDate, ShippingPoint, DeliveryPriority, IncotermsClassification, TransactionCurrency</P><P><U>Delivery item fields:</U> Material, DeliveryDocumentItemText, MaterialGroup, ActualDeliveryQuantity, DeliveryQuantityUnit, Plant, StorageLocation, ItemGrossWeight, ItemWeightUnit, ControllingArea, DistributionChannel, GoodsMovementType, ProfitCenter</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_12-1771266938447.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373225i9B0E4B75AE051BE8/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_12-1771266938447.png" alt="arunmaarirajha_kv_12-1771266938447.png" /></span></P><P> </P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_13-1771266938452.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373224iF4BEC5389147947E/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_13-1771266938452.png" alt="arunmaarirajha_kv_13-1771266938452.png" /></span></P><P><STRONG>Step 2: Context Selection – Random Historical Records<BR /></STRONG>We store historic labeled data i.e. delivery headers, items and corresponding optimal 3PL partner data in SAP HANA Cloud. The iflow is configured to fetch a random set of this historic data from HANA Cloud. 
As recommended in <SPAN><A href="https://community.sap.com/t5/artificial-intelligence-blogs-posts/sap-rpt-1-a-step-by-step-guide-on-getting-started/ba-p/14290171" target="_blank">this blog</A></SPAN>, for most enterprise use-cases, where you have about hundred thousand to million lines of labelled data, depending on how diverse your scenario data is, you may choose the context size, by iterating and finalizing the apt number of historical records to pick. By picking random records, we are able to provide statistically diverse examples, improve prediction quality and avoid bias from sequential records.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_14-1771266938461.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373231i5B72696901521D40/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_14-1771266938461.png" alt="arunmaarirajha_kv_14-1771266938461.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_15-1771266938475.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373232iC11B3E383903BD1E/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_15-1771266938475.png" alt="arunmaarirajha_kv_15-1771266938475.png" /></span></P><P>The Iflow executes the following query on the HANA Cloud DB:</P><pre class="lia-code-sample language-sql"><code>SELECT TOP ${property.NumberofRecords}
"DeliveryDate", "DeliveryDocumentNumber", "CreationDate", "DeliveryPriority", "IncotermsClassification", "Material", "DeliveryDocumentItemText", "MaterialGroup", "ActualDeliveryQuantity", "DeliveryQuantityUnit", "ItemGrossWeight", "ItemWeightUnit", "DistributionChannel", "PartnerNumber"
FROM "RPT1"."historic_3pl_selections"
ORDER BY RAND()</code></pre><P><STRONG><BR />Step 3: Constructing the SAP-RPT-1 Request<BR /></STRONG>We now combine:</P><UL><LI>Historical random records (known 3PL partner)</LI><LI>The current outbound delivery (3PL partner to be predicted)</LI></UL><P>The current transaction includes a placeholder: [PREDICT]</P><P>The request payload contains:</P><UL><LI>900 historical labeled rows</LI><LI>1 new outbound delivery row marked for prediction</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_16-1771266938479.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373230iDB50F5FD9567EFC8/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_16-1771266938479.png" alt="arunmaarirajha_kv_16-1771266938479.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_17-1771266938482.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373235iEE67725F5F92224C/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_17-1771266938482.png" alt="arunmaarirajha_kv_17-1771266938482.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_18-1771266938487.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373236i224D7ABAF3DC24FC/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_18-1771266938487.png" alt="arunmaarirajha_kv_18-1771266938487.png" /></span></P><P>For detailed info on payload construction, refer to the <SPAN><A href="https://help.sap.com/docs/sap-ai-core/generative-ai/example-payloads-for-inferencing-sap-rpt-1#request-payloads" target="_blank" rel="noopener noreferrer">official documentation page</A></SPAN> and the <SPAN><A href="https://github.com/SAP-samples/sap-rpt-samples" target="_blank" rel="noopener nofollow 
noreferrer">GitHub repository</A></SPAN> (sample Postman/Bruno collections are available for experimentation).</P><P>Further, while creating the inferencing request to SAP-RPT-1, we trim the original 24 columns down to only 14 delivery attributes, removing fields such as Sales Organization, Controlling Area, and Profit Center that do not influence partner selection. This is one of the recommended best practices for achieving the best possible predictions. Read more recommended <SPAN><A href="https://help.sap.com/docs/sap-ai-core/generative-ai/example-payloads-for-inferencing-sap-rpt-1#best-practices" target="_blank" rel="noopener noreferrer">best practices here</A></SPAN>.<BR /><STRONG> </STRONG></P><P><STRONG>Step 4: Deploying the SAP-RPT-1 model in Generative AI Hub</STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_19-1771266938496.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373237i2ABECD87E5ACEC4B/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_19-1771266938496.png" alt="arunmaarirajha_kv_19-1771266938496.png" /></span></P><P><STRONG><BR />Step 5: SAP-RPT-1 inference response<BR /></STRONG>The response contains the number of predicted records and the predicted field (here, the optimal 3PL partner), along with a confidence score.</P><P>Response from our example scenario:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_20-1771266938500.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373238i6A7834DDBC4CB100/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_20-1771266938500.png" alt="arunmaarirajha_kv_20-1771266938500.png" /></span></P><P>You can also send multiple unknown records in one batch. To read more about the maximum number of 
records and columns supported by SAP-RPT-1, check out <SPAN><A href="https://help.sap.com/docs/sap-ai-core/generative-ai/sap-rpt-1#sap-rpt-models" target="_blank" rel="noopener noreferrer">this help page</A></SPAN>.</P><P><STRONG>Step 6: Intelligent Routing in the Iflow<BR /></STRONG>The iflow now:</P><OL><LI>Reads the predicted PartnerNumber</LI><LI>Evaluates the confidence against a threshold (an optional safeguard)</LI><LI>Routes the outbound delivery to the optimal 3PL partner</LI><LI>Optionally logs the decision and confidence in HANA Cloud for traceability</LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_21-1771266938507.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373239iADC538832CCAAFFB/image-size/large?v=v2&px=999" role="button" title="arunmaarirajha_kv_21-1771266938507.png" alt="arunmaarirajha_kv_21-1771266938507.png" /></span></P><P><STRONG><BR />When AI in Iflows Brings Real Business Value</STRONG></P><UL><LI><STRONG>When routing decisions directly impact business outcomes</STRONG><BR />If sub-optimal routing can lead to SLA breaches, shipment delays, higher costs, or customer dissatisfaction, intelligent decision logic creates a measurable advantage.</LI><LI><STRONG>When rule-based logic becomes complex and brittle</STRONG><BR />As business variables multiply (products, regions, priorities, partners), static rules become hard to maintain and scale. 
ML handles multi-dimensional patterns more effectively.</LI><LI><STRONG>When historical ERP data contains predictive patterns</STRONG><BR />If past delivery and fulfillment data reflects consistent business behavior, ML can learn from it and improve future routing decisions beyond deterministic conditions.</LI><LI><STRONG>When business context evolves over time</STRONG><BR />With changing product mixes, customer bases, or distribution models, retrainable ML models adapt dynamically — avoiding constant manual rule maintenance.</LI></UL><P><SPAN>Not every integration scenario requires ML capabilities. But where routing decisions carry operational or financial impact, replacing rigid rule sets with adaptive decision logic can significantly improve resilience and maintainability. Decision logic modernization is therefore not about adding AI everywhere — it is about applying it where it meaningfully improves orchestration.</SPAN></P><P><EM><SPAN>Note: This blog revisits </SPAN></EM><SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/intelligent-orchestration-in-iflows-business-aware-routing-with-sap-btp-s/ba-p/14162148" target="_blank"><EM>our earlier implementation</EM></A><EM> in light of the planned deprecation of SAP BTP Data Attribute Recommendation service, demonstrating an updated approach using SAP-RPT-1 model.</EM></SPAN></P>2026-02-16T19:54:29.244000+01:00