https://raw.githubusercontent.com/ajmaradiaga/feeds/main/scmt/topics/SAP-AI-Services-blog-posts.xmlSAP Community - SAP AI Services2026-02-23T00:11:33.052233+00:00python-feedgenSAP AI Services blog posts in SAP Communityhttps://community.sap.com/t5/technology-blog-posts-by-sap/roadblocks-to-ai-adoption/ba-p/14281074Roadblocks to AI Adoption2025-12-01T13:28:01.063000+01:00MIKE210https://community.sap.com/t5/user/viewprofilepage/user-id/1952764<P class="lia-align-justify" style="text-align : justify;"><U><STRONG>Introduction</STRONG></U></P><P class="lia-align-justify" style="text-align : justify;">In an era defined by rapid technological advancement, businesses face both significant challenges and valuable opportunities as they pursue digital transformation and integrate AI-driven solutions. As organisations work to maintain a competitive edge, AI’s impact on business strategy, customer experience, and operational efficiency has become increasingly pivotal.</P><P class="lia-align-justify" style="text-align : justify;">As part of my recent doctoral studies in Digitalisation, specialising in Technology Adoption and AI Integration at IAE Nice, Graduate School of Management, Université Côte d’Azur, I explored this evolving landscape in depth through research focused on SAP customers. 
I am happy to share the key findings through a series of insightful and practical articles, each offering guidance for both SAP customer leadership and SAP executives navigating the complexities of AI adoption and technology transformation.</P><P class="lia-align-justify" style="text-align : justify;"> </P><OL class="lia-align-justify" style="text-align : justify;"><LI><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/maximising-ai-potential-a-blueprint-for-business-success/ba-p/14281058" target="_blank">Maximising AI Potential: A Blueprint for Business Success</A></LI><LI>Roadblocks to AI Adoption <STRONG>(THIS ARTICLE)</STRONG></LI><LI><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/is-sap-business-data-cloud-the-answer-to-your-ai-ambitions/ba-p/14268363" target="_blank">Is SAP Business Data Cloud the Answer to Your AI Ambitions?</A></LI></OL><P class=""><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="1760795787743.png" style="width: 850px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/347032iF8A6D8889B46C80B/image-size/large?v=v2&px=999" role="button" title="1760795787743.png" alt="1760795787743.png" /></span></SPAN></P><P class=""><SPAN>Photo by adventtron on Unsplash</SPAN></P><P class="">With advancements in technology continually reshaping the business landscape, artificial intelligence (AI) stands out as a transformative force across industries. However, despite its potential, many organizations stumble on the road to successful AI adoption. Understanding these barriers is critical for any business hoping to leverage AI effectively.</P><P class=""><U><STRONG>Organizational Readiness and Resistance to Change</STRONG></U></P><P class="">One of the foremost challenges in AI adoption is the readiness of an organization to integrate new technology. 
According to recent studies conducted by the author within SAP, involving 29 SAP customers and their SAP counterparts, many businesses lack the necessary infrastructure, skilled personnel, and strategic vision to implement AI successfully. Additionally, the cultural resistance within organizations can be a significant hurdle. Employees often fear job displacement due to automation or feel overwhelmed by the pace of technological change, thereby resisting new implementations. A customer from the Oil & Gas industry in EMEA South stated in an interview with the author that "The transformation is happening in three areas: people, processes, and technology. One significant challenge is that people often resist change, which is something we frequently notice when adopting new technologies. Sometimes, this technology is introduced without proper change management. As a result, the leadership faces challenges in later stages to manage the change and convince the end users to accept the AI automation and overcome their initial hesitations and fear."</P><P class=""><U><STRONG>Cost and Complexity Considerations</STRONG></U></P><P class="">The cost involved in deploying AI solutions is another significant barrier. Setting up AI systems requires substantial initial investment in technology and training, which can be a deterrent, especially for SMEs. Moreover, the complexity of AI technology itself poses a challenge. Companies must ensure they have the expertise to not just implement but also maintain and scale AI solutions, which often necessitates costly ongoing training and development. A customer in the Chemicals sector from EMEA South reported to the author of this article that "Yeah, it's an easy question because in the end if there's an SAP competitor upcoming next year with a cheaper alternative, then we will think about going for SAP’s competition." 
A CSM from the EMEA South team shared his experience with the author: "A customer that I managed a couple of years ago has moved away from us and moved to an SAP competitor due to the cost of our solutions."</P><P class=""><U><STRONG>Data Privacy and Security Concerns</STRONG></U></P><P class="">Data is the lifeblood of AI. However, the increasing stringency of data protection regulations such as GDPR in Europe presents a compliance maze for companies to navigate. Ensuring data privacy and securing AI interactions becomes a critical concern that companies must address, adding another layer of complexity to AI adoption. A Higher Education industry customer based in EMEA North shares with the author that "Data privacy and security is absolutely a concern even today, though it really depends very much on the industry."</P><P class=""><U><STRONG>Lack of Clear ROI</STRONG></U></P><P class="">The uncertainty about the return on investment (ROI) from AI projects also serves as a barrier. AI initiatives can be experimental in nature, making it difficult to predict outcomes precisely. This uncertainty can make stakeholders hesitant to commit the required resources, slowing down or even halting AI adoption processes.</P><P class=""><U><STRONG>Vendor Selection and Integration Challenges</STRONG></U></P><P class="">Choosing the right technology provider and ensuring the integration of AI with existing systems is another challenge for our customers. Businesses often struggle with choosing between SAP and multiple other options, each promising superior capabilities. Making the wrong choice can lead to integration issues, wasted resources, and failed projects. 
An Oil & Gas industry customer from EMEA North expresses in an interview with the author that "It also plays a very important role in the industry where I'm working right now, where we have to leverage technology and AI to lead the market and to deliver the requirements of our demanding customers worldwide."</P><P class=""><U><STRONG>Moving Forward: Recommendations for Overcoming AI Adoption Barriers</STRONG></U></P><P class="">1. At SAP, we understand the importance of cultivating a culture receptive to innovation. We must work with our customers’ businesses to develop comprehensive change management strategies. Our support will help alleviate resistance and ensure a smooth integration of AI into their existing processes, fostering continuous learning and adaptability.</P><P class="">2. At SAP, we recommend allocating resources for essential infrastructure upgrades such as RISE with SAP. Our team should assist in building AI competency through employee training programs in which our leadership is already heavily investing. Additionally, we can help establish partnerships with universities and tech institutes, ensuring a steady flow of skilled talent to support our customers’ AI initiatives.</P><P class="">3. At SAP, we prioritise data security. It’s necessary for us to help strengthen our customers’ cybersecurity measures and ensure compliance with data protection laws. By safeguarding customer operations, we will build trust among stakeholders, thereby smoothing the path for successful AI integration.</P><P class="">4. To demonstrate the value of AI projects, we ought to work with customers to establish clear metrics for evaluating their performance. This will help secure ongoing support from stakeholders by showcasing tangible results and a positive return on investment.</P><P class="">5. We believe in transparency and performance. 
By implementing a robust assessment process for our solutions, including pilot projects and performance benchmarks, we will ensure that our collaboration best matches our customers’ needs. Our commitment is to be our customers’ trusted partner, guiding them through successful AI adoption and implementation.</P><P class=""><U><STRONG>Conclusion</STRONG></U></P><P class="">In summary, while artificial intelligence promises transformative benefits for businesses, the journey to successful AI adoption is fraught with challenges. These roadblocks range from organisational readiness and resistance to change, cost and complexity considerations, data privacy and security concerns, and uncertainty about ROI, to vendor selection and integration issues. To overcome these hurdles, businesses must approach AI adoption strategically and holistically.</P><P class="">At SAP, we understand these challenges and are committed to helping our customers navigate them effectively. In light of the above, I recommend focusing on cultural receptivity, infrastructure upgrades, data security, clear ROI metrics, and transparency in our sales strategies, and striving to be a trusted partner in our customers' AI adoption journey. Through comprehensive change management, skill development, robust cybersecurity measures, and performance benchmarking, SAP can aim to smooth the path to successful AI integration. By addressing these barriers head-on and fostering a collaborative approach, businesses can unlock the full potential of AI and drive meaningful growth and innovation.</P><P class="">If you'd like to explore further, my full research article is <A class="" href="https://sap-my.sharepoint.com/:b:/p/mike_popal/EWOViZXA_oNNlQ5_Pg1UgpkBw5wG65O0nkKOcKH4NCnfdg?e=WmYM2z" target="_self" rel="nofollow noopener noreferrer">available here</A>. 
Please don't hesitate to contact me if you wish to discuss the research findings <span class="lia-unicode-emoji" title=":slightly_smiling_face:">🙂</span></P><P class=""><SPAN>Find me on LinkedIn: </SPAN><A href="https://www.linkedin.com/in/mi4po/" target="_blank" rel="noopener nofollow noreferrer">https://www.linkedin.com/in/mi4po/</A></P>2025-12-01T13:28:01.063000+01:00https://community.sap.com/t5/human-capital-management-blog-posts-by-sap/bridging-data-quality-gaps-in-sap-successfactors-tih-with-csvvalidationapp/ba-p/14283170Bridging Data Quality Gaps in SAP SuccessFactors TIH with CSVValidationApp and Vibe Coding2025-12-03T16:06:10.964000+01:00DivyaTiwarihttps://community.sap.com/t5/user/viewprofilepage/user-id/17995<DIV>
<H3 id="toc-hId-1895545887"><STRONG>Introduction</STRONG></H3>
<P>Data quality challenges in SAP SuccessFactors Talent Intelligence Hub (TIH) often go unnoticed until they impact user experience. When thousands of attributes are imported without proper validation, issues like HTML tags, broken formatting, and unescaped characters can disrupt Growth Portfolios and compromise analytics and the employee experience. This blog introduces <STRONG>CSVValidationApp</STRONG>, a solution designed to bridge this gap using <STRONG>functional expertise and modern development practices like Vibe Coding</STRONG>—and opens the door for community collaboration on similar innovations.</P>
<HR />
<H3 id="toc-hId-1699032382"><STRONG>Why This Matters</STRONG></H3>
<P> <SPAN>SAP Talent Intelligence Hub is a strategic enabler for skills-based workforce planning. It helps organizations create dynamic career paths, leverage AI-driven skill recommendations, and empower employees through Growth Portfolios. These capabilities unlock smarter talent decisions and a more agile workforce.</SPAN></P>
<DIV>
<P>To fully realize these benefits, <STRONG>data integrity during imports is critical</STRONG>. According to <A href="https://help.sap.com/docs/successfactors-platform/using-talent-intelligence-hub/importing-entities?locale=en-US&q=successfactors+attribute+import+talent+intelligence&version=LATEST" target="_self" rel="noopener noreferrer">SAP Help Portal,</A> importing entities like attributes, tags, and behaviors into TIH requires strict adherence to structure and format. Common issues include:</P>
<UL>
<LI>Incorrect headers or missing mandatory fields in CSV files</LI>
<LI>Improper encoding (non-UTF-8) causing character display errors</LI>
<LI>Unescaped quotes or HTML tags in descriptions, which can truncate or break content</LI>
<LI>Large file sizes impacting performance (recommended limit: 25,000 records per file)</LI>
<LI>Incorrect date formats for proficiencyAssignedDate when updating Growth Portfolio attributes</LI>
</UL>
<P>These challenges can lead to incomplete or inaccurate data in the <STRONG>Attributes Library</STRONG> and <STRONG>Growth Portfolio</STRONG>, affecting AI-driven recommendations and user experience. Adding a validation step before import ensures clean, consistent data—reducing manual troubleshooting and enabling HR teams to focus on strategic initiatives.</P>
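To make that checklist concrete, here is a minimal Python sketch of a pre-import validation pass covering encoding, mandatory headers, the record limit, and embedded HTML. The column names and function name are illustrative assumptions, not CSVValidationApp's actual code or the real TIH field list:

```python
import csv
import io
import re

# Illustrative column set; a real TIH import defines its own mandatory fields
REQUIRED_HEADERS = {"attributeId", "locale", "name", "description"}
MAX_RECORDS = 25000  # recommended per-file limit noted above


def validate_attribute_csv(data: bytes) -> list:
    """Return a list of issues found in an attribute CSV before import."""
    # 1. Encoding: the file must be valid UTF-8
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        return ["file is not valid UTF-8"]

    issues = []
    rows = list(csv.DictReader(io.StringIO(text)))

    # 2. Headers: every mandatory column must be present
    headers = set(rows[0].keys()) if rows else set()
    missing = REQUIRED_HEADERS - headers
    if missing:
        issues.append(f"missing mandatory columns: {sorted(missing)}")

    # 3. Size: stay under the recommended per-file record limit
    if len(rows) > MAX_RECORDS:
        issues.append(f"{len(rows)} records exceed the {MAX_RECORDS} limit")

    # 4. Content: flag HTML tags that would break rendering after import
    for lineno, row in enumerate(rows, start=2):  # line 1 is the header
        if re.search(r"<[^>]+>", row.get("description") or ""):
            issues.append(f"line {lineno}: HTML tags in description")
    return issues
```

Running a check like this before upload surfaces exactly the categories listed above, instead of discovering them after a failed or partial import.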
</DIV>
<H3 id="toc-hId-1502518877"><STRONG>My Perspective and Approach</STRONG></H3>
<P>Having worked extensively across <STRONG>implementations, integrations, and functional solution design</STRONG> in SuccessFactors, I’ve seen how small data issues can create big challenges downstream. This experience shaped my approach: build a tool that doesn’t just validate data but actively improves it before it enters TIH.</P>
<P>I chose <STRONG>Vibe Coding</STRONG> as the development approach because it accelerates delivery through real-time collaboration, instant previews, and iterative design—all within SAP Business Application Studio. This allowed me to focus on business logic and user experience rather than boilerplate coding, ensuring the solution was delivered quickly without compromising quality.</P>
<HR />
<H3 id="toc-hId-1306005372"><STRONG>How CSVValidationApp Fits into the TIH Workflow</STRONG></H3>
<P>CSVValidationApp acts as a <STRONG>data quality gatekeeper</STRONG> in the integration process. In the following example workflow, after data is exported from Integration Center, the file passes through CSVValidationApp for cleansing and validation. Once clean, the data is transformed and mapped to TIH-compatible attributes before being imported back into the system.</P>
<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="DivyaTiwari_0-1764771625951.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/348134iF204DFBBBA9998B1/image-size/medium?v=v2&px=400" role="button" title="DivyaTiwari_0-1764771625951.png" alt="DivyaTiwari_0-1764771625951.png" /></span>
<P>This workflow ensures that what enters TIH is not just valid but optimized for rendering and usability. The result? A clean, enriched skills library that supports AI-driven recommendations and future enhancements.</P>
<HR />
<H3 id="toc-hId-1109491867"><STRONG>What Makes CSVValidationApp Powerful</STRONG></H3>
<P>CSVValidationApp performs comprehensive checks tailored for SuccessFactors Integration Center requirements. It detects encoding issues, validates structure, identifies problematic characters, and removes HTML tags and entities. Beyond validation, it offers <STRONG>auto-correction</STRONG>, generating clean content with proper quoting and escaping. A preview feature and detailed statistics give administrators confidence before import.</P>
<P>By automating these steps, CSVValidationApp transforms a tedious, error-prone process into a seamless experience—empowering HR teams to focus on strategic tasks rather than data cleanup.</P>
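As a rough illustration of what such auto-correction involves, the snippet below uses only Python's standard library to decode HTML entities, strip tags, and re-emit properly quoted CSV. It is a simplified sketch of the general technique, not the app's actual implementation, and the function names are my own:

```python
import csv
import html
import io
import re


def clean_description(raw: str) -> str:
    """Decode HTML entities and strip tags from a single field value."""
    # Turn entities such as &auml; or &bdquo; into real characters
    text = html.unescape(raw)
    # Replace any remaining HTML tags, e.g. <p>...</p>, with a space
    text = re.sub(r"<[^>]+>", " ", text)
    # Collapse the whitespace left behind by removed tags
    return re.sub(r"\s+", " ", text).strip()


def clean_csv_text(text: str) -> str:
    """Re-emit CSV with cleaned fields and consistent quoting/escaping."""
    out = io.StringIO()
    writer = csv.writer(out, quoting=csv.QUOTE_ALL)
    for row in csv.reader(io.StringIO(text)):
        writer.writerow([clean_description(cell) for cell in row])
    return out.getvalue()
```

Letting `csv.writer` handle the quoting is the key design point: escaped quotes and embedded commas are emitted correctly by construction, rather than patched by hand after an import fails.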
<HR />
<H3 id="toc-hId-912978362"><STRONG>Looking Ahead: Unlocking the Full Potential of TIH</STRONG></H3>
<P>SAP Talent Intelligence Hub is continuously evolving to help organizations stay ahead in a skills-based economy. Upcoming innovations include <STRONG>AI-driven skill inference</STRONG>, <STRONG>integration with external taxonomies</STRONG>, <STRONG>expanded APIs</STRONG>, and <STRONG>advanced analytics for workforce agility</STRONG>. These capabilities will empower HR teams to make smarter decisions and deliver personalized employee experiences.</P>
<P>Clean, validated data is the foundation for these advancements. By ensuring data integrity from the start, organizations can confidently leverage AI recommendations, predictive insights, and future integrations without disruption.</P>
<P>CSVValidationApp supports this vision by acting as a proactive data quality gatekeeper. Future enhancements will include <STRONG>AI-powered attribute correction suggestions</STRONG>, <STRONG>direct SAP API integration for one-click imports</STRONG>, and <STRONG>support for additional modules like Learning and Recruiting</STRONG>—making the process even more seamless and intelligent.</P>
<HR />
<H3 id="toc-hId-716464857"><STRONG>Getting Started</STRONG></H3>
<P>CSVValidationApp is built as an SAP Fiori application using SAP Business Application Studio. To run it locally, simply clone the ***REMOVED BY MODERATION***, install dependencies, and start the app.<BR /><BR />Here is an example screenshot from the application. The attribute text file I validate below had many data rows with competency descriptions such as: <BR /><EM>"1000009","de_DE","BUSINESS & TRANSFORMATION..","<p>Sie planen Ihre Vorhaben, definieren herausfordernde Ziele und entscheiden mutig.</p><p>Herausforderungen im &bdquo;VUCA&ldquo;-Umfeld (Volatilit&auml;t &ndash; Unsicherheit &ndash; Komplexit&auml;t &ndash; Ambiguit&auml;t) begegnen Sie aktiv und entwickeln eine klare Vision f&uuml;r Ihren Verantwortungsbereich.</p><p><a href= xyzurl target=_blank>Kriterien und Verhaltensanker</a> anzeigen.</p>","COMPETENCY"</EM><BR /><BR /><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="result.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/348487iD76E0615EBEE0887/image-size/medium?v=v2&px=400" role="button" title="result.png" alt="result.png" /></span></P>
<P> <SPAN>This description gets corrected to:</SPAN></P>
<P><EM>"1000009","de_DE","BUSINESS & TRANSFORMATION..","Sie planen Ihre Vorhaben, definieren herausfordernde Ziele und entscheiden mutig. Bei der Umsetzung agieren Sie lösungsorientiert, bewerten Ergebnisse und Prozesse und nehmen notwendige Anpassungen vor. Herausforderungen im „VUCA\"-Umfeld (Volatilität - Unsicherheit - Komplexität - Ambiguität) begegnen Sie aktiv und entwickeln eine klare Vision für Ihren Verantwortungsbereich. ","COMPETENCY"</EM><BR /><BR /></P>
</DIV>
<DIV class="">
<DIV class="">
<DIV class="">
<DIV class="">
<DIV class="">
<DIV class="">
<DIV class=""><HR />
<H3 id="toc-hId-519951352"><STRONG>Final Thoughts & Community Ask</STRONG></H3>
</DIV>
</DIV>
</DIV>
<DIV class="">
<DIV class="">
<P>Data quality may seem like a technical detail, but in reality, it’s a strategic enabler. By combining functional expertise with modern development practices, CSVValidationApp delivers a solution that not only solves today’s challenges but positions organizations for tomorrow’s opportunities.</P>
<P>I’d love to hear from the SAP Community:</P>
<UL>
<LI>Have you built similar tools using <STRONG>Vibe Coding</STRONG> or other rapid development approaches?</LI>
<LI>What other gaps do you see in <STRONG>data validation for SuccessFactors integrations</STRONG>, and how can we bridge them together?</LI>
</UL>
<P>Share your thoughts, examples, and ideas in the comments. Let’s collaborate to make data quality a strength, not a challenge.</P>
</DIV>
</DIV>
</DIV>
</DIV>
</DIV>
</DIV>2025-12-03T16:06:10.964000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/the-hidden-behavior-of-llms-prompt-caching-and-determinism/ba-p/14285663The Hidden Behavior of LLMs - Prompt Caching and Determinism2025-12-08T08:17:36.779000+01:00santhosini_Khttps://community.sap.com/t5/user/viewprofilepage/user-id/138505<P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG><BR /><FONT size="7">The Hidden Behavior of LLMs - Prompt Caching and Determinism<BR /><FONT size="5">Are LLMs Really Stateless?</FONT><BR /></FONT></STRONG></FONT></P><P><EM><FONT face="arial,helvetica,sans-serif" size="3">A Developer’s Deep Dive Into the <STRONG>Hidden Behavior of LLMs</STRONG> (with code-gen as the use case)</FONT></EM></P><P><EM><FONT face="arial,helvetica,sans-serif" size="3">In this article, I explore three questions that emerged during my testing: why LLMs appear stateful, why prompt updates improve accuracy, and why outputs vary even with temperature set to zero.<BR /></FONT></EM></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>Introduction: The "Wait, What?" Moment</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">I recently started building a code generation tool using Large Language Models (LLMs) via API calls. 
Like many of you, I learned the golden rule of LLM development early on: <STRONG>LLMs are stateless.</STRONG> They don’t "remember" past requests; every API call is a fresh start.</FONT></P><P><FONT face="arial,helvetica,sans-serif">But as I tested my tool, I noticed something that confused me.</FONT></P><OL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Whenever I made changes in my prompt instructions:</STRONG> The output was perfect!</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>When I changed <EM>nothing</EM> and sent the exact same request:</STRONG> The output still changed!</FONT></LI></OL><P><FONT face="arial,helvetica,sans-serif">I expected that if the inputs were identical (and the model is stateless), the result should be consistent. This inconsistency led me down a rabbit hole of research where I landed on a feature that completely changed how I optimize my workflows: <STRONG>Prompt Caching.</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">If you are building AI Agents or Code Gen tools on SAP BTP (using SAP Generative AI Hub) or directly via provider APIs, this is a mechanism you need to understand—not just for speed, but to clear up the "caching" misconception.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>The Myth Buster: "Prompt Cache" NE "Response Cache"</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">Here is the inference I made during my research, which might clear up your confusion too:</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>The Reality:</STRONG> <STRONG>Prompt Caching does not cache the answer.</STRONG> It caches the <STRONG>question.</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">When we hear "Cache" in software development (like SAP ABAP or SAP HANA), we think of storing the <EM>result</EM> to serve it instantly next time. 
<STRONG>Prompt Caching is different.</STRONG> It caches the <STRONG>pre-computation of your input context</STRONG> (the Key-Value states of the attention mechanism). It essentially "pre-loads" the model's brain with your long documents, codebases, or instructions.</FONT></P><P><FONT face="arial,helvetica,sans-serif">However, the <STRONG>generation</STRONG> of the response is still calculated fresh, token by token. This brings us to the second major realization—why your output changes even when the prompt doesn't.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>The "Temperature=0" Trap: Why It’s Still Not Deterministic</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">You might think, <EM>"If the prompt is cached and I set Temperature to 0, shouldn't the output be identical every time?"</EM></FONT></P><P><FONT face="arial,helvetica,sans-serif">The short answer: No.</FONT></P><P><FONT face="arial,helvetica,sans-serif">The technical answer: Modern LLMs are only "mostly" deterministic, even at Temperature=0.</FONT></P><P><FONT face="arial,helvetica,sans-serif">While setting the temperature to 0 forces <STRONG>greedy decoding</STRONG> (always picking the highest-probability next token), real-world infrastructure introduces subtle variations:</FONT></P><OL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Mixture-of-Experts (MoE) Architecture:</STRONG> Models like GPT-4 are believed to use a "Sparse MoE" design. Different "expert" subnetworks handle different parts of your input. Parallel routing and slight gating differences can introduce non-determinism in the path your data takes.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>GPU "Fuzziness":</STRONG> Modern GPUs perform massive parallel operations. Because floating-point arithmetic is non-associative, the order of operations can shift slightly based on thread scheduling. 
A microscopic rounding error can flip a token choice when two words have nearly identical probabilities.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>System Variance:</STRONG> Your request might hit a different backend instance or hardware generation (indicated by the system_fingerprint in OpenAI responses).</FONT></LI></OL><P><FONT face="arial,helvetica,sans-serif"><STRONG>The Takeaway:</STRONG> Even with a cached prompt and zero temperature, expect minor "drift" in your outputs. The prompt cache speeds up the <EM>input</EM> processing, but the <EM>output</EM> generation remains subject to the complex physics of AI hardware.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>How It Works: Implicit vs. Explicit</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">Different vendors handle this differently. If you are using models via SAP Generative AI Hub, it's crucial to know how the underlying models behave.</FONT></P><P><FONT size="4"><STRONG><FONT face="arial,helvetica,sans-serif">Implicit Caching (The "It Just Works" Approach)</FONT></STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Vendors:</STRONG> OpenAI (GPT-4.1), Google (Gemini)</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Mechanism:</STRONG> You don't need to change your code. 
If the API detects that the first 1,024+ tokens of your prompt match a previous request sent recently, it automatically uses the cached processing.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Pros:</STRONG> Zero developer effort.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Cons:</STRONG> Less control over what stays in the cache.</FONT></LI></UL><P><FONT size="4"><STRONG><FONT face="arial,helvetica,sans-serif">Explicit Caching (The "Precision" Approach)</FONT></STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Vendor:</STRONG> Anthropic (Claude 3.5 Sonnet/Haiku and later)</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Mechanism:</STRONG> You must explicitly tell the API where to stop caching. You insert a cache_control parameter at a specific "checkpoint" in your message history.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Pros:</STRONG> You guarantee that your heavy context (like a full ABAP codebase or API specification) is cached.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Cons:</STRONG> Requires a small code change.</FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>A Common Practice for Agentic AI & Code Gen</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">In my code generation use case, I send the entire project context (libraries, style guides, and current code) with every request. 
Without caching, this is slow and expensive.</FONT></P><P><FONT face="arial,helvetica,sans-serif">The Strategy:</FONT></P><P><FONT face="arial,helvetica,sans-serif">Organize your prompt so the static content is at the top.</FONT></P><OL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>System Prompt</STRONG> (Static: "You are an ABAP Expert...")</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Context/Documentation</STRONG> (Static: "Here is the SAP Cloud SDK documentation...") -> <STRONG>[CACHE CHECKPOINT]</STRONG></FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>User Query</STRONG> (Dynamic: "Write a method to fetch Business Partners.")</FONT></LI></OL><P><FONT face="arial,helvetica,sans-serif">By placing the checkpoint after the documentation, the model "reads" the docs once. For every subsequent user query, it skips the heavy lifting and jumps straight to generating code.</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Verifying the Speed Up: Analyze Your Logs</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">How do you know it's working? 
You won't see "same output," but you will see "faster time-to-first-token."</FONT></P><P><FONT face="arial,helvetica,sans-serif">Look at the usage metadata in your API response.</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>OpenAI:</STRONG> Look for cached_tokens in the usage object.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Anthropic:</STRONG> Look for cache_read_input_tokens.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Google Gemini:</STRONG> Look for cachedContentTokenCount.</FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif"><STRONG>Example Analysis:</STRONG></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Request 1 (Cold):</STRONG> Input Tokens: 10,000 | Processing Time: 4.5s</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Request 2 (Warm):</STRONG> Input Tokens: 10,000 (9,900 Cached!) | Processing Time: <STRONG>0.8s</STRONG></FONT></LI></UL><P><STRONG>Why could this be a game-changer?</STRONG></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Latency:</STRONG> The model doesn't have to "read" your 5,000-line code file every time. It "remembers" the reading process.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Cost:</STRONG> You pay significantly less (often ~90% less) for the cached tokens.</FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif">If you see the processing time drop drastically while the output remains high-quality (and slightly varied), you have successfully implemented Prompt Caching.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>My Personal Inference: The "Bad Roll" & The Butterfly Effect</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">During my testing, I encountered a frustrating scenario that I believe many developers will face.</FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>The Scenario:</STRONG> I used OpenAI for code generation. 
When the prompt was cached (no changes), the code quality was consistently "not perfect"—it had minor bugs or incomplete logic.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>The Fix:</STRONG> When I added just <STRONG>one single word</STRONG> to the prompt, the code generation suddenly became perfect.</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>The Contrast:</STRONG> Switching to Anthropic (without caching) gave me complete code every time.</FONT></LI></UL><P><FONT face="arial,helvetica,sans-serif"><STRONG>Why did adding one word fix the OpenAI output?</STRONG> This is what I call the <STRONG>"Deterministic Trap."</STRONG> When you use a cached prompt with a low temperature (which we usually do for code), the model is mathematically locked into a specific "reasoning path." If the model generates a suboptimal solution (a "bad roll") on the first try, the cached state ensures it starts from the <EM>exact same mathematical position</EM> next time. It essentially "remembers" the path to the bad answer.</FONT></P><P><FONT face="arial,helvetica,sans-serif">By adding a single word, I forced a <STRONG>"Cache Miss"</STRONG> (or at least a perturbation in the attention mechanism). This acted like the "Butterfly Effect"—it shifted the token probabilities just enough to force the model to calculate a fresh path, allowing it to escape the "bad roll" and find the correct solution.</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>The Lesson:</STRONG> Prompt Caching is powerful for speed, but if you notice your model is "stuck" giving you the same bad code repeatedly, don't just retry. 
<STRONG>Change the prompt.</STRONG> Even a single extra adjective can shake the model out of a local minimum and produce the perfect result.</FONT></P><P><FONT face="arial,helvetica,sans-serif" size="5"><STRONG>Conclusion</STRONG></FONT></P><P><FONT face="arial,helvetica,sans-serif">LLMs are stateless, but Prompt Caching gives them a "short-term memory" for processing inputs. While it won't force your outputs to be bit-for-bit identical (due to MoE and GPU nuances), it <STRONG>will</STRONG> make your applications significantly faster and cheaper to run.</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>Next Step:</STRONG> Check your current API logs. If you are sending long contexts (>1024 tokens) repeatedly, you might already be benefiting from implicit caching, or you may need to add cache_control headers for Anthropic models to unlock these savings.</FONT></P><P><FONT face="arial,helvetica,sans-serif"><STRONG>References & Further Reading</STRONG></FONT></P><UL><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Anthropic Prompt Caching:</STRONG> (<A href="https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching" target="_blank" rel="noopener nofollow noreferrer">https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching</A>)</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>OpenAI Prompt Caching:</STRONG> (<A href="https://platform.openai.com/docs/guides/prompt-caching" target="_blank" rel="noopener nofollow noreferrer">https://platform.openai.com/docs/guides/prompt-caching</A>)</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>Google Gemini Context Caching:</STRONG> (<A href="https://ai.google.dev/gemini-api/docs/caching" target="_blank" rel="noopener nofollow noreferrer">https://ai.google.dev/gemini-api/docs/caching</A>)</FONT></LI><LI><FONT face="arial,helvetica,sans-serif"><STRONG>SAP Generative AI Hub:</STRONG> (<A 
href="https://help.sap.com/docs/sap-ai-core/generative-ai-hub" target="_blank" rel="noopener nofollow noreferrer">https://help.sap.com/docs/sap-ai-core/generative-ai-hub</A>)</FONT></LI></UL>2025-12-08T08:17:36.779000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/leveraging-sap-ai-core-amp-ai-launchpad-a-code-driven-comparison-of/ba-p/14273163Leveraging SAP AI Core & AI Launchpad: A Code-Driven Comparison of LangChain and LangGraph2025-12-09T07:50:47.232000+01:00Rithikahttps://community.sap.com/t5/user/viewprofilepage/user-id/1400195<P>As AI development evolves, we see multiple frameworks emerge - LangChain, LangGraph, Agent Frameworks, RAG frameworks, and more. With so many choices, one common question developers face is: <STRONG>“Which framework should I use for my project?”</STRONG></P><P>The short answer: <STRONG>both LangChain and LangGraph are useful, but for different types of workflows.</STRONG></P><P>Think of them as part of the same family:</P><UL><LI><P><STRONG>LangChain -> the foundation</STRONG></P></LI><LI><P><STRONG>LangGraph -> the orchestration layer built on top of LangChain</STRONG></P></LI></UL><P><STRONG>LangChain - The building block</STRONG></P><P>As the name suggests, LangChain works like a <EM>chain</EM>: it follows a sequential, linear flow of tasks. 
Along with chaining steps, you can also integrate tools and other components during execution, making it flexible for straightforward LLM workflows.</P><P>What can you do with it?</P><UL><LI><STRONG>Integrate Models</STRONG> (LLMs, embeddings)</LI><LI><STRONG>Integrate Tools</STRONG> (search, calculators, APIs)</LI><LI><STRONG>Add Memory</STRONG> (conversation history)</LI><LI><STRONG>Build Chains</STRONG> (a sequence of prompts or tasks)</LI></UL><P><STRONG>For example:</STRONG></P><P>You can use it for a simple workflow like the one below to summarize tickets:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Rithika_1-1764504237785.png" style="width: 513px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/346754i9B535BBF00D1F8C6/image-dimensions/513x75?v=v2" width="513" height="75" role="button" title="Rithika_1-1764504237785.png" alt="Rithika_1-1764504237785.png" /></span></P><P>Using LangChain makes sense here since:</P><UL><LI>There are <STRONG>no decisions</STRONG></LI><LI>No human approval step is needed</LI><LI>No branching or looping is required</LI><LI>The pipeline is <STRONG>fixed</STRONG></LI><LI>The steps are <STRONG>linear</STRONG></LI></UL><P><STRONG>Now let's experiment with this in code:</STRONG></P><P><STRONG>Prerequisites</STRONG></P><UL><LI><A href="https://developers.sap.com/tutorials/ai-core-launchpad-provisioning.html" target="_blank" rel="noopener noreferrer">Access to <STRONG>SAP AI Core</STRONG>.</A></LI><LI><STRONG><A href="https://developers.sap.com/tutorials/ai-core-generative-ai.html#6c4a539e-2bdf-4ddb-97a0-0f8d0f1bd00e" target="_blank" rel="noopener noreferrer">A registered LLM deployment available in your AI Core tenant.</A></STRONG></LI><LI>Download and install VS Code and Python</LI><LI>Set up a virtual environment</LI><LI><STRONG>Install sap-ai-sdk-gen, langchain and langgraph 
</STRONG>within this venv to isolate the dependencies</LI><LI>Configure SAP AI Core as described in <A href="https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/initial-setup?locale=en-us" target="_blank" rel="noopener noreferrer">Initial Setup</A><SPAN> </SPAN><P>With your service key, follow the configuration section in this <A href="https://help.sap.com/doc/generative-ai-hub-sdk/CLOUD/en-US/_reference/README_sphynx.html#configuration" target="_blank" rel="noopener noreferrer">Documentation</A> to set the following environment variables:</P><UL><LI><STRONG>AICORE_CLIENT_ID:</STRONG> This represents the client ID.</LI><LI><STRONG>AICORE_CLIENT_SECRET:</STRONG> This stands for the client secret.</LI><LI><STRONG>AICORE_AUTH_URL:</STRONG> This is the URL used to retrieve a token using the client ID and secret.</LI><LI><STRONG>AICORE_BASE_URL:</STRONG> This is the URL of the service (with suffix /v2).</LI><LI><STRONG>AICORE_RESOURCE_GROUP:</STRONG> This represents the resource group that should be used (<EM>the standard resource group is "default"</EM>).</LI></UL></LI></UL><P>Step 1) Import Required Packages</P><pre class="lia-code-sample language-python"><code>from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
from langchain_core.prompts import PromptTemplate</code></pre><P>Step 2) Initialize the Model through SAP GenAI Hub </P><P><CODE>proxy_client</CODE> ensures authentication and routing of calls.</P><pre class="lia-code-sample language-python"><code>model = ChatOpenAI(proxy_model_name="gpt-5", proxy_client=get_proxy_client())</code></pre><P>Step 3) Create a Prompt to Classify the Issue</P><UL><LI><P><CODE>ticket_prompt</CODE> defines a template with a variable placeholder <CODE>{ticket}</CODE>.</P></LI><LI><P>The model will read the ticket text and respond with <STRONG>one category</STRONG>.</P></LI></UL><P> </P><pre class="lia-code-sample language-python"><code>ticket_prompt = PromptTemplate(
    input_variables=["ticket"],
    template="Classify this SAP support ticket into one category: "
             "Performance, Integration, Authorization, UI, ABAP Error.\n\nTicket: {ticket}"
)</code></pre><P>Step 4) Create a Second Prompt for Severity Decision</P><UL><LI><P>This uses the model's previous output (<CODE>classification</CODE>) as input.</P></LI><LI><P>Purpose: Determine if escalation is required.</P></LI><LI><P>Expected outputs: <CODE>"Yes"</CODE> or <CODE>"No"</CODE>.</P></LI></UL><pre class="lia-code-sample language-python"><code>severity_prompt = PromptTemplate(
    input_variables=["classification"],
    template="Based on the classification '{classification}', decide if escalation is needed. Respond with Yes or No."
)</code></pre><P>Step 5) Create Chains Using LCEL (LangChain Expression Language)</P><UL><LI><P><CODE>|</CODE> (pipe operator) connects prompt -> model like a flow.</P></LI><LI>Meaning - <EM>When this chain runs, take prompt -> insert input -> call model -> give output.</EM></LI></UL><pre class="lia-code-sample language-python"><code>classification_chain = ticket_prompt | model
severity_chain = severity_prompt | model</code></pre><P>Step 6) Run Classification</P><pre class="lia-code-sample language-python"><code>ticket = "The user is facing issues while logging in to SAP S/4HANA Public Cloud with their I-User ID"
classification_result = classification_chain.invoke({"ticket": ticket})
classification = classification_result.content
print(f"Classification: {classification}")
# Check if escalation is needed
escalation_result = severity_chain.invoke({"classification": classification})
print(f"Escalation Required: {escalation_result.content}")</code></pre><P><STRONG>And your output would look something like this - </STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Rithika_0-1764233334260.png" style="width: 608px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/345760iE9652CD4128AD666/image-dimensions/608x38/is-moderation-mode/true?v=v2" width="608" height="38" role="button" title="Rithika_0-1764233334260.png" alt="Rithika_0-1764233334260.png" /></span></P><P><STRONG>But What if the workflow isn't linear??</STRONG></P><P><STRONG>Let’s extend the same scenario:</STRONG></P><P>Once a support ticket arrives, the system first extracts and classifies the issue type (e.g., <EM>Authorization, Performance, UI, Integration, ABAP Error</EM>). After classification, the model’s confidence score is evaluated.</P><UL><LI><P>If the LLM is confident enough, the system proceeds directly to the escalation decision.</P></LI><LI><P>If confidence is low, the workflow pauses and sends the ticket to a human agent for review.<BR />The agent may either approve the model’s decision or correct it and if corrected, the classification step is repeated.</P></LI></UL><P>Once a final classification is confirmed, the system evaluates the severity level:</P><UL><LI><P>If severity is <STRONG>high</STRONG>, the ticket is escalated and a notification email is automatically triggered.</P></LI><LI><P>If the severity is <STRONG>low or moderate</STRONG>, the workflow ends without escalation.</P></LI></UL><P>Now the workflow includes:</P><P>- loops<BR />- human-in-loop<BR />- branching logic<BR />- retry mechanisms</P><P>This is where <STRONG>LangChain becomes restrictive</STRONG><SPAN> <STRONG>and</STRONG></SPAN><STRONG> LangGraph comes into picture -</STRONG></P><P>LangGraph is built <STRONG>on top of LangChain</STRONG>, but instead of linear chains, it uses a graph-style architecture.<SPAN class=""><SPAN class=""> Instead 
of a straight pipeline, you get a flexible architecture where your AI application can branch, loop, make decisions, or even include human approvals.</SPAN></SPAN><SPAN class=""> </SPAN></P><P><STRONG>It turns your workflow from:</STRONG></P><P>Step1 -> Step2 -> Step3</P><P><STRONG>into more of a graph:</STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Rithika_3-1764505811914.png" style="width: 988px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/346768i6EEF172BA2F955C8/image-dimensions/988x178?v=v2" width="988" height="178" role="button" title="Rithika_3-1764505811914.png" alt="Rithika_3-1764505811914.png" /></span></P><P><EM>Let's look at the code now.</EM></P><pre class="lia-code-sample language-python"><code>from typing import TypedDict, Optional
from langgraph.graph import StateGraph, END
from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client
from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
from langchain_core.prompts import PromptTemplate
# --------------------------------------------------
# 1) Connect to SAP AI Core Model via SAP GenAI Hub
# --------------------------------------------------
# This model will be used in all decision steps (classification, confidence check, escalation decision)
model = ChatOpenAI(
    proxy_model_name="gpt-5",
    proxy_client=get_proxy_client()
)
# --------------------------------------------------
# 2) Define Prompts for Each AI Task
# --------------------------------------------------
# Prompt for classifying the incoming support ticket into a predefined category.
classification_prompt = PromptTemplate(
    input_variables=["ticket"],
    template="""
You are an SAP support assistant.
Classify the following SAP support ticket into ONE category only:
Performance, Integration, Authorization, UI, ABAP Error.
Ticket: {ticket}
"""
)
# Prompt to check how confident the model is in its classification.
confidence_prompt = PromptTemplate(
    input_variables=["classification"],
    template="""
Evaluate your confidence in the following classification:
"{classification}"
Respond only with: High or Low
"""
)
# Prompt to decide whether escalation is needed based on classification.
severity_prompt = PromptTemplate(
    input_variables=["classification"],
    template="""
Based on the issue classification "{classification}", decide if this should be escalated to Level 2 SAP Support.
Respond exactly with Yes or No.
"""
)
# Connect prompts to the SAP AI model.
classification_chain = classification_prompt | model
confidence_chain = confidence_prompt | model
severity_chain = severity_prompt | model
# --------------------------------------------------
# 3) Define the State Shape for the Workflow
# --------------------------------------------------
# This holds all data as the workflow progresses.
class TicketState(TypedDict, total=False):
    ticket: str
    classification: Optional[str]
    confidence: Optional[str]  # "high" / "low"
    severity: Optional[str]  # "high" / "low/moderate"
    retry_count: int  # Counts human corrections
    escalated: bool
    email_sent: bool
# --------------------------------------------------
# 4) Workflow Node Functions
# --------------------------------------------------
# Each function represents a step in the process.
# Step 1: Classify the ticket using the LLM.
def classify(state: TicketState) -> TicketState:
    response = classification_chain.invoke({"ticket": state["ticket"]})
    state["classification"] = response.content.strip()
    print(f"Classification → {state['classification']}")
    return state
# Step 2: Check how confident the model is.
def check_confidence(state: TicketState) -> TicketState:
    # If a human already corrected the classification,
    # we trust the human and override confidence to HIGH.
    if state.get("retry_count", 0) > 0:
        state["confidence"] = "high"
        print("Confidence overridden → high (human validated)")
        return state
    # Otherwise, let the AI evaluate confidence.
    response = confidence_chain.invoke({"classification": state["classification"]})
    state["confidence"] = response.content.strip().lower()
    print(f"Model Confidence → {state['confidence']}")
    return state
# Step 3: Human-in-the-loop step if confidence was low.
def human_review(state: TicketState) -> TicketState:
    print("HUMAN-IN-THE-LOOP REQUIRED:")
    print(f"Suggested classification: {state['classification']}")
    # Human may approve or correct the classification.
    updated = input("Enter correct classification (or press enter to approve): ").strip()
    if updated:
        print(f"Human corrected classification → {updated}")
        state["classification"] = updated
    # Increase retry count so system knows review already happened.
    state["retry_count"] = state.get("retry_count", 0) + 1
    return state
# Step 4: Decide whether the issue needs escalation.
def decide_escalation(state: TicketState) -> TicketState:
    response = severity_chain.invoke({"classification": state["classification"]})
    decision = response.content.strip().lower()
    # Convert response into workflow logic format.
    state["severity"] = "high" if decision == "yes" else "low/moderate"
    print(f"Escalation Decision → Severity: {state['severity']}")
    return state
# Step 5: If severity is high, escalate the ticket.
def escalate(state: TicketState) -> TicketState:
    print("Escalating ticket to Level 2 support...")
    state["escalated"] = True
    return state
# Step 6: After escalation, send an email notification.
def send_mail(state: TicketState) -> TicketState:
    print("Sending escalation email...")
    state["email_sent"] = True
    return state
# Step 7: End the workflow.
def end(state: TicketState) -> TicketState:
    print("Workflow Complete.")
    return state
# --------------------------------------------------
# 5) Build LangGraph Workflow
# --------------------------------------------------
graph = StateGraph(TicketState)
graph.add_node("classify", classify)
graph.add_node("confidence", check_confidence)
graph.add_node("human_review", human_review)
graph.add_node("decide_escalation", decide_escalation)
graph.add_node("escalate", escalate)
graph.add_node("send_mail", send_mail)
graph.add_node("end", end)
# The workflow always begins with classification.
graph.set_entry_point("classify")
# After classifying, always check confidence.
graph.add_edge("classify", "confidence")
# Route: Confidence determines next step.
def route_confidence(state: TicketState):
    return "decide_escalation" if state["confidence"] == "high" else "human_review"
graph.add_conditional_edges(
    "confidence",
    route_confidence,
    {
        "decide_escalation": "decide_escalation",
        "human_review": "human_review"
    }
)
# If human corrected, return to classification to retry.
graph.add_edge("human_review", "classify")
# Route after escalation decision.
def route_severity(state: TicketState):
    return "escalate" if state["severity"] == "high" else "end"
graph.add_conditional_edges(
    "decide_escalation",
    route_severity,
    {
        "escalate": "escalate",
        "end": "end"
    }
)
# Escalation always triggers an email, then finish.
graph.add_edge("escalate", "send_mail")
graph.add_edge("send_mail", END)
# Give the non-escalation path a terminus as well; without an
# outgoing edge the "end" node would be a dead end in the graph.
graph.add_edge("end", END)
# Compile workflow
app = graph.compile()
# --------------------------------------------------
# 6) Run Workflow
# --------------------------------------------------
initial_state: TicketState = {
    "ticket": "User cannot log in using I-User ID in SAP S/4HANA Public Cloud.",
    "retry_count": 0
}
final = app.invoke(initial_state)
print("\nFinal State:", final)</code></pre><P>This is what your output will look like -</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Rithika_0-1764509911294.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/346774i11899926421B7861/image-size/large?v=v2&px=999" role="button" title="Rithika_0-1764509911294.png" alt="Rithika_0-1764509911294.png" /></span></P><P><STRONG>Conclusion:</STRONG></P><P><STRONG><SPAN>When you’re deciding between LangChain and LangGraph for your SAP AI workflows, ask yourself the following:</SPAN></STRONG><SPAN> </SPAN></P><P><STRONG><SPAN>How complex is my workflow?</SPAN></STRONG><SPAN> </SPAN></P><UL><LI><SPAN>Linear and predictable? Use </SPAN><STRONG><SPAN>LangChain</SPAN></STRONG><SPAN>.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>Dynamic or with branching logic? Go for </SPAN><STRONG><SPAN>LangGraph</SPAN></STRONG><SPAN>.</SPAN><SPAN> </SPAN></LI></UL><P><STRONG><SPAN>Do I need to persist state or approvals?</SPAN></STRONG><SPAN> </SPAN></P><UL><LI><SPAN>Short-lived, single-session tasks (like a Q&A chatbot)? </SPAN><STRONG><SPAN>LangChain</SPAN></STRONG><SPAN>.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>Long-running, multi-step processes (like procurement)? 
</SPAN><STRONG><SPAN>LangGraph</SPAN></STRONG><SPAN>.</SPAN><SPAN> </SPAN></LI></UL><P><STRONG><SPAN>What’s my optimization goal - speed or control?</SPAN></STRONG><SPAN> </SPAN></P><UL><LI><SPAN>For rapid prototypes or proof-of-concepts, </SPAN><STRONG><SPAN>LangChain</SPAN></STRONG><SPAN> is your best friend.</SPAN><SPAN> </SPAN></LI></UL><UL><LI><SPAN>For production-ready, auditable, and orchestrated AI apps, </SPAN><STRONG><SPAN>LangGraph</SPAN></STRONG><SPAN> provides the structure you need.</SPAN><SPAN> </SPAN></LI></UL><P><SPAN>Start with what’s needed, keep it maintainable, and scale when your workflow truly calls for it.</SPAN><SPAN> <BR /></SPAN><SPAN>LangChain and LangGraph aren’t competitors, but rather act as </SPAN><STRONG><SPAN>complementary</SPAN></STRONG><SPAN> tools in our AI projects.</SPAN><SPAN> </SPAN></P><P><SPAN>Happy Building and may your next AI project be both smart and </SPAN><STRONG><SPAN>sustainable</SPAN></STRONG><SPAN>.</SPAN></P><P><STRONG><SPAN>Additional hands-on:</SPAN></STRONG><SPAN> </SPAN></P><P><A href="http://community.sap.com/t5/artificial-intelligence-blogs-posts/hands-on-tutorial-building-an-ai-agent-with-human-in-the-loop-control/ba-p/14050267" target="_blank"><SPAN>http://community.sap.com/t5/artificial-intelligence-blogs-posts/hands-on-tutorial-building-an-ai-agent-with-human-in-the-loop-control/ba-p/14050267</SPAN></A><SPAN> </SPAN></P><P><SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/the-power-of-ai-agents-my-journey-with-langgraph/ba-p/14032724" target="_blank">https://community.sap.com/t5/technology-blog-posts-by-sap/the-power-of-ai-agents-my-journey-with-langgraph/ba-p/14032724</A></SPAN><SPAN> </SPAN></P>2025-12-09T07:50:47.232000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/contribution-margin-forecast-with-sap-business-data-cloud/ba-p/14261075Contribution Margin Forecast with 
SAP Business Data Cloud2025-12-10T12:07:09.187000+01:00TobyKhttps://community.sap.com/t5/user/viewprofilepage/user-id/1517900<P><STRONG>MANAGEMENT SUMMARY</STRONG></P><P>This blog demonstrates how SAP Business Data Cloud (BDC) can be used to implement a contribution margin forecast, which serves companies as a substantial controlling tool to steer financial results in the short- and medium-term. Combining SAC Planning with Datasphere (“Seamless Planning”) allows for (near) real-time usage of actual data from SAP BW to extrapolate the variable cost component of the contribution margin, while SAC Planning is used to forecast the revenue component and report on the resulting KPIs. The combination of the actual and planning data happens without the need for redundant data replication. Furthermore, this blog will briefly touch on how BDC can be leveraged to enhance this use case through a comprehensive path for BW modernization, the re-use of SAP-managed data products, and the application of advanced machine learning methods for forecasting.</P><P><STRONG>RECAP</STRONG></P><P>Before reading further, it is recommended to familiarize yourself with the following preceding blog posts to properly understand the fundamentals: </P><UL><LI><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/the-value-of-sap-business-data-cloud-bdc-in-the-context-of-business/ba-p/14165827" target="_blank">The Value of SAP Business Data Cloud (BDC) in The Context of Business Steering</A> puts this blog post into the perspective of a comprehensive blog series that aims to demonstrate the business value add of SAP BDC.</LI><LI><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/real-time-steering-with-live-planning/ba-p/14280932" target="_blank">Real-time steering with LIVE planning</A> uncovers the "Seamless Planning" paradigm (which is the key enabler for the use case described below) and illustrates a practical end-to-end use case that leverages External Live Versions, our latest 
innovation in context of Seamless Planning.</LI></UL><P><STRONG>MOTIVATION AND USE CASE</STRONG></P><P>The motivation of this blog is to demonstrate a <STRONG>tangible implementation example for how BDC can be leveraged to enable effective and efficient business steering</STRONG>. The demonstrated use case is a contribution margin forecast which estimates future revenues and variable costs, calculating expected contribution margins over time. The contribution margin is particularly relevant for financial steering because fixed costs are, by definition, largely unchangeable in the short- and medium-term. Therefore, if a company aspires to impact financial results, management’s primary focus and opportunity for action resides in maximizing the contribution margin. In essence, the contribution margin is calculated by subtracting variable costs from revenues, with the remainder – that is, the contribution margin – available to cover fixed costs. The levers that management can actively influence on a regular basis are:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture 1: Calculation schema for Contribution Margin" style="width: 575px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/336464iB7F64E9B7F1ABF22/image-size/large?v=v2&px=999" role="button" title="TobyK_0-1762340768084.png" alt="Picture 1: Calculation schema for Contribution Margin" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Picture 1: Calculation schema for Contribution Margin</span></span></P><P><SPAN>Such a contribution margin forecast can be implemented in an innovative fashion leveraging the Seamless Planning capability within BDC as explained in the following paragraph.</SPAN></P><P><STRONG>IMPLEMENTATION </STRONG></P><P>Considering the above-described contribution margin levers, we are using the following <STRONG>BDC components</STRONG> for the corresponding implementation.</P><P>A <STRONG>SAC Story on top of a 
SAC Planning model</STRONG> is used to enter sales volume forecast and pricing assumptions <STRONG><FONT color="#3366FF">(1)</FONT></STRONG>. Upon publishing <STRONG><FONT color="#3366FF">(2)</FONT></STRONG> the forecasted values (volumes, prices) are directly persisted within Datasphere (“Seamless Planning”).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture 2: SAC frontend for revenue forecast" style="width: 924px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/336412i58FD3240F71625B0/image-size/large?v=v2&px=999" role="button" title="TobyK_0-1762334493243.png" alt="Picture 2: SAC frontend for revenue forecast" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Picture 2: SAC frontend for revenue forecast</span></span><STRONG>Datasphere</STRONG> is used to acquire and model total actual variable costs and sales volumes based on SAP BW in our example <STRONG><FONT color="#3366FF">(3) </FONT></STRONG>(or any other compatible and relevant source system). Note that actual sales volumes are acquired to allow the calculation of volume-weighted actual variable costs per unit in a later step. 
A union view <STRONG><FONT color="#3366FF">(4)</FONT></STRONG> within Datasphere combines the forecasted sales volumes and prices (as provided via the SAC Planning frontend) with the actual sales volumes and variable costs (as acquired from SAP BW).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture 3: Data lineage within Datasphere" style="width: 916px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/336413iABC9A258B46C489D/image-size/large?v=v2&px=999" role="button" title="TobyK_0-1762334658809.png" alt="Picture 3: Data lineage within Datasphere" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Picture 3: Data lineage within Datasphere</span></span></P><P><SPAN>In an </SPAN><STRONG>Analytic Model <FONT color="#3366FF">(5)</FONT> </STRONG><SPAN>on top of the union view the relevant KPIs consisting of both forecast and actual data are calculated:</SPAN></P><P><STRONG>Forecasted revenues: </STRONG>Restricted measures filter on forecasted volumes and prices originating from the SAC planning model. A calculated measure multiplies these two measures to derive the forecasted revenue [forecasted revenues = forecasted sales volumes * price assumption].</P><P><STRONG>Extrapolated variable costs: </STRONG>Restricted measures filter on actual volumes and total manufacturing costs originating from SAP BW. These measures are cumulated over the actual time horizon as selected per prompt <FONT color="#3366FF"><STRONG>(6). </STRONG></FONT>A calculated measure divides cumulated costs by cumulated volumes to derive the volume-weighted average costs per unit [weighted-average costs per unit = cumulated actual costs/cumulated actual sales volumes]. 
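<P>As a plain-code illustration of the measure logic (the numbers below are invented for the example; in the actual solution these calculations live as restricted and calculated measures in the Analytic Model):</P><pre class="lia-code-sample language-python"><code># Invented sample data: three actual months from SAP BW,
# two forecast months entered via SAC Planning.
actual_costs = [120_000, 130_000, 125_000]   # total variable costs per actual month
actual_volumes = [10_000, 11_000, 10_500]    # sold units per actual month

# Weighted-average costs per unit = cumulated actual costs / cumulated actual sales volumes
unit_cost = sum(actual_costs) / sum(actual_volumes)

forecast_volumes = [12_000, 12_500]          # sales volume forecast
price_assumption = 18.0                      # price assumption per unit

for volume in forecast_volumes:
    revenue = volume * price_assumption      # forecasted revenues
    variable_costs = volume * unit_cost      # extrapolated variable costs
    margin = revenue - variable_costs        # contribution margin forecast
    print(f"volume={volume}: revenue={revenue:,.0f}, contribution margin={margin:,.0f}")</code></pre><P>The loop mirrors the three calculated measures, while the weighted-average cost per unit is computed once from all cumulated actual months.</P>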
Note that this measure is kept constant over the time dimension to allow applicability to any forecast months (see below).</P><P><STRONG>Expected contribution margin: </STRONG>A calculated measure extrapolates the actual variable costs by multiplying the volume-weighted actual costs per unit (see above) with the forecasted sales volumes [extrapolated variable costs = weighted-average actual costs per unit * forecasted sales volumes]. A calculated measure subtracts extrapolated variable costs from forecasted revenues to finally derive the contribution margin [contribution margin forecast = forecasted revenues - extrapolated variable costs].</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture 4: Analytic Model within Datasphere" style="width: 566px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/336415i91F4B69FAC1BFCC6/image-size/large?v=v2&px=999" role="button" title="TobyK_1-1762334739895.png" alt="Picture 4: Analytic Model within Datasphere" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Picture 4: Analytic Model within Datasphere</span></span><SPAN> </SPAN>A <STRONG>SAC Story on top of the Analytic Model</STRONG> is used to steer the <STRONG>prompt for the weighted-average actual variable costs <FONT color="#3366FF">(6) </FONT></STRONG>and display the results -- that is, the weighted-average variable costs (left chart) and the derived contribution margin (right table) <STRONG><FONT color="#3366FF">(7)</FONT></STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture 5: SAC frontend for analysis" style="width: 937px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/336457i76B4E6F7CDA01DB2/image-size/large?v=v2&px=999" role="button" title="TobyK_1-1762339983942.png" alt="Picture 5: SAC frontend for analysis" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Picture 5: SAC frontend for 
analysis</span></span><SPAN> </SPAN>The following diagram summarizes the architecture and data flow at a glance.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture 6: Seamless Planning architecture" style="width: 831px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/336411iAA402A97BEFFE332/image-size/large?v=v2&px=999" role="button" title="TobyK_0-1762334318535.png" alt="Picture 6: Seamless Planning architecture" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Picture 6: Seamless Planning architecture</span></span></P><P><SPAN>It is now worthwhile looking at the unique highlights of the above-described setup. </SPAN>First, Seamless Planning within <STRONG>BDC combines "best of both worlds": SAC planning and reporting capabilities with Datasphere ETL and modeling capabilities.</STRONG> Second, SAC planning model entries are directly exposed within Datasphere upon publishing without the need for redundant data replication. Third, it would be generally feasible to access the actual variable costs virtually (as opposed to replicated), thus combining planning with actual data in real-time. However, note that regular snapshots (near real-time) of the actual data can be required for high-performant dashboards. Fourth, the prompt to dynamically steer the time range for the weighted-average actual variable costs allows end users to “smooth” the data over time (when choosing a longer range) versus emphasizing more recent months to reflect latest changes in cost structures etc. Lastly, given the <STRONG>completely automated retrieval of the variable costs part of the calculation</STRONG>, end users only need to manually update the expected sales volumes and price assumptions. 
This allows for a <STRONG>fast forecast process with a high frequency (e.g., monthly rolling forecast), which is highly relevant in today’s dynamic market environment.</STRONG></P><P><STRONG>POSSIBLE ENHANCEMENTS</STRONG></P><P>In the implementation setup described above, actual data is acquired from an SAP BW system via standard data acquisition means in Datasphere (remote tables, real-time replication, etc.). However, for SAP BW systems (both BW/4HANA and BW on HANA), BDC opens up the path for comprehensive <STRONG>BW modernization</STRONG> by means of the <STRONG>Data Product Generator</STRONG> (please see this <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/the-sap-bw-data-product-generator-for-sap-business-data-cloud/ba-p/14072413" target="_blank">blog</A> for further details).</P><P>Furthermore, the use case could be enriched with <STRONG>SAP-managed data products</STRONG> derived from an S/4 PCE system as part of a BDC formation. For instance, if the contribution margin forecast is to be integrated into a broader plan/actual reporting solution, the relevant actuals could be built upon SAP-managed data products, which serve as significant accelerators given that the complete data orchestration is handled by SAP. You could directly start modeling within Datasphere and combine the forecast data with the actual data based on the SAP-managed data products. A detailed blog discussing how you can leverage data products in the context of planning will follow.</P><P>Finally, for more <STRONG>advanced forecasting techniques</STRONG>, you could leverage <STRONG>SAP Databricks</STRONG> as part of the BDC formation. 
Imagine that, instead of the manually provided sales volume forecast in the example above, you generate such a forecast by means of statistical methods such as time series forecasting.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Picture 7: Enhancement options with BDC" style="width: 719px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/336424iA297857299AC64B5/image-size/large?v=v2&px=999" role="button" title="TobyK_3-1762334864181.png" alt="Picture 7: Enhancement options with BDC" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Picture 7: Enhancement options with BDC</span></span></P><P><STRONG>SUMMARY AND CALL TO ACTION</STRONG></P><P>A contribution margin forecast allows companies to proactively steer financial results and can therefore be considered a crucial controlling instrument. SAP BDC enables an innovative implementation approach with clear benefits such as rapid, high-frequency forecasts through automated calculation of variable costs and replication-free, (near) real-time combination of actual and planning data. Besides that, SAP BDC opens up the path for comprehensive BW modernization, thus allowing you to follow parallel streams: implementing innovative use cases while modernizing your SAP BW. Sounds like a plan? Please reach out to your SAP representative and register for your SAP BDC Discovery Workshop!</P>2025-12-10T12:07:09.187000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/mcp-server-for-sap-ecc-amp-s-4hana-unlimited-abap-add-on-for-display-create/ba-p/14293485MCP Server for SAP ECC & S/4HANA: Unlimited ABAP Add-On for Display, Create and Update Tools2025-12-18T15:53:02.178000+01:00Siarheihttps://community.sap.com/t5/user/viewprofilepage/user-id/84286<H3 id="toc-hId-1896472327">Introduction</H3><P>It’s time to simplify integration processes between Agentic SDKs and SAP ECC & S/4HANA systems. 
We’d like to introduce an ABAP add-on that brings an MCP Server directly into your SAP ERP system and supports SAP ABAP releases down to 7.01.</P><P>Sounds attractive? Then let’s take a look at the architecture.</P><H3 id="toc-hId-1699958822">Architecture</H3><P>The architecture is simple. We kept in mind that you may want to avoid middleware (such as SAP BTP applications or similar solutions) while retaining full control over:</P><UL><LI><P><STRONG>Tool descriptions</STRONG><BR />These are essential for AI orchestration–friendly tool definitions. Standard SAP descriptions of services, tables, or other ABAP objects are not sufficient to build scalable Business AI solutions.</P></LI><LI><P><STRONG>Data provisioning</STRONG><BR />We provide a simple mechanism that allows you to nominate Tables, Views, and CDS Views to provision data to an Agentic SDK.</P></LI><LI><P><STRONG>Create / Update / Delete tool onboarding</STRONG><BR />We offer an easy mechanism to develop ABAP classes for create, update, and delete actions. Each tool requires only one method for the action and one method for metadata description.</P></LI></UL><P>This is how you can integrate it into any Agentic SDK:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_MCP Server from ABAP_Pic01.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353679iA455EFCAC184B044/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI_MCP Server from ABAP_Pic01.png" alt="Skybuffer AI_MCP Server from ABAP_Pic01.png" /></span></P><P class="lia-align-center" style="text-align: center;"><EM>Pic. 1. MCP Server at the SAP backend system level</EM></P><H3 id="toc-hId-1503445317">What Does “Unlimited Display Tools” Mean?</H3><P>There is an ABAP view where you can onboard Tables, Views, and CDS Views for data provisioning. 
Each object is automatically converted into a tool by the MCP Server.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_MCP Server from ABAP_Pic02.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353680iB7ECCAAC6FF6A17F/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI_MCP Server from ABAP_Pic02.png" alt="Skybuffer AI_MCP Server from ABAP_Pic02.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM>Pic. 2. Nomination of Tables, Views, and CDS Views for data provisioning via MCP Server</EM></P><H3 id="toc-hId-1306931812">How Do You Develop Tools for Create / Update / Delete Actions?</H3><P>We provide an ABAP interface that helps you create an ABAP class for each tool.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_MCP Server from ABAP_Pic03.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353681i855489374583A62D/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI_MCP Server from ABAP_Pic03.png" alt="Skybuffer AI_MCP Server from ABAP_Pic03.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM>Pic. 3. MCP Server tool creation for Create / Update / Delete actions</EM></P><H3 id="toc-hId-1110418307">Really Any Agentic SDK?</H3><P>Yes, absolutely. 
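</P><P>Under the hood, MCP is JSON-RPC 2.0, so any client that can send JSON over HTTP can call the server. The following Python sketch shows what a tools/call request against such a server might look like; the tool name and parameter are invented for illustration:</P>

```python
import json

# A hypothetical MCP "tools/call" request (JSON-RPC 2.0) invoking a display tool
# that was auto-generated from a nominated table.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "display_purchase_order",      # invented tool name
        "arguments": {"ebeln": "4500000123"},  # invented parameter
    },
}

payload = json.dumps(request)
print(payload)
```

<P>Sent as an HTTP POST to the server endpoint (with proper authentication), such a payload would return a JSON-RPC result containing the tool output.</P><P>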
Here is an example of MCP Server consumption in ChatGPT.</P><P>The MCP Server can be connected either directly via an SICF node or through the SAP Cloud Connector.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_MCP Server from ABAP_Pic04.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353682iA56515660B62E4B0/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI_MCP Server from ABAP_Pic04.png" alt="Skybuffer AI_MCP Server from ABAP_Pic04.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM>Pic. 4. MCP Server SICF node</EM></P><P>In ChatGPT, you can connect the SAP MCP Server in developer mode:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_MCP Server from ABAP_Pic05.png" style="width: 413px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353683i2A458965D59E4AD7/image-dimensions/413x620?v=v2" width="413" height="620" role="button" title="Skybuffer AI_MCP Server from ABAP_Pic05.png" alt="Skybuffer AI_MCP Server from ABAP_Pic05.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM>Pic. 5. Connecting MCP Server in ChatGPT</EM></P><P>Once connected, you have direct access to SAP backend data from ChatGPT:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_MCP Server from ABAP_Pic06.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353685iBA89ACF55DE75C80/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI_MCP Server from ABAP_Pic06.png" alt="Skybuffer AI_MCP Server from ABAP_Pic06.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM>Pic. 6. 
Leave request creation from ChatGPT via Skybuffer AI MCP Server</EM></P><P><EM><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_MCP Server from ABAP_Pic07.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/353688i4ECC0FB47078BA73/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI_MCP Server from ABAP_Pic07.png" alt="Skybuffer AI_MCP Server from ABAP_Pic07.png" /></span></EM></P><P class="lia-align-center" style="text-align: center;"> <EM>Pic. 7. Purchase order data display from ChatGPT via Skybuffer AI MCP Server</EM></P><H3 id="toc-hId-913904802">Can Skybuffer AI MCP Server Work with SAP Joule?</H3><P>Yes, absolutely. You can connect the MCP Server directly to an SAP Joule tool in SAP Joule Studio.</P><H3 id="toc-hId-717391297">Conclusion</H3><P>You now have a powerful, scalable MCP Server operating directly at the ABAP layer. You can provide high-quality tool descriptions, fully control display tool provisioning, and easily onboard create, update, and delete tools without the need to build OData services on top of them.</P><P>The solution works with any Agentic SDK and is ready for enterprise deployment as an ABAP add-on.</P>2025-12-18T15:53:02.178000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/sap-joule-data-quality-kpis-auto-generates-and-measures-kpi-s-to-fix-broken/ba-p/14305262SAP Joule Data Quality KPIs: Auto-Generates and Measures KPI's to fix broken SAP Master Data Records2026-01-11T12:32:39.711000+01:00STALANKIhttps://community.sap.com/t5/user/viewprofilepage/user-id/13911<H2 id="toc-hId-1787764665">How SAP Joule can Turn Data Quality KPIs into Real Business Control</H2><P><FONT color="#FF0000"><SPAN>*Views expressed here are my own and don't represent any entity.</SPAN></FONT></P><P class="">Every SAP-driven organization knows this truth — even if it’s rarely said out loud:</P><BLOCKQUOTE><P class="">Most business risks don’t start 
with strategy.</P></BLOCKQUOTE><BLOCKQUOTE><P class="">They start with bad data.</P></BLOCKQUOTE><P class="">A missing tax code. A duplicate customer. An outdated BOM. An untraceable ESG metric. Individually, they look small. Collectively, they erode compliance, margins, and trust.</P><P class="">This is where SAP Joule changes the game — not by adding more dashboards, but by turning data quality into an intelligent, business-aware control system.</P><P class=""><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="joule.jpg" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/360270i69828FF8EBA37B8C/image-size/medium/is-moderation-mode/true?v=v2&px=400" role="button" title="joule.jpg" alt="joule.jpg" /></span></P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H2 id="toc-hId-1591251160">The Old World: Measuring Data Too Late</H2><P class="">In traditional SAP environments, data quality is:</P><UL class=""><LI><P class="">Measured after the fact</P></LI><LI><P class="">Reported at aggregate level</P></LI><LI><P class="">Owned by IT, not the business</P></LI></UL><P class="">By the time leadership sees a data quality issue, the impact has already landed:</P><UL class=""><LI><P class="">Regulatory rework</P></LI><LI><P class="">Production delays</P></LI><LI><P class="">Revenue leakage</P></LI><LI><P class="">Audit findings</P></LI></UL><P class="">Joule can flip this model by working at the level where risk is actually created — the SAP data object.</P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H2 id="toc-hId-1394737655">How Joule can Generate KPIs for Real SAP Objects (Not Abstract Metrics)?</H2><P class="">SAP landscapes are built on concrete objects:</P><UL class=""><LI><P class="">Material Master (MARA, MARC)</P></LI><LI><P class="">Business Partner (BUT000)</P></LI><LI><P class="">Bills of Material (STPO)</P></LI><LI><P class="">Pricing Conditions (KONV)</P></LI><LI><P class="">Excise & Customs 
data (SAP GTS)</P></LI><LI><P class="">ESG attributes (SAP EH&S)</P></LI></UL><P class="">Joule doesn’t need to guess what to measure — it reads how the business actually uses these objects.</P><H3 id="toc-hId-1327306869">Example 1: Material Master (SAP S/4HANA – PP/MM)</H3><P class=""><STRONG>The problem: </STRONG>A manufacturing site experiences frequent production stoppages due to incorrect or incomplete material data.</P><P class="">What Joule can do:</P><UL class=""><LI><P class="">Identifies which material fields are mandatory for production and quality</P></LI><LI><P class="">Generates KPIs such as:</P><UL class=""><LI><P class="">Material master completeness %</P></LI><LI><P class="">BOM consistency across plants</P></LI><LI><P class="">Production posting error rate</P></LI></UL></LI></UL><P class="">What leadership sees:</P><P class="">“Three material attributes are causing 80% of production rework in two factories.”</P><P class="">Not a metric — a decision trigger.</P><H2 id="toc-hId-1001710645">Enter SAP Joule: Data Quality That Thinks Like a Business Partner</H2><P class="">SAP Joule brings generative AI into the SAP stack — not as a chatbot, but as an intelligent orchestration layer across data, processes, and controls.</P><P class="">When applied to data quality, Joule can enable three game-changing capabilities:</P><UL class=""><LI><P class="">Automatic KPI generation for every SAP data object</P></LI><LI><P class="">Context-aware thresholds instead of one-size-fits-all rules</P></LI><LI><P class="">Business-language explanations of data risk</P></LI></UL><P class="">Let’s break this down.</P><H2 id="toc-hId-805197140">Generating Data Quality KPIs for Every SAP Data Object</H2><P class="">SAP systems are built on standard data objects. 
Traditionally, teams manually decide:</P><UL class=""><LI><P class="">Which fields matter</P></LI><LI><P class="">Which KPIs to track</P></LI><LI><P class="">Which thresholds apply</P></LI></UL><P class="">With Joule, this can become intelligent and automated.</P><H2 id="toc-hId-608683635">How Could Joule Do This in the Future?</H2><P class="">Joule analyzes:</P><UL class=""><LI><P class="">SAP object metadata (tables, fields, relationships)</P></LI><LI><P class="">Usage patterns (how often data is used downstream)</P></LI><LI><P class="">Process criticality (e.g., regulatory vs operational)</P></LI><LI><P class="">Historical data issues and corrections</P></LI></UL><P class="">From this, Joule can propose KPIs automatically, such as:</P><UL class=""><LI><P class="">Completeness (% of mandatory fields populated)</P></LI><LI><P class="">Accuracy (rule-based and cross-object consistency)</P></LI><LI><P class="">Duplication rates</P></LI><LI><P class="">Timeliness (data freshness vs SLA)</P></LI><LI><P class="">Conformity to country-specific rules</P></LI></UL><P class="">And it does this per object, not per report. It can also measure thresholds that actually reflect business risk. 
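</P><P class="">As a rough sketch of what such object-level KPIs boil down to, here is how completeness and a duplication rate could be computed over a few made-up business partner records (illustrative only, not Joule’s actual implementation):</P>

```python
records = [
    {"id": "BP001", "name": "Acme GmbH", "tax_code": "DE123", "email": "info@acme.example"},
    {"id": "BP002", "name": "Acme GmbH", "tax_code": "DE123", "email": None},  # duplicate key
    {"id": "BP003", "name": "Nordwind AG", "tax_code": None, "email": "sales@nordwind.example"},
]
mandatory = ["name", "tax_code", "email"]

# Completeness: share of mandatory fields populated across all records.
filled = sum(1 for r in records for f in mandatory if r[f])
completeness = filled / (len(records) * len(mandatory))

# Duplication rate: records sharing an already-seen (name, tax_code) key.
seen, dupes = set(), 0
for r in records:
    key = (r["name"], r["tax_code"])
    if key in seen:
        dupes += 1
    seen.add(key)
duplication_rate = dupes / len(records)

print(f"completeness={completeness:.0%} duplication={duplication_rate:.0%}")
```

<P class="">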
A 95% completeness score doesn’t mean the same thing for:</P><UL class=""><LI><P class="">ESG reporting</P></LI><LI><P class="">Excise tax determination</P></LI><LI><P class="">Internal planning data</P></LI></UL><P class="">This is where Joule goes beyond traditional data quality tools.</P><H2 id="toc-hId-412170130">Dynamic, risk-based thresholds</H2><P class="">Joule can:</P><UL class=""><LI><P class="">Adjust thresholds by market, plant, or regulation</P></LI><LI><P class="">Flag materiality (which data issues truly matter)</P></LI><LI><P class="">Learn from outcomes (e.g., audit findings, rework, penalties)</P></LI></UL><P class=""><STRONG>Example: </STRONG>A missing material attribute in a non-regulated market → warning</P><P class="">The same issue in a regulated market → critical breach</P><P class="">Instead of “red/amber/green” ratings and technical issue language, leaders see the business impact of each data quality issue.</P><DIV class=""><HR /><DIV class=""> </DIV></DIV><H2 id="toc-hId-215656625">Making Data Quality Understandable to Humans</H2><P class="">Data quality often fails because it’s explained in technical language.</P><P class="">Joule changes that from metrics to meaning. 
Rather than showing:</P><BLOCKQUOTE><P class="">“2.1% duplication rate in Business Partner data”</P></BLOCKQUOTE><P class="">Joule can explain:</P><BLOCKQUOTE><P class="">“Duplicate customer records are increasing invoice rework and delaying cash collection in two key markets.”</P></BLOCKQUOTE><P class="">This turns data governance into decision support, KPIs into actionable insight, and IT controls into business confidence.</P><H2 id="toc-hId-19143120">Continuous Monitoring, Not Periodic Audits</H2><P class="">With Joule integrated into SAP:</P><UL class=""><LI><P class="">KPIs update continuously</P></LI><LI><P class="">Threshold breaches trigger explanations, not just alerts</P></LI><LI><P class="">Data owners receive context, not noise</P></LI></UL><P class="">This supports:</P><UL class=""><LI><P class="">S/4HANA transformations</P></LI><LI><P class="">SOX and regulatory assurance</P></LI><LI><P class="">ESG credibility</P></LI></UL><P class="">Data quality becomes a living system, not a quarterly report.</P><H2 id="toc-hId-169883972">Why This Matters for Leadership</H2><P class="">For executives, the value proposition is straightforward and compelling:</P><UL class=""><LI><P class="">Mitigate Risk: Significantly reduces regulatory surprises and enables swift, proactive responses to emerging threats.</P></LI><LI><P class="">Optimise Operations: Eliminates operational friction, driving greater efficiency and smoother processes.</P></LI><LI><P class="">Bolster Trust: Cultivates unwavering confidence in critical business data and insights.</P></LI></UL><DIV class=""><HR /><DIV class=""> </DIV></DIV><P class="">In essence, SAP Joule transforms high-quality data from a desirable aspiration into a foundational, strategic operating capability.</P><P class="">This marks a profound shift in data quality management. We move beyond outdated practices of merely policing data fields, escalating issues in IT language, enforcing rigid rules, or assigning blame. 
Instead, Joule empowers organisations to:</P><UL class=""><LI><P class="">Understand Impact: Clearly articulate the business consequences of data issues.</P></LI><LI><P class="">Prioritise Strategically: Focus resources on what truly matters to the business.</P></LI><LI><P class="">Protect Value: Safeguard enterprise assets and drive growth at scale.</P></LI></UL><P class="">This isn't just about achieving better data quality; it's about establishing superior business control.</P>2026-01-11T12:32:39.711000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/why-vectorized-master-data-is-critical-for-ai-on-sap-ecc-and-s-4hana/ba-p/14305925Why Vectorized Master Data is Critical for AI on SAP ECC and S/4HANA2026-01-12T17:51:02.066000+01:00Siarheihttps://community.sap.com/t5/user/viewprofilepage/user-id/84286<H3 id="toc-hId-1916853990">Why Most AI Agents Fail with SAP Data</H3><P>AI Agents are rapidly becoming part of enterprise landscapes, and SAP ECC and S/4HANA are obvious targets for automation. However, what we see in the market today is concerning.</P><P>Most agentic solutions attempt to consume SAP data through <STRONG>custom coding</STRONG>. This approach creates three major problems:</P><OL><LI><P><STRONG>High development and maintenance costs</STRONG></P></LI><LI><P><STRONG>Multiple, inefficient calls to the SAP backend</STRONG></P></LI><LI><P><STRONG>Mass data extraction</STRONG>, where AI Agents pull <EM>entire tables</EM> just to find a few relevant records</P></LI></OL><P>In many of the solutions we reviewed, including agentic SDK integrations such as Joule, n8n, ChatGPT, and Claude, AI Agents frequently fail, time out, or request <STRONG>all records</STRONG> from SAP ERP systems with minimal filtering. 
This is inefficient, risky, and completely unnecessary.</P><P>At Skybuffer AI, we believe there is a better way.</P><H3 id="toc-hId-1720340485">The Foundation: Preparing SAP Data for AI Consumption</H3><P>Data preparation is the key to successful AI-driven automation. While topics like data silos and data quality are well known, they cannot be fully solved overnight. Instead of fighting this reality, Skybuffer AI introduces a simple and reliable data layer that AI Agents can consume efficiently.</P><P>Skybuffer AI uses <STRONG>SAP master data vectorization</STRONG> to enable fast, accurate access to SAP ECC and S/4HANA data from agentic workflows. Any SAP master data table, including views and CDS views, can be vectorized and materialized using Retrieval-Augmented Generation (RAG).</P><P>Key capabilities include:</P><UL><LI><P>Support for <STRONG>full and delta loads</STRONG></P></LI><LI><P><STRONG>Near real-time synchronization</STRONG> with SAP</P></LI><LI><P>A scalable, AI-ready representation of SAP master data</P></LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_SAP Data Vectorization Configuration_Pic01.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/360566i829AF2F4027C25FD/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="Skybuffer AI_SAP Data Vectorization Configuration_Pic01.png" alt="Skybuffer AI_SAP Data Vectorization Configuration_Pic01.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM><SPAN>Pic. 1 Skybuffer AI – SAP Data Vectorization Configuration</SPAN></EM></P><H3 id="toc-hId-1523826980">Performance at Enterprise Scale</H3><P>A common concern is performance. Vectorizing SAP master data row by row would indeed be slow, if done sequentially.</P><P>Skybuffer AI solves this with parallel embedding execution, allowing vectorization at enterprise scale. 
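</P><P>The parallelism idea itself is easy to sketch: split the records into batches and embed the batches concurrently instead of row by row. The following is a generic Python illustration with a dummy embedding function, not Skybuffer AI’s actual pipeline:</P>

```python
from concurrent.futures import ThreadPoolExecutor

def embed_batch(batch):
    # Stand-in for a real embedding model call (e.g., one GPU worker per batch).
    return [[float(len(text))] for text in batch]  # dummy one-dimensional "vectors"

def vectorize(records, batch_size=4, workers=8):
    # Split records into batches and embed the batches concurrently.
    batches = [records[i:i + batch_size] for i in range(0, len(records), batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(embed_batch, batches)
    # Flatten back to one vector per record, order preserved.
    return [vec for batch_vecs in results for vec in batch_vecs]

vectors = vectorize([f"vendor record {i}" for i in range(10)])
print(len(vectors))
```

<P>Because the mapping preserves input order, each vector can still be matched back to its source record, which is what a delta mechanism needs in order to refresh only new or changed rows.</P><P>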
On servers with 8 GPUs, Skybuffer AI can process <STRONG>up to 1 million records in 30 minutes</STRONG>. And the approach scales further should an even faster vectorization solution be needed.</P><P>More importantly, once initial vectorization is complete, a <STRONG>self-healing delta mechanism</STRONG> ensures that only new or changed records are processed. This keeps the data current without impacting performance.</P><H3 id="toc-hId-1327313475">Business Value: Why This Approach Matters</H3><P>The benefits are immediate and measurable:</P><UL><LI><P><STRONG>No coding</STRONG> for intelligent master data search</P></LI><LI><P><STRONG>No multiple calls</STRONG> to SAP backend systems</P></LI><LI><P><STRONG>No full-table reads</STRONG> triggered by AI Agents</P></LI><LI><P>Faster, more reliable AI-driven processes</P></LI></UL><P>Consider a real-world scenario:<BR />A vendor invoice arrives by email.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_SAP Data Vectorization Configuration_Pic02.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/360567iBAEDEF8D7E9FCBD3/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="Skybuffer AI_SAP Data Vectorization Configuration_Pic02.png" alt="Skybuffer AI_SAP Data Vectorization Configuration_Pic02.png" /></span></P><P class="lia-align-center" style="text-align: center;"><EM><SPAN>Pic. 2 Real Consumption-Based Invoice Example</SPAN></EM></P><P>The AI Agent reads the email and attachment, extracts the vendor name exactly as written on the invoice, and needs the corresponding SAP vendor number. Instead of querying SAP repeatedly or scanning entire tables, the Agent performs <STRONG>one single RAG search</STRONG> against vectorized SAP master data.</P><P>Within milliseconds, the correct vendor record is returned. 
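</P><P>In essence, that single RAG search is a nearest-neighbor lookup over embedded vendor names. Here is a toy version using character-bigram counts and cosine similarity (purely illustrative; a real system would use a trained embedding model and a vector engine, and all names and numbers below are invented):</P>

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: character-bigram counts (real systems use a trained model).
    t = text.lower()
    return Counter(t[i:i + 2] for i in range(len(t) - 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Vectorized "master data": SAP vendor number -> embedded vendor name.
vendors = {
    "0000100001": "Muller Industrial Supplies GmbH",
    "0000100002": "Nordwind Logistics AG",
    "0000100003": "Acme Machine Parts Ltd",
}
index = {num: embed(name) for num, name in vendors.items()}

# Vendor name exactly as written on the invoice (spelling differs slightly).
query = embed("Mueller Industrial Supplies")
best = max(index, key=lambda num: cosine(query, index[num]))
print(best)
```

<P>Despite the spelling difference, the closest match is still the right vendor number, which is the behavior described above.</P><P>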
It is accurate, up to date, and ready for posting in SAP.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI_SAP Data Vectorization Configuration_Pic03.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/360568i6457C920494E49E4/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI_SAP Data Vectorization Configuration_Pic03.png" alt="Skybuffer AI_SAP Data Vectorization Configuration_Pic03.png" /></span></P><P class="lia-align-center" style="text-align: center;"><EM><SPAN>Pic. 3 Skybuffer AI – Fast and Reliable SAP Data Search Using Data Vectorization</SPAN></EM></P><P>No coding. No overload. No hallucinations.</P><H3 id="toc-hId-1130799970">Applicable Across Many SAP Use Cases</H3><P>The same approach works seamlessly across multiple scenarios, including:</P><UL><LI><P>Finding SAP users or business objects by name, email address, or other attributes</P></LI><LI><P>Creating Sales Orders in SAP from customer Purchase Orders</P></LI><LI><P>Resolving customer and material master data automatically</P></LI><LI><P>Creating Bills of Materials and routings from Excel files managed by R&D teams</P></LI><LI><P>Posting Service Entry Sheets</P></LI><LI><P>Rapidly enabling new use cases by simply nominating tables for vectorization</P></LI></UL><P>If the data exists in SAP, it can be prepared for AI consumption.</P><H3 id="toc-hId-934286465">Built-In Authorization and Security</H3><P>Security is not an afterthought.</P><P>Skybuffer AI applies the same concept used in AI assistants for Teams, Copilot, and Joule, so vectorized data is always accessed with an additional safeguard. 
Once the AI Agent selects relevant records via RAG, Skybuffer AI retrieves those records directly from SAP using <STRONG>SSO within the active conversation session</STRONG>, before providing data to the user.</P><P>This ensures:</P><UL><LI><P>Users only see data they are authorized to access in SAP</P></LI><LI><P>Vectorized data never bypasses SAP authorization rules</P></LI><LI><P>Enterprise-grade compliance and governance</P></LI></UL><H3 id="toc-hId-737772960">Seamless Integration with Existing AI Agents</H3><P>Already running AI Agents and just need access to SAP data?</P><P>Skybuffer AI provides an <STRONG>MCP Server</STRONG>, allowing your existing agentic solutions to consume vectorized SAP data without replacing your current platform. This makes Skybuffer AI an <STRONG>enabler</STRONG>, not a disruptor.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI – MCP Server for Integration with Agentic SDK_Pic04.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/360570iBC8E1ABFF78558AA/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="Skybuffer AI – MCP Server for Integration with Agentic SDK_Pic04.png" alt="Skybuffer AI – MCP Server for Integration with Agentic SDK_Pic04.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM><SPAN>Pic. 
4 Skybuffer AI – MCP Server for Integration with Agentic SDK</SPAN></EM></P><H3 id="toc-hId-541259455">Flexible Deployment: Cloud or On-Premise</H3><P>Skybuffer AI is built on <STRONG>SAP HANA</STRONG> and can be deployed:</P><UL><LI><P>On-premise within your own infrastructure</P></LI><LI><P>In your SAP BTP tenant</P></LI></UL><P>A modern SAP UI5 user interface ensures an intuitive and productive user experience for both IT and business users.</P><H3 id="toc-hId-344745950">Conclusion</H3><P>SAP ECC and S/4HANA master data vectorization is the missing link between SAP and AI Agents.</P><P>With Skybuffer AI, enterprises can:</P><UL><LI><P>Eliminate coding</P></LI><LI><P>Avoid hallucinations</P></LI><LI><P>Reduce SAP backend load</P></LI><LI><P>Enable fast, secure, and reliable AI-driven processes</P></LI></UL><P>This is how SAP data becomes truly AI-ready.</P>2026-01-12T17:51:02.066000+01:00https://community.sap.com/t5/business-transformation-blog-posts/how-is-agentic-ai-in-sap-transforming-core-business-processes-today/ba-p/14305640How Is Agentic AI in SAP Transforming Core Business Processes Today?2026-01-16T15:12:49.528000+01:00venkatesanmhttps://community.sap.com/t5/user/viewprofilepage/user-id/2270900<P><FONT size="5">From Deterministic Automation to Agentic Process Execution </FONT></P><P>Traditional SAP automation operates via predetermined workflows that trigger events under specified circumstances only. Any change from this path necessitates manual action. The introduction of an intelligent agent into the SAP environment allows for a very different method of operating, in which the agent has the ability to "think", analyze, and resolve problems autonomously.<BR /><BR />SAP uses reasoning models and live business context to create AI Agents for SAP. An example of this is an AI agent that monitors the fulfillment of orders. 
Instead of just flagging an order as delinquent if it is delayed, the AI agent draws on its knowledge of upstream supply constraints and determines whether to re-route inventory or to negotiate new delivery windows with its supplier based on the contractual penalty associated with violating their contract. The logic for making these decisions is not hard-coded; it is inferred from the context surrounding the order.<BR /><BR />Additionally, SAP AI supports a feedback-driven execution process whereby each decision made by the AI agent provides a learning opportunity for the next execution cycle. For example, if an agent executes a corrective action that incurs additional costs but does not improve service level performance, it will adjust future decision-making accordingly. This results in a cyclic execution loop as opposed to single-event automation.</P><P>Some of the key execution traits include:</P><UL><LI>Goal-oriented reasoning rather than task completion</LI><LI>Context-aware evaluation across SAP modules</LI><LI>Continuous recalibration based on outcomes</LI></UL><P>SAP AI Teams designs these agents with explicit governance controls that guide every action. Clearly defined business rules determine what an agent can decide and when escalation is required. Comprehensive audit trails then record the rationale behind each decision, which helps create traceability and accountability. As a result, agent autonomy functions within clearly controlled boundaries rather than operating in isolation.</P><P><FONT size="5">Agentic Intelligence Across Finance and Controlling Functions</FONT></P><P>Finance processes within SAP are tightly governed. Yet they remain vulnerable to volume pressure, reconciliation delays, and data inconsistencies. 
Agentic AI in SAP introduces agents capable of interpreting transactional meaning instead of matching static fields.</P><P>For instance, reconciliation agents would review the intent of a posting, timing differences, and intercompany relationships to recommend changes rather than create exception lists. The transition from manual matching to decision validation allows for faster closes while maintaining accuracy.</P><P>Moreover, Agentic AI for SAP goes beyond finance into controlling and forecasting. For example, agents that compare a Cost Center’s variance to plan can warn of potential future variances by correlating delayed purchasing, operational bottlenecks, and price variability. Instead of standalone variance reports, the Finance team receives a context-specific recommendation with the ability to trace back to source information and assumptions.<BR /><BR />Finally, Agentic AI for SAP redefines compliance monitoring: whereas compliance checks previously ran after the posting, agents validate prior to posting whether transactions align with company policy as well as prior risk history. Predictive controls reduce the need for downstream corrections and audit findings.</P><P>Typical finance use cases include:</P><UL><LI>Autonomous account reconciliation proposals</LI><LI>Predictive compliance validation</LI><LI>Contextual variance interpretation</LI></UL><P>SAP AI teams emphasize explainability in these scenarios. Each agent’s decision includes rationale, data references, and confidence levels. This maintains trust in regulated environments.</P><P><FONT size="5">Supply Chain and Operations Under Agentic Control </FONT><BR /><BR />All supply chains produce large amounts of signals: variations in forecasts, supplier delays, and logistics constraints trigger chain reactions. This volatility is not well suited to static planning models. 
Agentic AI in SAP addresses this by allowing automated agents to evaluate and negotiate the trade-offs between competing plans and adjust accordingly.</P><P>In demand planning, agents evaluate forecasts against recent and historical consumption as well as market signals. Instead of raising static alerts, AI agents in the SAP demand planning module update the planning parameters directly, reducing the manual overrides that erode plan accuracy.</P><P>Operationally, agents monitor execution milestones. When deviations occur, they evaluate options across procurement, production, and distribution. Actions may include expediting materials, reallocating stock, or revising commitments. These decisions consider cost, service impact, and contractual obligations simultaneously.</P><P>Additionally, AI for SAP supports cross-functional coordination. Agents share context across modules rather than operating in isolation. This helps reduce fragmented responses and improves execution coherence.</P><P><FONT size="5">Organizational Readiness and Governance Models </FONT><BR /><BR />Implementing agentic AI successfully depends on an organization’s readiness. That readiness is determined by the organization’s ability to ensure data integrity, the clarity of its business processes, and the level of governance that has been established.</P><P>First, data quality must support reasoning. Agents depend on consistent master data and reliable event streams. Skewed data leads to skewed decisions. Enterprises investing in agentic AI services often begin with semantic data alignment and process instrumentation.</P><P>Second, as governance models evolve, organizations set limits and thresholds that authorize agents to act without requiring direct approval for every action. 
Within those limits and thresholds, AI agents operate autonomously, escalating when risk increases or when their confidence in a decision is low. This reduces the "approval fatigue" associated with governance models while still allowing oversight.</P><P>Security models are evolving as well: agents receive scoped authority and scoped access controls, limited not only to the data a user can see but also to the actions that user can take. The decisions that AI agents make in SAP are recorded alongside those made by humans, creating a single audit trail for both.</P><P>Core governance principles include:</P><UL><LI>Explicit decision boundaries</LI><LI>Continuous monitoring of agent behavior</LI><LI>Transparent reasoning documentation</LI></UL><P>SAP AI teams play a critical role here. They bridge business intent and technical design, ensuring agents act responsibly and predictably.</P><P><FONT size="5">Final Thought</FONT></P><P>Agentic AI in SAP signals a structural shift in enterprise execution. SAP systems move beyond recording outcomes toward autonomous action. AI agents in SAP reason across modules, adjust to change, and act within defined constraints. Moreover, AI for SAP reshapes organizational roles, allowing SAP teams to operate as agents of change rather than sources of manual intervention.<BR /><BR />To develop these agents, SAP teams will need to build strict guidelines and oversight into their designs. Once properly developed, agentic systems provide a dependable foundation for an organization’s digital operations. 
As more organizations adopt agentic AI in their SAP landscapes, the companies that invest time and effort in developing mature SAP AI teams and agentic capabilities will be the ones setting the standard for how intelligent enterprise execution takes place in the future.</P>2026-01-16T15:12:49.528000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/inside-the-gcid-ai-hackathon-a-global-contest-where-ai-initiatives-came-to/ba-p/14308203Inside the GCID AI Hackathon: A Global Contest where AI Initiatives came to life.2026-01-20T18:06:17.313000+01:00VanessaRomero_AIhttps://community.sap.com/t5/user/viewprofilepage/user-id/2273659<P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="GCID AI Hackathon.jpg" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/363503iDE9F8C66A329C78D/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="GCID AI Hackathon.jpg" alt="GCID AI Hackathon.jpg" /></span></P><H3 id="toc-hId-1916936572"><SPAN>A journey that came to life on stage</SPAN></H3><P>The hackathon took place on December 10 during a hybrid event in Sofia hosted by Matheus Souza, Head of AI Incubation, with a global audience coming together both in person and online. More than 170 participants tuned in as the 11 finalist teams presented their work during the qualifying round and the subsequent finals.</P><P>Each team stepped on stage with a short, high-impact pitch supported by a live demo. In just 7 minutes, they told the story behind their solutions, showing how AI can address concrete technical and business challenges when creativity and clear communication come together. 
From this round, three teams stood out and advanced to the final.</P><P>The final round opened with motivating messages from Ozren Kopajtic, Global Cloud Infrastructure and Delivery Head; Kiril Krantev, representing GCID T&I Cloud Operations; and Ricardo Silva, GCID Business Delivery Excellence Head. The atmosphere was then further energized by an inspiring message from Thomas Saueressig and from the executive jury (Radoslav Nikolov, Niraj Singh, Boris Maeck, Henning Heitkötter, and Ricardo Silva), who encouraged the teams to continue pushing their ideas.</P><H3 id="toc-hId-1720423067"><SPAN>Awarding outstanding ideas and execution</SPAN></H3><P>Throughout the hackathon, one message became clear: AI has the potential to reshape how we operate, decide, and scale across the organization. The event came to an end with Radoslav, Ricardo, and Matheus announcing and celebrating the teams whose solutions stood out for their originality, technical strength, and real-world applicability:</P><UL><LI><SPAN><STRONG>First place</STRONG>: "AI-Powered AWX Analyser & Hammer Helper" by Barath Ramachandran, Suresh Kumar Sekar, and team</SPAN></LI><LI><SPAN><STRONG>Second place</STRONG>: "Smart Data Center" by Geghard Bedrosian, Philipp Matthes, and team</SPAN></LI><LI><SPAN><STRONG>Third place</STRONG>: "AI for Smarter Observability Hygiene" by Mario Romero, Duc Nguyen, and team</SPAN></LI></UL><H3 id="toc-hId-1523909562"><SPAN>What’s next for the hackathon journey</SPAN></H3><P>Rather than an ending, this first edition marked the beginning of something bigger. As announced at the close of the event, the host city for this year's edition of the GCID AI Hackathon will be Budapest, where the community is set to continue growing and creating even more space for experimentation and innovation.</P><P>The excitement is already building for what the next edition will bring, and we can't wait for it to begin. <SPAN>See you there? 
</SPAN></P>2026-01-20T18:06:17.313000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/why-bringing-sap-to-agentic-sdk-via-regular-mcp-server-is-not-good-enough/ba-p/14313034Why Bringing SAP to Agentic SDK via Regular MCP Server Is Not Good Enough2026-01-23T00:38:39.594000+01:00Siarheihttps://community.sap.com/t5/user/viewprofilepage/user-id/84286<P><STRONG>Advocating Technology</STRONG></P><P>Let me name things properly – we have lots of noise around AI. Companies are trying to jump into the technology because they think that Business AI is simple, that it is just a kind of ChatGPT that gets access to business data. Sounds simple, doesn’t it?</P><P>I strongly disagree! Bringing SAP data to an Agentic SDK, like Claude or SAP Joule, does not mean delivering Business AI. It is far from even understanding the business need. I know, you are starting to hate me <span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:">😊</span>. And the most likely reason for that reaction is that you are one of the MCP bridge providers who think that wrapping APIs (I will not write “discover OData via MCP Server,” because the whole SAP community will hate me :D), wrapping SAP tables/views, or wrapping SAP function modules is the way to bring SAP data to Business AI. </P><P>How many posts do we now see on LinkedIn where people are happy like kids to see data from an agentic SDK, e.g. Claude Desktop, returned by OData called via an MCP Server and shown as a table? Wow-wow! A list of Purchase Orders! As a table! Based on my utterance! C’mon people, we are not in 2017 when we, together with the SAP Conversational AI team, were happy to see such cases. Yes, 9 years ago it was innovation, not now. Why not post on LinkedIn that the same OData service draws an even nicer table in a standard SAP Fiori app showing a list of Purchase Orders with… OMG… DRILL DOWN!!! 
<span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:">😊</span></P><P>Why am I writing this? Just to emphasize that Business AI is an excellent technology that has moved far beyond posts where people marvel at seeing SAP data in a UI other than GUI or Fiori. Real Business AI returns the investment, works autonomously, and understands data. For example, consider an AI agent that receives emails from customers with purchase orders from their systems confirming the deals we’ve made with them. That AI agent posts those orders to SAP, creating Sales Orders, and certainly not using plain OData calls. Why not? Because creating a Sales Order via OData requires two EntitySet calls, and you need to be able to wrap them in a batch call; otherwise, you will get lots of orphaned Sales Order headers in your productive SAP system whenever the OData call for the Sales Order Items fails.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer_AI_MCP_Server_Pic01png.png" style="width: 620px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/364272i78750755057D7EDD/image-size/large?v=v2&px=999" role="button" title="Skybuffer_AI_MCP_Server_Pic01png.png" alt="Skybuffer_AI_MCP_Server_Pic01png.png" /></span></P><P class="lia-align-center" style="text-align: center;"><EM> <SPAN>Pic.1 Example from LinkedIn Article – Creating Sales Order via OData without Batch Mode</SPAN></EM></P><P>However, even creating a Sales Order from my example is not the biggest challenge for that AI agent.</P><P><STRONG>The Biggest Challenge</STRONG></P><P>SAP data, together with the “bridges” we build to connect AI agents to it via “regular” MCP servers, is the biggest challenge we create for AI agents. 
Sometimes it seems that, after identifying a really good business case, we then do everything possible to push the AI agent to fail <span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:">😊</span>.</P><P>SAP data preparation is the boring, but absolutely essential, step here. We should think about what we feed to the AI agent when asking it to read an email and post a Sales Order in the SAP system. We need to give the AI agent a chance to query the master data that is essential for that AI automation scenario and to locate it in a robust and reliable way, independent of how the customer names our services and products in their systems.</P><P><STRONG>SAP Master Data Vectorization</STRONG></P><P>Let’s look at a simple example we have created using the ChatGPT UI, showing how differently AI agents act when you prepare SAP data properly before bridging it. We will use two MCP servers for this:</P><UL><LI><P>Regular MCP Server for SAP Data (any MCP server; we do not like to point fingers at someone’s solution, we’d like to advocate AI automation only)</P></LI><LI><P>Skybuffer AI MCP Server for SAP Data (here we have a data preparation layer where we vectorize SAP master data first)</P></LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer_AI_MCP_Server_Pic02png.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/364273i6FB5961E3446B371/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="Skybuffer_AI_MCP_Server_Pic02png.png" alt="Skybuffer_AI_MCP_Server_Pic02png.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM><SPAN>Pic.2 SAP Master Data Vectorization as a Data Preparation Step</SPAN></EM></P><P><STRONG>See the Difference</STRONG></P><P>Now let’s simulate a request from an AI agent that works autonomously and is responsible for creating Sales Orders in an SAP production system with the maximum level of 
automation.</P><P>Here is a simple example of a regular MCP Server failing simply because the customer captured data in their system a bit differently from our productive SAP system:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer_AI_MCP_Server_Pic03.png" style="width: 983px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/364274i6496BF5A9F3CA39D/image-size/large?v=v2&px=999" role="button" title="Skybuffer_AI_MCP_Server_Pic03.png" alt="Skybuffer_AI_MCP_Server_Pic03.png" /></span></P><P class="lia-align-center" style="text-align: center;"><EM>Pic.3 Regular MCP Server for SAP Data – Failing on the Simple Case of Small Wording Differences</EM></P><P>And now let’s see how easily and robustly an AI agent gets information from the same SAP system via an MCP Server that reads SAP master data prepared for AI automation:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer_AI_MCP_Server_Pic04.png" style="width: 983px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/364275i1F4663D7AE74A61D/image-size/large?v=v2&px=999" role="button" title="Skybuffer_AI_MCP_Server_Pic04.png" alt="Skybuffer_AI_MCP_Server_Pic04.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM><SPAN>Pic.4 Skybuffer AI MCP Server – Easy and Robust Access to SAP Master Data</SPAN></EM></P><P>Now you can see that 99% of the LinkedIn examples and success stories about how fast we can create MCP servers to bridge SAP data, and how reusable OData services are, describe solutions that are far from productive implementation (unless you are OK with a 50% automation rate <span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:">😊</span>).</P><P><STRONG>Conclusion</STRONG></P><P>Indeed, MCP servers that consume OData, tables/views, etc. make sense for some steps in AI automation. 
However, we need to be very careful, and we should not mix hype with productive implementations. A good MCP server bridges prepared SAP data that an AI agent can consume to provide a maximum rate of automation, one that approaches 100% after a short period of productive usage. Saving on implementation ruins the impression of Business AI and erodes trust in it. Let’s not harm our overall revenue and return on investment; let’s think twice about whether the bridge we build for an AI agent suits the goal.</P><P>Imagine connecting the Skybuffer AI MCP Server to SAP Joule, enabling it to understand user input through similarity search. How powerful would your SAP Joule implementation be, and how smoothly would SAP Joule understand requests? Skybuffer AI runs on SAP BTP, providing a hidden but necessary layer for SAP Joule that can properly bridge your SAP on-premise data.</P><P><STRONG>Closing Statement</STRONG></P><P>Good luck with AI automation implementations, and do not hesitate to ask questions, collaborate, and bring technology to customers in a way that they benefit from it rather than just seeing the hype in it.</P>2026-01-23T00:38:39.594000+01:00https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/how-ai-automation-eliminates-master-data-duplicates-in-sap/ba-p/14318692How AI Automation Eliminates Master Data Duplicates in SAP2026-01-30T15:58:32.901000+01:00Siarheihttps://community.sap.com/t5/user/viewprofilepage/user-id/84286<H4 id="toc-hId-2046946934"><STRONG>The Business Case</STRONG></H4><P>Whether running SAP ECC or S/4HANA, most organizations struggle with duplicate master data for materials, vendors, and customers. 
This isn't just a "clutter" problem: poor material master data leads to users creating <STRONG>Purchase Requisitions with text items</STRONG>, bypassing procurement controls and ruining spend analytics.</P><P>Let’s explore how AI automation and <STRONG>data vectorization</STRONG> can clean up the core and prevent duplicates before they happen.</P><H4 id="toc-hId-1850433429"><STRONG>The Engine: What is Data Vectorization?</STRONG></H4><P>At Skybuffer, we are constantly exploring the benefits of SAP data vectorization. Simply put, vectorization converts data into numerical arrays (vectors) using embedding functions.</P><P>Unlike standard SAP keyword searches, vectors are <STRONG>language-independent</STRONG> and allow algorithms to perform <STRONG>similarity searches</STRONG>. While standard tables see rows and columns, a vectorized model sees the <I>relationship</I> between data points.</P><H4 id="toc-hId-1653919924"><STRONG>How Does It Help?</STRONG></H4><P>When data is vectorized, we can execute <STRONG>hybrid searches</STRONG> that are indifferent to word order or exact spelling.</P><P><STRONG>Example:</STRONG> Imagine searching for a global customer like "Skybuffer." You might have different accounts for Poland and Norway. 
If a user tries to create a new record, they can simply ask:</P><BLOCKQUOTE><P class="lia-align-left" style="text-align : left;"><I>"Do we have a Customer master record for Skybuffer in Poland?"</I></P></BLOCKQUOTE><P>By querying vectorized <STRONG>KNA1</STRONG> table data, the AI identifies the record even if the naming convention differs slightly.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer_AI_Duplicates_Pic01.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367414iC3AA8D0E4F0FD765/image-size/large?v=v2&px=999" role="button" title="Skybuffer_AI_Duplicates_Pic01.png" alt="Skybuffer_AI_Duplicates_Pic01.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM>Pic.1 Hybrid Search based on Vectorized KNA1 Table Data</EM></P><H4 id="toc-hId-1457406419"><STRONG>Real-World Integration Points</STRONG></H4><P>Hybrid search is most powerful when embedded directly into the SAP business process:</P><UL><LI><P><STRONG>Preventative Checks:</STRONG> During the "Save" action in SAP, a custom check against vectorized data can alert the user to a potential duplicate.</P></LI><LI><P><STRONG>External Access:</STRONG> Quick searches via Microsoft Teams or SAP Fiori to find existing masters on the go.</P></LI><LI><P><STRONG>Smart Procurement:</STRONG> Identifying an existing Material Master during the ordering process, preventing the user from resorting to a "text item."</P></LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer_AI_Duplicates_Pic02.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367418iFF3D4E5FEA03092A/image-size/large?v=v2&px=999" role="button" title="Skybuffer_AI_Duplicates_Pic02.png" alt="Skybuffer_AI_Duplicates_Pic02.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <EM>Pic.2 Quick Searches of SAP Master Data based on Vectorized 
Data</EM></P><H4 id="toc-hId-1260892914"><STRONG>Conclusion</STRONG></H4><P>Generative AI is a powerhouse when used wisely. Many companies struggle to find the right use cases, often just mirroring existing SAP processes in a chat window. <STRONG>That isn’t true AI automation.</STRONG></P><P>Our approach is different: we focus on the gaps in standard automation. By using AI to simplify data discovery, we reduce development effort and provide an intelligent, frictionless experience for SAP users.</P><H4 id="toc-hId-1064379409"><STRONG>Closing Statement</STRONG></H4><P>Keep automating. Remember: <STRONG>smart customization is a competitive advantage</STRONG>, not a deviation from the standard. Build with AI in mind to slash support costs and maximize your ROI via the Skybuffer AI platform.</P>2026-01-30T15:58:32.901000+01:00https://community.sap.com/t5/artificial-intelligence-blogs-posts/connecting-sap-genai-hub-to-n8n-a-complete-guide/ba-p/14320010Connecting SAP GenAI Hub to n8n: A Complete Guide2026-02-02T16:33:01.109000+01:00YangYue01https://community.sap.com/t5/user/viewprofilepage/user-id/1409138<BLOCKQUOTE><P class="">A step-by-step tutorial on calling SAP GenAI Hub's large language models from self-hosted n8n to build enterprise AI automation workflows.</P></BLOCKQUOTE><P><STRONG>SAP Build</STRONG> is the primary, recommended solution for enterprise automation and orchestration within the SAP ecosystem. However, one of the key strengths of <STRONG>SAP GenAI Hub</STRONG> is its standardization—it uses standard REST APIs and OAuth2, meaning it can be consumed by <I>any</I> system, regardless of the tech stack.</P><P>This tutorial demonstrates that openness. We will use <STRONG>n8n</STRONG> (a popular self-hosted workflow tool) to show how developers can integrate SAP's enterprise-grade GenAI capabilities into external or hybrid environments for rapid prototyping and testing.</P><P class="">n8n doesn't have a native SAP GenAI Hub node yet. 
No worries — we can easily make the connection using <STRONG>HTTP Request + OAuth2</STRONG>.</P><HR /><H2 id="toc-hId-1789460673">Prerequisites</H2><P class="">Before we start, make sure you have:</P><OL class=""><LI><STRONG>A running n8n instance</STRONG> (Docker or npm or k8s)</LI><LI><STRONG>SAP BTP account</STRONG> with SAP AI Core service enabled (Extended Plan for GenAI Hub)</LI><LI><STRONG>A deployed LLM model</STRONG> (configured in SAP AI Launchpad)</LI></OL><H3 id="toc-hId-1722029887"> </H3><H3 id="toc-hId-1525516382">Get Your SAP AI Core Service Key</H3><P class="">Navigate to SAP BTP Cockpit → Your Subaccount → Instances and Subscriptions → SAP AI Core → Create Service Key</P><P class="">You'll get a JSON like this:</P><DIV class=""><DIV class=""><DIV class=""><DIV class=""><DIV class=""> </DIV></DIV></DIV></DIV><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"clientid"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"sb-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx!b12345|aicore!b540"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"clientsecret"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx$xxxxxxxxx"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"url"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"https://your-tenant.authentication.eu10.hana.ondemand.com"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"serviceurls"</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"AI_API_URL"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com"</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><P class=""><STRONG>Keep these four fields handy — you'll need them all.</STRONG></P><H3 id="toc-hId-1329002877"> </H3><H3 id="toc-hId-1132489372">Get Your Deployment ID</H3><P class="">In SAP AI Launchpad:</P><OL class=""><LI>Go to <STRONG>ML Operations → Deployments</STRONG></LI><LI>Find your deployed model</LI><LI>Note the <CODE>Deployment ID</CODE> (or the full <CODE>deploymentUrl</CODE>)</LI></OL><HR /><H2 id="toc-hId-806893148">Step 1: Create OAuth2 Credential</H2><P class="">SAP GenAI Hub uses OAuth2 Client Credentials authentication. Let's configure the credential in n8n first.</P><OL class=""><LI>Open n8n, go to <STRONG>Settings → Credentials</STRONG></LI><LI>Click <STRONG>Add Credential</STRONG>, search for <STRONG>OAuth2 API</STRONG></LI><LI>Fill in the following:</LI></OL><DIV class="lia-indent-padding-left-60px" style="padding-left : 60px;"><TABLE><TBODY><TR><TD><STRONG>Field<BR /></STRONG></TD><TD><STRONG>Value</STRONG></TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Credential Name</STRONG></TD><TD width="617.203px" height="30px"><CODE>SAP-GenAI-Hub</CODE></TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Grant Type</STRONG></TD><TD width="617.203px" height="30px"><CODE>Client Credentials</CODE></TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Access Token URL</STRONG></TD><TD width="617.203px" height="30px"><CODE><A href="https://your-tenant.authentication.eu10.hana.ondemand.com/oauth/token" target="_blank" rel="noopener nofollow noreferrer">https://your-tenant.authentication.eu10.hana.ondemand.com/oauth/token</A></CODE></TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Client ID</STRONG></TD><TD width="617.203px" height="30px">Copy the full <CODE>clientid</CODE> from Service Key</TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Client Secret</STRONG></TD><TD width="617.203px" height="30px">Copy the full <CODE>clientsecret</CODE> from Service 
Key</TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Scope</STRONG></TD><TD width="617.203px" height="30px">Leave empty</TD></TR><TR><TD width="148.68px" height="30px"><STRONG>Authentication</STRONG></TD><TD width="617.203px" height="30px"><STRONG>Select <CODE>Header</CODE></STRONG></TD></TR></TBODY></TABLE></DIV><DIV class="lia-indent-padding-left-30px" style="padding-left : 30px;">4. Click <STRONG>Save</STRONG></DIV><BLOCKQUOTE><P class=""><span class="lia-unicode-emoji" title=":light_bulb:">💡</span><STRONG>Common Mistake</STRONG>: Selecting "Send as Basic Auth Header" will cause 401 authentication failures. SAP requires credentials to be sent in the body.</P></BLOCKQUOTE><HR /><H2 id="toc-hId-610379643">Step 2: Build a Simple Q&A Workflow</H2><P class="">Let's create a basic LLM query workflow.</P><H3 id="toc-hId-542948857">2.1 Add Manual Trigger</H3><P class="">Start with a manual trigger for testing:</P><UL class=""><LI>Add a <STRONG>Manual Trigger</STRONG> node</LI></UL><H3 id="toc-hId-346435352">2.2 Add HTTP Request Node</H3><P class="">This is the core node. 
Configure as follows:</P><P class=""><STRONG>Basic Settings:</STRONG></P><UL class=""><LI><STRONG>Method</STRONG>: <CODE>POST</CODE></LI><LI><STRONG>URL</STRONG>: </LI></UL><pre class="lia-code-sample language-abap"><code>https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/{your_Deployment_ID}/chat/completions</code></pre><P><SPAN> <span class="lia-unicode-emoji" title=":warning:">⚠️</span> </SPAN><SPAN class=""><SPAN>Note</SPAN></SPAN><SPAN>: If using </SPAN><SPAN class=""><SPAN>OpenAI models</SPAN></SPAN><SPAN>, you</SPAN><SPAN> should</SPAN><SPAN> add the </SPAN><SPAN class=""><SPAN>api-version</SPAN></SPAN><SPAN> parameter, e.g.</SPAN></P><pre class="lia-code-sample language-abap"><code>https://api.ai.prod.eu-central-1.aws.ml.hana.ondemand.com/v2/inference/deployments/{your_Deployment_ID}/chat/completions?api-version=2023-05-15</code></pre><P class=""><STRONG>Authentication:</STRONG></P><UL class=""><LI><STRONG>Authentication</STRONG>: <CODE>Predefined Credential Type</CODE></LI><LI><STRONG>Credential Type</STRONG>: <CODE>OAuth2 API</CODE></LI><LI><STRONG>OAuth2 API</STRONG>: Select <CODE>SAP-GenAI-Hub</CODE></LI></UL><P class=""><STRONG>Headers:</STRONG></P><DIV class=""><TABLE><TBODY><TR><TD><STRONG>Name</STRONG></TD><TD><STRONG>Value</STRONG></TD></TR><TR><TD><CODE>Content-Type</CODE></TD><TD><CODE>application/json</CODE></TD></TR><TR><TD><CODE>AI-Resource-Group</CODE></TD><TD><CODE>default</CODE>, or your own Resource Group ID</TD></TR></TBODY></TABLE></DIV><P class=""><STRONG>Body:</STRONG></P><UL class=""><LI><STRONG>Body Content Type</STRONG>: <CODE>JSON</CODE></LI><LI><STRONG>Specify Body</STRONG>: <CODE>Using JSON</CODE></LI></UL><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"messages"</SPAN><SPAN class="">:</SPAN> <SPAN class="">[</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"system"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"You are a professional assistant. Please provide helpful and accurate answers."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"user"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"What is SAP S/4HANA? Please summarize in three sentences."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">]</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><H3 id="toc-hId-149921847">2.3 Test It</H3><P class="">Click <STRONG>Test Workflow</STRONG>. If configured correctly, you'll see a response like:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"choices"</SPAN><SPAN class="">:</SPAN> <SPAN class="">[</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"message"</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"assistant"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"SAP S/4HANA is SAP's next-generation intelligent ERP suite..."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">]</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"usage"</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"prompt_tokens"</SPAN><SPAN class="">:</SPAN> <SPAN class="">45</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"completion_tokens"</SPAN><SPAN class="">:</SPAN> <SPAN class="">128</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"total_tokens"</SPAN><SPAN class="">:</SPAN> <SPAN class="">173</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><P class=""><span class="lia-unicode-emoji" title=":party_popper:">🎉</span><STRONG>Congratulations! You've successfully called SAP GenAI Hub from n8n!</STRONG></P><HR /><H2 id="toc-hId-171579980">Step 3: Build an Interactive Chat Workflow</H2><P class="">Let's upgrade to a conversational version.</P><H3 id="toc-hId--318336532">Workflow Structure</H3><DIV class=""><P class=""><STRONG>Chat Trigger</STRONG> → <STRONG>HTTP Request (SAP GenAI Hub)</STRONG> → <STRONG>Code (Extract Reply)</STRONG></P></DIV><H3 id="toc-hId--514850037">3.1 Chat Trigger Node</H3><P class="">Add a <STRONG>Chat Trigger</STRONG> node (found under Advanced AI category):</P><UL class=""><LI>Keep default settings</LI><LI>It provides an embedded chat interface for testing</LI></UL><H3 id="toc-hId--711363542">3.2 Modify HTTP Request Node</H3><P class="">Change the Body to dynamically get user input:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"messages"</SPAN><SPAN class="">:</SPAN> <SPAN class="">[</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"system"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"You are a professional enterprise consultant specializing in SAP and digital transformation topics."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"role"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"user"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">"content"</SPAN><SPAN class="">:</SPAN> <SPAN class="">"{{ $json.chatInput }}"</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">]</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><BLOCKQUOTE><P class=""><CODE>{{ $json.chatInput }}</CODE> automatically retrieves the user's input from the chat interface.</P></BLOCKQUOTE><H3 id="toc-hId--907877047">3.3 Add Code Node to Extract Response</H3><P class="">Add a <STRONG>Code</STRONG> node to extract the AI's reply:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">const</SPAN> response <SPAN class="">=</SPAN> $input<SPAN class="">.</SPAN><SPAN class="">all</SPAN><SPAN class="">(</SPAN><SPAN class="">)</SPAN><SPAN class="">[</SPAN><SPAN class="">0</SPAN><SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">json</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> aiMessage <SPAN class="">=</SPAN> response<SPAN class="">.</SPAN><SPAN class="">choices</SPAN><SPAN class="">?.</SPAN><SPAN class="">[</SPAN><SPAN class="">0</SPAN><SPAN class="">]</SPAN><SPAN class="">?.</SPAN>message<SPAN class="">?.</SPAN>content <SPAN class="">||</SPAN> <SPAN class="">"Sorry, no response received."</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">return</SPAN> <SPAN class="">[</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">json</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">response</SPAN><SPAN class="">:</SPAN> aiMessage</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">]</SPAN><SPAN class="">;</SPAN></SPAN></CODE></PRE></DIV></DIV><H3 id="toc-hId--1104390552">3.4 Test the Chat</H3><OL class=""><LI>Activate the Workflow</LI><LI>Click the <STRONG>Chat</STRONG> button to open the chat interface</LI><LI>Start chatting!</LI></OL><HR /><H2 id="toc-hId--1007501050">Step 4: Add Conversation Memory (Optional)</H2><P class="">The above version treats each message independently without context. We can use n8n's <STRONG>Workflow Static Data</STRONG> to implement simple memory.</P><H3 id="toc-hId--1497417562">Improved Workflow</H3><P class="">Add a <STRONG>Code</STRONG> node before HTTP Request to manage conversation history:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">// Get session ID and user input</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> sessionId <SPAN class="">=</SPAN> $json<SPAN class="">.</SPAN><SPAN class="">sessionId</SPAN> <SPAN class="">||</SPAN> <SPAN class="">'default'</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> userInput <SPAN class="">=</SPAN> $json<SPAN class="">.</SPAN><SPAN class="">chatInput</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">// Get history from global static data</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> staticData <SPAN class="">=</SPAN> <SPAN class="">$getWorkflowStaticData</SPAN><SPAN class="">(</SPAN><SPAN class="">'global'</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">if</SPAN> <SPAN class="">(</SPAN><SPAN class="">!</SPAN>staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">)</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN> <SPAN class="">=</SPAN> <SPAN class="">{</SPAN><SPAN class="">}</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">if</SPAN> <SPAN class="">(</SPAN><SPAN class="">!</SPAN>staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN><SPAN class="">)</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN> <SPAN class="">=</SPAN> <SPAN class="">[</SPAN>
</SPAN><SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">role</SPAN><SPAN class="">:</SPAN> <SPAN class="">"system"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">content</SPAN><SPAN class="">:</SPAN> <SPAN class="">"You are a professional enterprise consultant specializing in SAP and digital transformation topics."</SPAN>
</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN> <SPAN class="">]</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN>
</SPAN>
<SPAN><SPAN class="">// Add user message</SPAN>
</SPAN><SPAN>staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">push</SPAN><SPAN class="">(</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">role</SPAN><SPAN class="">:</SPAN> <SPAN class="">"user"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">content</SPAN><SPAN class="">:</SPAN> userInput</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">// Keep only the last 20 messages (avoid token limit)</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> messages <SPAN class="">=</SPAN> staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">slice</SPAN><SPAN class="">(</SPAN><SPAN class="">-</SPAN><SPAN class="">20</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">return</SPAN> <SPAN class="">[</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">json</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> sessionId<SPAN class="">,</SPAN>
</SPAN><SPAN> messages</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">]</SPAN><SPAN class="">;</SPAN></SPAN></CODE></PRE></DIV></DIV><P class="">Then modify the HTTP Request Body:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">"messages"</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN><SPAN class="">{</SPAN> JSON.stringify($json.messages) <SPAN class="">}</SPAN><SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN></SPAN></CODE></PRE></DIV></DIV><P class="">Finally, save the assistant's reply in the response extraction Code node:</P><DIV class=""><DIV><PRE><CODE><SPAN><SPAN class="">const</SPAN> response <SPAN class="">=</SPAN> $input<SPAN class="">.</SPAN><SPAN class="">all</SPAN><SPAN class="">(</SPAN><SPAN class="">)</SPAN><SPAN class="">[</SPAN><SPAN class="">0</SPAN><SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">json</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> sessionId <SPAN class="">=</SPAN> <SPAN class="">$</SPAN><SPAN class="">(</SPAN><SPAN class="">'Prepare Messages'</SPAN><SPAN class="">)</SPAN><SPAN class="">.</SPAN><SPAN class="">first</SPAN><SPAN class="">(</SPAN><SPAN class="">)</SPAN><SPAN class="">.</SPAN><SPAN class="">json</SPAN><SPAN class="">.</SPAN><SPAN class="">sessionId</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> aiMessage <SPAN class="">=</SPAN> response<SPAN class="">.</SPAN><SPAN class="">choices</SPAN><SPAN class="">?.</SPAN><SPAN class="">[</SPAN><SPAN class="">0</SPAN><SPAN class="">]</SPAN><SPAN class="">?.</SPAN>message<SPAN class="">?.</SPAN>content <SPAN class="">||</SPAN> <SPAN class="">"Sorry, no response received."</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">// Save assistant reply to history</SPAN>
</SPAN><SPAN><SPAN class="">const</SPAN> staticData <SPAN class="">=</SPAN> <SPAN class="">$getWorkflowStaticData</SPAN><SPAN class="">(</SPAN><SPAN class="">'global'</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN><SPAN>staticData<SPAN class="">.</SPAN><SPAN class="">conversations</SPAN><SPAN class="">[</SPAN>sessionId<SPAN class="">]</SPAN><SPAN class="">.</SPAN><SPAN class="">push</SPAN><SPAN class="">(</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">role</SPAN><SPAN class="">:</SPAN> <SPAN class="">"assistant"</SPAN><SPAN class="">,</SPAN>
</SPAN><SPAN> <SPAN class="">content</SPAN><SPAN class="">:</SPAN> aiMessage</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">)</SPAN><SPAN class="">;</SPAN>
</SPAN>
<SPAN><SPAN class="">return</SPAN> <SPAN class="">[</SPAN><SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">json</SPAN><SPAN class="">:</SPAN> <SPAN class="">{</SPAN>
</SPAN><SPAN> <SPAN class="">response</SPAN><SPAN class="">:</SPAN> aiMessage</SPAN><SPAN> <SPAN class="">}</SPAN>
</SPAN><SPAN><SPAN class="">}</SPAN><SPAN class="">]</SPAN><SPAN class="">;</SPAN></SPAN></CODE></PRE></DIV></DIV><P class="">Now your chatbot remembers context!</P><HR /><H2 id="toc-hId--1400528060">Practical Use Cases</H2><H3 id="toc-hId--1890444572">Use Case 1: Automated Email Replies</H3><DIV class=""><DIV><P class=""><STRONG>Email Trigger</STRONG> → <STRONG>Extract Content</STRONG> → <STRONG>SAP GenAI Hub</STRONG> → <STRONG>Send Reply</STRONG></P></DIV></DIV><P class="">Sample Prompt:</P><DIV class=""><DIV class=""><DIV class=""><DIV class=""><DIV class=""> </DIV></DIV></DIV></DIV><DIV><PRE><CODE><SPAN>Based on the following customer email, generate a professional reply:</SPAN>
<SPAN>Email content:</SPAN>
<SPAN>{{ $json.emailBody }}</SPAN>
<SPAN>Requirements:</SPAN>
<SPAN>1. Professional and friendly tone</SPAN>
<SPAN>2. If it's a price inquiry, mention that detailed pricing will follow within 24 hours</SPAN>
<SPAN>3. If it's a complaint, express apology and commitment to follow up</SPAN></CODE></PRE></DIV></DIV><H3 id="toc-hId--1918774386">Use Case 2: Document Summarization</H3><DIV class=""><DIV><P class=""><STRONG>Webhook</STRONG> → <STRONG>Read PDF</STRONG> → <STRONG>SAP GenAI Hub</STRONG> → <STRONG>Save to Notion</STRONG></P><H3 id="toc-hId--2115287891"> </H3></DIV></DIV><H3 id="toc-hId-1983165900">Use Case 3: Data Analysis Assistant</H3><P class="">Combine SAP APIs to fetch business data and let AI generate analysis reports:</P><P class=""><STRONG>Schedule Trigger</STRONG> → <STRONG>SAP S/4HANA</STRONG> → <STRONG>SAP GenAI Hub</STRONG> → <STRONG>Slack Message</STRONG></P><HR /><H2 id="toc-hId-2080055402">Troubleshooting</H2><H3 id="toc-hId-1590138890"><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>401 Unauthorized</H3><P class=""><STRONG>Cause</STRONG>: Authentication failed</P><P class=""><STRONG>Check</STRONG>:</P><UL class=""><LI>Client ID and Secret are fully copied (including special characters like <CODE>!</CODE>, <CODE>|</CODE>, <CODE>$</CODE>)</LI><LI>Authentication is set to <CODE>Send In Body</CODE></LI><LI>Token URL format is correct (ends with <CODE>/oauth/token</CODE>)</LI></UL><H3 id="toc-hId-1393625385"><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>404 Not Found</H3><P class=""><STRONG>Cause</STRONG>: API path error</P><P class=""><STRONG>Check</STRONG>:</P><UL class=""><LI>Deployment ID is correct</LI><LI>Model is successfully deployed (status shows "Running" in AI Launchpad)</LI><LI>API URL path format: <CODE>/v2/inference/deployments/{id}/chat/completions</CODE></LI></UL><H3 id="toc-hId-1197111880"><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>403 Forbidden</H3><P class=""><STRONG>Cause</STRONG>: Insufficient permissions</P><P
class=""><STRONG>Check</STRONG>:</P><UL class=""><LI><CODE>AI-Resource-Group</CODE> header is set (default is <CODE>default</CODE>)</LI><LI>User has permission to access the deployment</LI></UL><H3 id="toc-hId-1000598375"><span class="lia-unicode-emoji" title=":cross_mark:">❌</span>400 Bad Request</H3><P class=""><STRONG>Cause</STRONG>: Invalid request body</P><P class=""><STRONG>Check</STRONG>:</P><UL class=""><LI>JSON format is correct</LI><LI><CODE>messages</CODE> is an array</LI><LI>Each message has both <CODE>role</CODE> and <CODE>content</CODE></LI></UL><HR /><H2 id="toc-hId-1097487877">Optimization Tips</H2><OL class=""><LI><STRONG>Limit conversation history</STRONG>: Keep only the last N turns to avoid exceeding context window</LI><LI><STRONG>Add error handling</STRONG>: Use n8n's Error Trigger to handle API failures gracefully</LI><LI><STRONG>Use variables</STRONG>: Store API URLs and Deployment IDs in n8n Variables for easier management</LI></OL><HR /><H2 id="toc-hId-900974372">Summary</H2><P class="">In this tutorial, you learned:</P><P class=""><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>How to configure OAuth2 authentication for SAP GenAI Hub in n8n<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>How to use HTTP Request node to call LLM APIs<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>How to build a chatbot with conversation memory<BR /><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>How to troubleshoot common errors</P><P class="">The power of n8n lies in its flexibility — you can seamlessly integrate SAP GenAI Hub with hundreds of other applications (Email, Databases, SAP S/4HANA, etc.) to build truly enterprise-grade AI automation workflows.</P><P class=""><STRONG>Happy Automating! 
<span class="lia-unicode-emoji" title=":rocket:">🚀</span></STRONG></P>2026-02-02T16:33:01.109000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/stop-building-from-scratch-autonomous-ai-agents-for-sap-sales-order/ba-p/14323461Stop Building from Scratch: Autonomous AI Agents for SAP Sales Order Processing2026-02-07T18:50:48.664000+01:00Siarheihttps://community.sap.com/t5/user/viewprofilepage/user-id/84286<DIV class=""><H3 id="toc-hId-1918636765">Why this article?</H3><P>It is great that we are living in the era of AI technologies where humans are still needed to design and automate <span class="lia-unicode-emoji" title=":grinning_face:">😀</span>. But let’s be honest: give it a few more years and no company will ask "Which AI platform should we choose?" That decision will be made by central AI services based on strict country and corporate regulations. The only thing that will matter is security and reliability.</P><P><STRONG>So, why am I writing this? </STRONG>Because I see so many people struggling with custom, built-from-scratch AI automation solutions at the enterprise level. These projects start quickly, but they get stuck just as fast.</P><P>You know, it is like cookies <span class="lia-unicode-emoji" title=":grinning_face:">😀</span>. You can make one in a sandbox; however, to scale that to a corporate production level is a totally different problem.</P><P>If you are still wondering why this article matters: I want to highlight that while humans are still designing automation, <STRONG>speed is everything</STRONG>. Deploying autonomous AI agents on a solid, reliable platform is a massive competitive advantage. 
Those playing around with internal IT projects or relying on System Integrators (SI) to build from zero are being left behind.</P><P>Let me explain why using one simple example: <STRONG>Customer Purchase Orders.</STRONG></P><H3 id="toc-hId-1722123260">The Business Case: The Reality of POs</H3><P>A company wins a deal (congratulations!), and the Customer sends a Purchase Order (PO).</P><P>In an ideal world, this happens via EDI. If you have EDI, you are lucky and you can stop reading. But in the real world, customers send PDFs, scanned images with handwriting, Excel files with macros, or even a screenshot of their laptop screen.</P><P>This <I>shouldn't</I> be an issue. Innovative platforms must handle data extraction from unstructured sources out of the box.</P><P><I>“Training? Fine-tuning??”</I></P><P>I know some of you just thought that <span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:">😊</span>. Yes, training OCR models used to be mandatory. If you haven't faced the pain of training models, be happy, because you are young enough <span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:">😊</span>. Today, this must work without that hassle.</P><H3 id="toc-hId-1525609755">The Main Challenge: Master Data Mapping</H3><P>Extracting text is easy. The <I>real</I> challenge is my favorite one: <STRONG>Master Data Mapping.</STRONG></P><P>How do we robustly and reliably read the customer’s product description and map it to the correct Master Data Object in our SAP system? And how do we do it without heavy development?</P><P><STRONG>The Solution: SAP Master Data Vectorization</STRONG></P><P>We don't need custom code; we need <STRONG>Vectorization</STRONG>.</P><P>Generative AI can write SQL queries, sure. But think about it: how many SQL queries would the AI have to guess to find a material that is <I>semantically</I> similar but <I>written</I> differently? Infinite. 
Or maybe a few fewer if you are lucky and the description contains a standard term like "A4 Paper" <span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:">😊</span>.</P><P>When AI looks for a material master in <STRONG>vectorized data</STRONG> (rather than a database table), it reads by <I>sense</I>, not just by keywords. This allows it to reliably find the right SAP object even if the customer's description is vague.</P><P>I know only one SAP-certified Business AI Platform that has this Vectorization feature in its standard set: <STRONG>Skybuffer AI</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 01.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369831iAA8A4870CD9D26D4/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="Skybuffer AI on Sales Orders OCR AI Bridge Pic 01.png" alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 01.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <I>Pic.1 SAP Master Data Vectorization Setup</I></P><P><I><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 02.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369835i01D95270E03364EE/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI on Sales Orders OCR AI Bridge Pic 02.png" alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 02.png" /></span></I></P><P class="lia-align-center" style="text-align: center;"> <I>Pic.2 Example of Material Master Similarity-Based Search</I></P><H3 id="toc-hId-1329096250">The Implementation: Autonomous Agents in Action</H3><P>This is the essential part: stop playing in the sandbox.
Take a platform and create a simple autonomous AI agent.</P><P>In this example, I am using <STRONG>Skybuffer AI</STRONG> because it handles the heavy lifting out of the box:</P><UL><LI><P><STRONG>Vectorizes</STRONG> SAP master data for semantic search.</P></LI><LI><P><STRONG>Reads Emails</STRONG> (including complex attachments and OCR).</P></LI><LI><P><STRONG>Safety First:</STRONG> Uses Mailboxes as a staging area (no messy data replication).</P></LI><LI><P><STRONG>ABAP Control:</STRONG> It doesn't let the AI "guess" how to create data. It passes structured JSON to an ABAP class/BAPI for safe execution.</P></LI><LI><P><STRONG>No Code:</STRONG> configured entirely in SAP Fiori.</P></LI></UL><H4 id="toc-hId-1261665464">The Flow</H4><P>A Skybuffer AI agent acts effectively as an automated employee. You can schedule it, give it tools, and let it run.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 03.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369837iEA2CAE541EB823C1/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI on Sales Orders OCR AI Bridge Pic 03.png" alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 03.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <I>Pic.3 Scheduling Autonomous AI Agent</I></P><P><STRONG>Step 1: The Input</STRONG></P><P>The Agent connects to a Mailbox. 
It reads the email, extracts the attachment data, and places it into the Agent's memory.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 04.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369841i1AE6681F31CF58C5/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI on Sales Orders OCR AI Bridge Pic 04.png" alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 04.png" /></span></P><P class="lia-align-center" style="text-align: center;"> <I>Pic.4 Flow inside the Tool of Skybuffer AI Action Server</I></P><P><STRONG>Step 2: The Brain (Generative AI)</STRONG></P><P>We connect the Generative AI and give it guidelines.</P><P><I>Note: The vectorized SAP master data is mapped directly to this action.</I></P><P><I><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 05.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369842iB05710840961F1EB/image-size/large?v=v2&px=999" role="button" title="Skybuffer AI on Sales Orders OCR AI Bridge Pic 05.png" alt="Skybuffer AI on Sales Orders OCR AI Bridge Pic 05.png" /></span></I></P><P class="lia-align-center" style="text-align: center;"> <I>Pic.5 Generative AI Action of Skybuffer AI Action Server</I></P><P>The AI creates a JSON structure based on our request. <STRONG>Crucially</STRONG>, we do not ask the AI to update SAP directly. We hand that JSON over to an SAP ABAP class. 
This ensures the transactional data is created correctly, without hallucinations.</P><P><STRONG>Step 3: The Output</STRONG></P><P>We update the email in the Mailbox with the status and send a confirmation back to the customer.</P><P>Everything runs either in your SAP HANA On-Premise shell (directly in your network) or on your SAP BTP tenant.</P><H3 id="toc-hId-936069240">Conclusion</H3><P>Autonomous AI agents are no longer "future tech"; they are practical solutions available now.</P><P>By choosing a Business AI automation platform rather than building from scratch, your company gets the best of Generative AI and the SAP backend immediately.</P><P>It is strange that so many companies are still "playing" with in-house development. You wouldn't build your own Sales Order Management system from scratch instead of using SAP, right? <span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:">😊</span> That would be weird. Yet, companies are giving huge budgets to internal teams to build AI wrappers.</P><P>You don't need a giant team. You need a reliable solution, speed, and flexibility. Just look at <STRONG>Cursor</STRONG>: they revolutionized coding with roughly 60 employees before their massive 900M funding round <span class="lia-unicode-emoji" title=":winking_face:">😉</span>.</P><P><STRONG>Be quick.
Be autonomous.</STRONG></P></DIV>2026-02-07T18:50:48.664000+01:00https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-sap/the-combined-power-of-sap-joule-and-ai-for-the-rise-with-sap-methodology/ba-p/14324150The Combined Power of SAP Joule and AI for the RISE with SAP Methodology2026-02-11T08:00:00.018000+01:00Manuel_Lederlehttps://community.sap.com/t5/user/viewprofilepage/user-id/1456211<P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Manuel_Lederle_0-1770631008335.png" style="width: 1014px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/370242i8C696E7CAB76644D/image-dimensions/1014x304/is-moderation-mode/true?v=v2" width="1014" height="304" role="button" title="Manuel_Lederle_0-1770631008335.png" alt="Manuel_Lederle_0-1770631008335.png" /></span></P><P><FONT color="#FF6600"><EM>Part five of the five-part series, “RISE with SAP Methodology”</EM></FONT></P><P>AI is everywhere right now. Not just in tech headlines or innovation labs, but also in boardrooms, architecture discussions, and project plans. And if you work with SAP solutions, you’ve likely noticed that most road maps now mention it, too.</P><P>Many of us have seen “the next big thing” come and go. New tools, new buzzwords, and big promises have often delivered limited practical impact. So, the real question isn’t whether AI exists. It’s “where will AI actually make a difference?”</P><P>Organizations want technology that helps people work better, make smarter decisions faster and cut through complexity in day-to-day work with real-world pressures and consequential deadlines. AI should not be treated as a standalone innovation topic, but as a capability embedded across products, processes and user experiences.</P><P>That’s exactly the direction SAP’s new strategy is taking. 
At the heart of this strategy sits a simple idea: bring applications, data, and AI together in SAP Business Suite, so intelligence becomes part of how work naturally happens, as shown in Figure 1.<BR /><BR /></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Manuel_Lederle_1-1770631053463.png" style="width: 935px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/370244iBD4526B8E132D1AA/image-dimensions/935x527/is-moderation-mode/true?v=v2" width="935" height="527" role="button" title="Manuel_Lederle_1-1770631053463.png" alt="Manuel_Lederle_1-1770631053463.png" /></span></P><P><EM>Figure 1: The Future of AI in SAP Business Suite</EM><BR /><BR /></P><P><FONT size="5"><STRONG>Joule vs. AI Capabilities: Understanding the Difference</STRONG></FONT></P><P>When people talk about AI in SAP environments, two concepts often get mixed together: AI capabilities and the Joule solution.<BR /><BR />While connected, each one plays vastly distinct roles:</P><UL><LI><STRONG>AI capabilities</STRONG> form the foundation and are the engine, quietly running in the background. Powerful, but often invisible, they analyze huge volumes of data, predict risks and generate recommendations. They include technologies such as machine learning, predictive analytics and automation embedded across SAP solutions.<BR /><BR /></LI><LI><STRONG>Joule</STRONG><SPAN>, on the other hand, is the </SPAN>interaction layer<SPAN> between people and AI and facilitates one-on-one conversations with the user. This is the moment when intelligence becomes accessible. Instead of searching through lots of guidelines, dashboards or compiling reports, users simply ask questions, such as “What should I focus on next in our transformation project?” or “Where are we off track?”. Users can even just say “Summarize the risks”. And in return, they get answers immediately and in context. 
(see Figure 2 below)</SPAN></LI></UL><P>Together, AI capabilities and Joule form a complementary model where AI feels natural, not abstract. The experience is remarkably similar to talking to a knowledgeable colleague who already understands the system landscape. There’s deep intelligence behind the scenes and a simple, human interaction up front.<BR /><BR /></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Manuel_Lederle_2-1770631053478.png" style="width: 932px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/370245i5D4D38942A78009F/image-dimensions/932x424?v=v2" width="932" height="424" role="button" title="Manuel_Lederle_2-1770631053478.png" alt="Manuel_Lederle_2-1770631053478.png" /></span></P><P><EM>Figure 2: Interacting with Joule as the new AI user experience</EM><BR /><BR /></P><P><FONT size="5"><STRONG>Joule: A practical entry point to AI</STRONG></FONT></P><P>As shown in Figure 2 above, one of SAP’s strategic priorities is to offer a new AI user experience through Joule. This AI-enabled assistant introduces a simple, practical way to bring AI into transformation work.<BR /><BR />While delivering many AI capabilities across the SAP solution portfolio, Joule goes beyond using AI as another analytics tool or dashboard by running an interaction layer that allows customers to work with AI through a single, conversational interface.</P><P>This changes how teams engage with information during transformation projects or in day-to-day business. Instead of navigating multiple tools, reports, or status decks, users can interact with Joule in natural language—asking questions, requesting summaries, or starting conversations on how to proceed with tasks alongside the transformation project (see Figure 3).</P><P>Behind the scenes, Joule connects to relevant data across SAP solutions and presents answers in a way that is easy to understand and easy to act on. In practice, this lowers the barrier to insight.
Instead of searching for information, teams start working interactively with it.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Manuel_Lederle_3-1770631053485.png" style="width: 935px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/370243iFD35DA02D91DAF7A/image-dimensions/935x490/is-moderation-mode/true?v=v2" width="935" height="490" role="button" title="Manuel_Lederle_3-1770631053485.png" alt="Manuel_Lederle_3-1770631053485.png" /></span></P><P><EM>Figure 3: Using Joule interactively during your transformation work<BR /><BR /></EM></P><P><FONT size="5"><STRONG>Strategic enablers of the transformation journey</STRONG></FONT></P><P>As mentioned before, AI is embedded directly into SAP solutions and workflows to help people navigate complexity. Users can surface relevant insights, identify patterns, and reduce manual effort.</P><P>Take, for example, transforming an ERP landscape. This project can easily become complex, depending on customer requirements. The volume of information that teams must use at the same time—including process data, system insights, project status, testing results, risks, and dependencies—is often spread across tools and documents. Even with a clear methodology in place, the effort required to connect these dots manually has become a challenge of its own.</P><P>RISE with SAP Methodology already provides the structure needed: integrated tooling, a standardized framework, and expert guidance to support proper execution discipline. However, embedding AI in the methodology adds a new layer of intelligence at the point of action, helping teams understand what matters, where risks are emerging, and which decisions require attention (see Figure 4).</P><P>In this case, AI reduces the load that comes with large-scale change.
When applied correctly, it supports people in making better and faster decisions, especially when time and attention are limited.</P><P>Meanwhile, Joule is used whenever clarity is needed (see Figure 4). This opens up a new way of working with guidance, insights, and execution support throughout the transformation journey when using <SPAN>RISE with SAP Methodology, serving as an</SPAN> active companion.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Manuel_Lederle_4-1770631053491.png" style="width: 929px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/370246iC26B772F2324433A/image-dimensions/929x475/is-moderation-mode/true?v=v2" width="929" height="475" role="button" title="Manuel_Lederle_4-1770631053491.png" alt="Manuel_Lederle_4-1770631053491.png" /></span></P><P><EM>Figure 4: Joule and AI as an integral part of RISE with SAP Methodology<BR /><BR /></EM>Giving an end-to-end perspective alongside the transformation, Joule allows project teams to handle time-consuming and complex tasks faster and with greater ease. This includes searching for information and guidelines that can help them start and review extensive assessments and analyses.</P><P>As planning and design activities progress, Joule enables teams to stay oriented. It supports prioritization, highlights dependencies, and helps validate assumptions before decisions become difficult to reverse. AI does not replace workshops or experience, but it strengthens them with data-driven insight.</P><P>During execution, when the build and testing phases accelerate, Joule supports transparency and focus. AI capabilities detect patterns in defects, progress, and quality signals, while Joule summarizes what matters most. So instead of consolidating information manually, teams can react early.</P><P>Joule supports readiness discussions as go-live approaches. 
By correlating information across testing, migration, and operational checks, AI helps teams assess whether they are truly ready based on evidence, not gut feeling.</P><P>After go-live, AI continues to support continuous improvement. Usage patterns, incidents, and performance signals are analyzed in the background. Joule helps connect teams to those insights to understand where to improve next and how operational issues relate back to earlier transformation decisions.<BR /><STRONG><BR /><FONT size="5">Practical value over automation</FONT></STRONG></P><P>The strength of AI in SAP transformations does not lie in automation alone.<BR />It lies in reducing the cognitive load that builds up during the transformation project.</P><P>By combining all elements of the RISE with SAP Methodology:</P><UL><LI><EM>A clear path (with the Standardized Framework)</EM></LI><LI><EM>Integrated tooling (with the Integrated Toolchain)</EM></LI><LI><EM>End-to-end execution support (with the Expert Guidance), and</EM></LI><LI><EM>AI-driven assistance through Joule and specific AI capabilities</EM></LI></UL><P>customers are well equipped to move through their transformation with focus, confidence, and consistency.</P><P>AI does not replace people or experience.<BR />It supports both – at the right moment, in the right context.</P><P><EM>To learn more about<SPAN> </SPAN></EM><A href="https://www.sap.com/products/erp/rise/methodology.html" target="_blank" rel="noopener noreferrer"><EM>RISE with SAP Methodology</EM></A><EM>, read<SPAN> </SPAN></EM><A href="https://dam.sap.com/mac/app/p/pdf/asset/preview/HGGANyW?ltr=a&rc=10&doi=SAP1246926" target="_blank" rel="noopener noreferrer"><EM>our in-depth overview</EM></A><EM>.</EM></P><P class="lia-align-left" style="text-align : left;"><STRONG><FONT color="#000000"><FONT color="#FF9900">-<FONT 
color="#FF6600">-------------------------------------------------------------------------------------------------------------------------------------------------</FONT></FONT></FONT></STRONG></P><P><FONT size="4"><STRONG><FONT color="#FF6600">RISE with SAP Methodology - Blog-series </FONT><BR /></STRONG></FONT></P><P>Discover the 5-part blog series:</P><UL><LI>Blog 1/5 (December 3rd, 2025)<BR /><A href="https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-sap/rise-with-sap-methodology-a-practical-guide-to-modernizing-with-confidence/ba-p/14277250" target="_blank"><FONT color="#FF6600"><STRONG>RISE with SAP Methodology: A practical Guide to Modernizing with Confidence</STRONG></FONT></A></LI><LI>Blog 2/5 (December 10th, 2025)<BR /><A href="https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-sap/deep-dive-inside-the-framework-of-rise-with-sap-methodology/ba-p/14284060" target="_blank"><FONT color="#FF6600"><STRONG>Deep Dive: Inside the framework of RISE with SAP Methodology</STRONG></FONT></A></LI><LI>Blog 3/5 (January 14th, 2026)<BR /><FONT color="#FF6600"><STRONG><A href="https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-sap/the-integrated-toolchain-in-a-nutshell/ba-p/14305651" target="_blank">The Integrated Toolchain in a Nutshell</A></STRONG></FONT></LI><LI>Blog 4/5 (January 28th, 2026)<BR /><STRONG><FONT color="#FF6600"><A href="https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-sap/expert-guidance/ba-p/14308680" target="_self">Expert Guidance: Your Guided Transformation Accelerator</A></FONT><BR /></STRONG></LI><LI>Blog 5/5 (February 11th, 2026)<BR /><FONT color="#FF6600"><A href="https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-sap/the-combined-power-of-sap-joule-and-ai-for-the-rise-with-sap-methodology/ba-p/14324150" target="_self"><STRONG>The Combined Power of SAP Joule and AI for the RISE with SAP Methodology</STRONG></A></FONT></LI></UL><P 
class="lia-align-left" style="text-align : left;"><EM> <STRONG><FONT color="#000000"><FONT color="#FF9900">-<FONT color="#FF6600">-------------------------------------------------------------------------------------------------------------------------------------------------</FONT></FONT></FONT></STRONG></EM></P><P class="lia-align-left" style="text-align : left;"><FONT color="#000000"><EM>#clouderpprivate #s4hanacloud #sap #risewithsap #transformation #risewithsapmethodology</EM></FONT></P>2026-02-11T08:00:00.018000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/shadow-ai%E5%AF%BE%E7%AD%96-sap-ai-core-%E3%81%A7%E5%AE%9F%E7%8F%BE%E3%81%99%E3%82%8B%E5%AE%89%E5%85%A8%E3%81%AAai%E6%B4%BB%E7%94%A8%E7%92%B0%E5%A2%83/ba-p/14327553Countering Shadow AI: A Secure AI Environment with SAP AI Core2026-02-13T06:12:30.136000+01:00KentaroAraihttps://community.sap.com/t5/user/viewprofilepage/user-id/472646<H2 id="toc-hId-1789674142">1. How It Works: SAP Acts as an Intermediary and Hides Credentials</H2><P>When you use ChatGPT as an individual, data is sent directly from your PC to OpenAI's servers. In SAP's environment, by contrast, SAP AI Core and the Generative AI Hub act as an intermediary (proxy). Applications access a private endpoint inside SAP BTP (a secure, SAP BTP-managed connection point that cannot be reached directly from the public internet), so they never communicate directly with external AI services.</P><P>In addition, the SAP BTP Connectivity Service lets you centrally manage connection URLs and credentials (Client ID/Secret, and so on). This removes the need to write sensitive information into application code, allowing developers to use large language models (LLMs) more safely. For details on the Connectivity Service, see the following URL: <A href="https://help.sap.com/docs/connectivity/sap-btp-connectivity-cf/connectivity?locale=en-US" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/connectivity/sap-btp-connectivity-cf/connectivity?locale=en-US</A></P><H2 id="toc-hId-1593160637">2. 
Technical Details: Protecting Input Data Automatically with "Orchestration"</H2><P>SAP AI Core provides a capability called "Orchestration." It can run several processing steps for each large language model (LLM) request in an integrated way, such as checking the input before it is sent and filtering the output after it is returned.</P><P>Two points are particularly important as countermeasures against Shadow AI:</P><UL><LI><STRONG>Masking of personal data</STRONG>: Even if an employee accidentally enters personal data (such as an email address or a name), Orchestration automatically detects it immediately before transmission to the AI and converts it into masked characters. This prevents personal data from leaking to external models.</LI><LI><STRONG>Blocking harmful input</STRONG>: Input is checked for discriminatory or violent content, and anything that violates company policy is blocked.</LI></UL><P>The concrete configuration steps (including sample JSON parameters) are explained in detail in the following blog post of mine. If you are considering an implementation, please have a look.</P><UL class=""><LI><STRONG><A class="" href="https://community.sap.com/t5/technology-blog-posts-by-sap/sap-ai-core%E3%81%AB%E3%81%8A%E3%81%91%E3%82%8Borchestration%E6%A9%9F%E8%83%BD%E3%81%A8%E3%82%BB%E3%82%AD%E3%83%A5%E3%83%AA%E3%83%86%E3%82%A3%E8%A8%AD%E5%AE%9A%E3%81%AE%E5%AE%9F%E8%A3%85%E3%82%AC%E3%82%A4%E3%83%89/ba-p/14325729" target="_blank">Implementation Guide for Orchestration and Security Settings in SAP AI Core</A></STRONG></LI></UL><H2 id="toc-hId-1396647132">3. Prompt Lifecycle Management: Prompt Registry</H2><P>Another problem with Shadow AI is that "nobody knows who is using which prompts, so quality and results are inconsistent." The Prompt Registry solves this.</P><P>It is a service that manages prompts centrally, not as mere text but as a "version-controlled library." Because it covers the entire lifecycle from design to runtime, consistency and reusability across applications are guaranteed. The concept is also explained in detail in <A class="" href="https://learning.sap.com/courses/solve-your-business-problems-using-prompts-and-llms-in-sap-generative-ai-hub/managing-prompts-with-the-prompt-registry-and-templates" target="_blank" rel="noopener noreferrer">SAP Learning: Managing Prompts with the Prompt Registry and Templates</A>.</P><P>Technically, prompts can be managed and consumed in two ways:</P><OL><LI><STRONG>Declarative API (Git integration)</STRONG>: Prompt definitions in a Git repository (specified via <CODE>repo_name</CODE>) are synchronized automatically and managed as part of the code.</LI><LI><STRONG>Imperative API (direct operations)</STRONG>: Templates are fetched and updated via the API by specifying a template ID or a combination of name, scenario, and version.</LI></OL><P>This makes it possible to enforce governance so that employees use only safe, company-approved, version-controlled prompts (Approved Prompts) rather than arbitrary ones. For developers, the implementation guide at <A class="" 
href="https://sap.github.io/ai-sdk/docs/js/ai-core/prompt-registry" target="_blank" rel="noopener nofollow noreferrer">SAP Cloud SDK for AI: Prompt Registry</A> is a useful reference.</P><H2 id="toc-hId-1200133627">4. Summary</H2><P>Preventing Shadow AI requires not only rules but also technical solutions.</P><P class="">By using SAP AI Core / Generative AI Hub as an intermediary and managing credentials with the Destination Service, you can establish a secure connection path. Combining this with data protection through Orchestration and usage governance through the Prompt Registry enables companies to move forward with AI adoption with confidence.</P>2026-02-13T06:12:30.136000+01:00https://community.sap.com/t5/technology-blog-posts-by-members/on-the-rpt-1-podcast-practical-ai-for-sap-data-demos-lessons-and-what-s/ba-p/14299887On the RPT-1 podcast: practical AI for SAP data (demos, lessons, and what’s next) 🎙️2026-02-13T10:05:44.365000+01:00amitlalmicrosofthttps://community.sap.com/t5/user/viewprofilepage/user-id/686206<P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="amitlalmicrosoft_0-1770787059764.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371298iC27BEDA06AE30A67/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="amitlalmicrosoft_0-1770787059764.png" alt="amitlalmicrosoft_0-1770787059764.png" /></span></P><P><BR />If you work in SAP, you already know the truth: <STRONG>our world is rows, columns, master data, transactional history, and process context</STRONG> — not creative writing. In other words, SAP is not a “write me a poem” platform. 
That’s why I really enjoyed this conversation on <STRONG>SAP RPT-1</STRONG>, where we went beyond “AI hype” and into <STRONG>what’s genuinely useful for SAP practitioners</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="amitlalmicrosoft_1-1770787223523.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371299iE2D23C3CA07772DD/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="amitlalmicrosoft_1-1770787223523.png" alt="amitlalmicrosoft_1-1770787223523.png" /></span></P><P><STRONG>RPT-1 is built for structured enterprise data.</STRONG> In plain terms: it’s designed to reason over the kinds of datasets we live in every day—think finance, supply chain, procurement, operations—where accuracy, traceability, and consistency matter more than fancy wording.</P><H3 id="toc-hId-1896654919">What you’ll get from the episode</H3><UL><LI><P><STRONG>A practical view of “AI for SAP data”</STRONG> — where it fits (and where it doesn’t)</P></LI><LI><P><STRONG>Real demos and prototypes</STRONG> focused on outcomes, not theory</P></LI><LI><P>How to think about <STRONG>predictive + analytical use cases</STRONG> that start small but scale (without turning into a 6-month science project)</P></LI><LI><P>A clear message: <STRONG>move from chatbot experiments to measurable business impact</STRONG></P></LI></UL><P class="lia-align-center" style="text-align: center;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="amitlalmicrosoft_2-1770787318215.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371300i00671C6DE0A21746/image-size/large/is-moderation-mode/true?v=v2&px=999" role="button" title="amitlalmicrosoft_2-1770787318215.png" alt="amitlalmicrosoft_2-1770787318215.png" /></span></P><H3 id="toc-hId-1700141414">Why this matters for the SAP community</H3><P>Most AI discussions ignore a key point: <STRONG>SAP 
value is locked in structured data + process discipline</STRONG>. The moment you align AI with that reality—tabular signals, business semantics, governance—you shift from “cool demo” to <STRONG>real operational leverage</STRONG>.</P><P>If you’re exploring AI for SAP and you want something that’s grounded in <STRONG>enterprise reality</STRONG> (data quality, governance, repeatability, outcomes), this episode is a solid watch:</P><P><span class="lia-unicode-emoji" title=":movie_camera:">🎥</span>Watch here my podcast with Holger and Goran from Microsoft:<BR /><A class="" href="https://youtu.be/CbUDRgEO0yI?si=J4vE-BZ0Xig_srYo" target="_new" rel="noopener nofollow noreferrer">https://youtu.be/CbUDRgEO0yI?si=J4vE-BZ0Xig_srYo</A></P><P><div class="video-embed-center video-embed"><iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FCbUDRgEO0yI%3Ffeature%3Doembed&display_name=YouTube&url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DCbUDRgEO0yI&image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FCbUDRgEO0yI%2Fhqdefault.jpg&type=text%2Fhtml&schema=youtube" width="640" height="360" scrolling="no" title="#276 - ToW Exploring SAP RPT-1 (Amit Lal) | SAP on Azure Video Podcast" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe></div><BR /><BR />Please share your thoughts and feedback! 
<BR /><BR />Cheers,<BR /><A href="https://www.amit-lal.com" target="_blank" rel="noopener nofollow noreferrer">Amit Lal</A><BR /><BR /></P>2026-02-13T10:05:44.365000+01:00https://community.sap.com/t5/business-transformation-blog-posts/sap-testing-in-2026-agentic-ai-and-quality-engineering-for-s-4hana-amp-btp/ba-p/14329981SAP Testing in 2026: Agentic AI and Quality Engineering for S/4HANA & BTP2026-02-18T13:48:12.256000+01:00venkatesanmhttps://community.sap.com/t5/user/viewprofilepage/user-id/2270900<P><FONT size="5">Quality Engineering Transformation in S/4HANA and BTP Programs</FONT></P><P>Quality engineering integrated with S/4HANA project architecture is the new norm. Companies are no longer simply converting to S/4HANA; they are reinventing their finance, logistics, manufacturing, and procurement processes while transitioning custom logic to BTP and running side-by-side applications. With each architectural choice comes the added complexity of testing for data integrity, service integration, authorization models, and performance characteristics.</P><P>In this context, testing begins well before a system is built. Quality engineers are involved in the solution design phase to assess migration paths, extensibility patterns, and potential integration risks. Data conversion strategies should be validated not only for their structural integrity, but also for their business relevance as they relate to S/4HANA’s simplified data model. This can be particularly challenging with historical data. During the build phase, regular configuration changes and transport movements require continuous regression testing rather than milestone-based testing.<BR /><BR />Several structural changes define SAP quality engineering at this stage:</P><UL><LI>Testing scope is increasingly derived from business criticality rather than transaction count. 
Processes with financial exposure or regulatory impact receive deeper validation.</LI><LI>BTP extensions require testing beyond SAP GUI transactions. API behavior, event messaging, and cloud security controls must be validated under real usage conditions.</LI><LI>Performance and stability testing have shifted earlier in the lifecycle, particularly for hybrid scenarios where S/4HANA interacts with external platforms.</LI></UL><P>Within this framework, agentic SAP testing teams are emerging as a practical response to scale. Autonomous systems analyze all configuration changes and recommend which specific tests should be run to achieve maximum coverage. The SAP Artificial Intelligence team partners with the quality engineers to ensure that the intelligence used in testing aligns with each test's business intent. This leads to testing that adapts its scope rather than exhaustively executing every test, resulting in reduced execution overhead and improved reliability of the S/4HANA and BTP landscapes.</P><P><FONT size="5">Agentic AI and the Evolution of SAP Testing Models</FONT></P><P>The adoption of agentic AI introduces a fundamentally different operating model for SAP testing. Traditional automation systems have either been defined by static logic and pre-written scripts or operated in a 'set and forget' manner. Agentic AI, on the other hand, is built to learn from outcomes, identify the patterns associated with those outcomes, and act on its own within a pre-defined set of constraints. Automated processes that rely on predetermined scripts to run a fixed series of tests do not work well in SAP systems, as these systems are constantly changing and closely interrelated.</P><P>AI agents in SAP testing environments are capable of monitoring transport activity, configuration updates, and integration changes. When a test fails, these agents evaluate the execution context rather than simply logging an error. 
In this case, they can determine whether the test failure is due to data inconsistencies, authorization issues, or delays caused by upstream integrations. By reasoning about the error context, agents prioritize and classify defects far more accurately, reducing the need for rework.<BR /><BR />Common applications of agentic AI in SAP quality engineering include:</P><UL><LI>Dynamic regression planning where agents adjust test scope based on recent system changes rather than executing full regression cycles.</LI><LI>Intelligent failure clustering that groups related defects and identifies systemic issues instead of isolated symptoms.</LI><LI>Context-aware test data generation that aligns with real business scenarios, improving validation depth for complex S/4HANA processes.</LI></UL><P>These capabilities also reshape team structures. Agentic SAP teams operate with a shared responsibility model, where quality engineers define validation objectives, and AI systems handle execution-level decisions. SAP AI teams provide governance, ensuring transparency and auditability of autonomous actions. Over time, these systems accumulate knowledge from prior releases, enabling predictive insights that anticipate failure patterns before they impact production.</P><P>As enterprises evolve to develop agentic AI services, there’s an ever-increasing demand from companies that want to create scalable solutions to incorporate into their existing SAP landscapes, while also minimizing the amount of manual work. If properly implemented, SAP testing programs may realize tangible improvements in release confidence, defect containment, and operating efficiency, all without sacrificing an enterprise’s ability to maintain control.</P><P><FONT size="5">Conclusion</FONT></P><P>The defining characteristics of SAP testing in 2026 are depth, continuity, and intelligence. 
The quality engineering practices for S/4HANA and BTP programs require an understanding of both the technical architecture and the business context. Testing has moved beyond being a final check to become an inherent part of the engineering discipline, influencing design, observing execution, and facilitating continual improvement in SAP environments.</P><P>The rise of agentic AI in SAP testing represents a natural progression in this evolution. By enabling autonomous reasoning and adaptive execution, AI agents within SAP ecosystems enable organizations to manage the complexity of their operations without increasing testing overhead. As SAP AI teams add these capabilities to SAP, and enterprises use agentic AI services, quality engineering becomes predictive rather than merely reactive, and aligned with overarching enterprise transformation goals. In this environment, AI used for SAP testing is no longer an experiment but an operational requirement for maintaining reliable, scalable SAP systems.</P>2026-02-18T13:48:12.256000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/iso-42001-certification-the-new-benchmark-for-ai-vendor-selection/ba-p/14331515ISO 42001 Certification: The New Benchmark for AI Vendor Selection2026-02-19T08:31:26.737000+01:00KentaroAraihttps://community.sap.com/t5/user/viewprofilepage/user-id/472646<H2 id="toc-hId-1790418795">Introduction: AI Governance Moves from "Guidelines" to "Certification"</H2><P>Since the publication of <STRONG>ISO/IEC 42001 (AI Management System)</STRONG> in December 2023 [1], the landscape of AI governance has evolved steadily. Now, entering 2026, we are witnessing a phase where major certification bodies worldwide have operationalized their audit services, and leading technology vendors are increasingly acquiring this certification. 
AI governance, which began as a collection of local guidelines and ethical codes, is now adopting a global benchmark: "certification (proof) by a third party."</P><P>For SaaS/PaaS vendors like SAP, the questions we receive from customers have shifted. Beyond the traditional functional question of "What can this AI do?", we are now asked non-functional and governance-related questions with equal weight, such as "Is this AI safe?" and "How is the training data managed?"</P><P>In this article, from a security and compliance perspective, I will explain the practical implications of the ISO 42001 certification that SAP has acquired and how it can help reduce AI adoption risks for user companies.</P><H2 id="toc-hId-1593905290">1. Why ISO 42001 is the "Common Language of Trust"</H2><P>Since the widespread adoption of generative AI, companies have faced significant challenges in managing risks such as the "black box" nature of AI (lack of transparency in decision-making) and "hallucinations" (generation of incorrect information). While traditional security standards like ISO/IEC 27001 are essential for information security, they were not designed to fully cover the specific nuances of the AI lifecycle such as data bias, continuous learning, and ethical considerations.</P><P>This is why <STRONG>ISO/IEC 42001:2023</STRONG> was introduced as the first international management system standard specifically for AI.</P><P>A key feature of this standard is that it does not certify the accuracy of a specific AI model at a single point in time. Instead, it validates whether an organization has established a "system to continuously identify, manage, and improve AI risks (Management System)." [2] Holding ISO 42001 certification serves as objective proof that a company has built and operates a responsible management structure for its AI systems.</P><H2 id="toc-hId-1397391785">2. 
SAP's Certification and Scope of Application</H2><P>SAP announced the acquisition of ISO 42001 certification for its key AI services in 2025.</P><P>What is important here is the scope of application. This certification covers core services such as <STRONG>Joule</STRONG> (our generative AI assistant) and <STRONG>SAP AI Core</STRONG> (our AI development and execution platform). This indicates that the applications and development environments utilizing these foundations are managed under a system based on international standards.</P><H3 id="toc-hId-1329960999">Understanding the Difference: "Self-Attestation" vs. "Third-Party Certification"</H3><P>There are two primary approaches to demonstrating conformity to AI governance: "Self-attestation" by the supplier and "Third-party certification" by an independent body. SAP has chosen the path of "Third-party certification," which involves rigorous audits by an independent organization, to ensure a higher level of objectivity.</P><P>For security and legal professionals, this distinction is significant. When adopting a service based solely on self-attestation, user companies often need to send detailed security checklists to vendors and scrutinize their responses—a process that consumes considerable time and resources. However, when a vendor holds ISO 42001 certification, the certification itself serves as <STRONG>objective </STRONG>evidence of governance, which can significantly streamline the Vendor Risk Management (VRM) process.</P><H2 id="toc-hId-1004364775">3. Practical Benefits for User Companies</H2><P>What are the practical benefits for user companies utilizing SAP's certified environment? I see two main advantages.</P><H3 id="toc-hId-936933989">1. 
Clarifying Roles in Supply Chain Risk Management</H3><P>If a user company were to build an AI service from scratch and aim for ISO 42001 compliance, the effort would be substantial, requiring resources for AI-specific risk assessments and continuous monitoring systems.</P><P>By adopting a certified platform (PaaS/SaaS) like SAP Business AI, companies can leverage the vendor's established governance for the infrastructure layer. This concept is similar to the "Shared Responsibility Model" in cloud security.</P><P>Specifically, SAP maintains governance over the training environment and infrastructure (Security <EM>of</EM> the AI). This allows user companies to focus their resources on managing the upper layers directly linked to their specific business values and use cases, such as data quality and ethical decision-making (Security <EM>in</EM> the AI). This division of labor can help shorten the lead time for AI adoption while maintaining reliability across the supply chain.</P><H3 id="toc-hId-740420484">2. Supporting Accountability to Stakeholders</H3><P>In addition to shareholders and regulators, business partners are increasingly conducting strict due diligence regarding the safety and transparency of AI integrated into business operations.</P><P>In such scenarios, being able to demonstrate that "we utilize SAP's platform, which complies with the international standard ISO 42001 and is audited by a third party," provides a strong, objective basis for trust. This certification functions as valid evidence to help fulfill accountability requirements to stakeholders.</P><P>Furthermore, as legal frameworks like the EU AI Act are implemented globally, compliance is becoming a critical management issue. While ISO 42001 certification does not automatically guarantee legal compliance, it is widely recognized as a framework that supports alignment with these regulations. 
Leveraging a certified foundation can help streamline the complex process of proving compliance, allowing companies to direct limited resources toward creating business value rather than just defensive measures.</P><H2 id="toc-hId-414824260">Conclusion: Criteria for Selecting AI in the Future</H2><P>AI technology evolves daily, but the importance of the trust that underpins it remains constant. As technology advances, the value of transparency and governance only increases.</P><P>As a compliance professional, I want to emphasize that for SAP, acquiring ISO 42001 is not a final goal but a milestone. It is objective proof that we have established a robust governance structure to continuously provide an "environment where customers can integrate AI into the core of their business with confidence."</P><P>I recommend incorporating "ISO 42001 certification status" into your functional comparison table when selecting AI solutions. This addition will serve as a definitive step toward realizing long-term, stable, and trustworthy AI utilization.</P><H3 id="toc-hId-347393474">References</H3><UL><LI><P><A title="null" href="https://www.sap.com/about/trust-center/certification-compliance.html" target="_blank" rel="noopener noreferrer">SAP Trust Center: Certifications and Compliance</A></P></LI></UL>2026-02-19T08:31:26.737000+01:00https://community.sap.com/t5/technology-blog-posts-by-sap/setting-up-llm-model-connection-in-open-webui-using-hyperspace-proxy/ba-p/14331949Setting up LLM Model Connection in Open WebUI using Hyperspace Proxy2026-02-19T16:41:40.753000+01:00TianranWeihttps://community.sap.com/t5/user/viewprofilepage/user-id/1904179<H3 id="toc-hId-1919505455">Introduction</H3><P>Open WebUI provides a powerful, self-hosted interface for interacting with Large Language Models (LLMs). When combined with SAP's Hyperspace Proxy, you can access multiple enterprise-grade AI models securely through a unified gateway. 
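For example, once the proxy and the Open WebUI connection described later are in place, the same OpenAI-compatible endpoint that Open WebUI talks to can also be exercised from a short script. The sketch below is illustrative only: it assumes the proxy's default local port 6655 from the quick start, a placeholder API key (use the real one printed in the proxy's terminal output), and the standard OpenAI-style <CODE>/chat/completions</CODE> path.

```python
# Hedged sketch: building a request against the Hyperspace Proxy's
# OpenAI-compatible endpoint with only the Python standard library.
# Assumes the proxy runs locally on its default port 6655; from inside a
# Docker container you would use host.docker.internal instead of localhost.
import json
import urllib.request

PROXY_BASE = "http://localhost:6655/openai/v1"  # default from the quick start
PROXY_API_KEY = "paste-your-key-here"           # placeholder, not a real key

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat completion request for the proxy."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{PROXY_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {PROXY_API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("gpt-5", "Say hello")
print(req.full_url)  # http://localhost:6655/openai/v1/chat/completions
# To actually send it (only works while the proxy is running):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```

The model ID <CODE>gpt-5</CODE> matches the OpenAI endpoint row in the model table later in this post; any other ID listed there should work the same way against its respective base URL.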
This guide walks you through the complete setup process, from configuring the Hyperspace Proxy to accessing various LLM models in your local Open WebUI instance.</P><H3 id="toc-hId-1722991950">Prerequisites</H3><P>Before starting, ensure you have:</P><UL><LI>Access to SAP's internal tools and GitHub Enterprise</LI><LI>Administrative privileges on your local machine</LI><LI>Basic familiarity with terminal/command line operations</LI><LI>Docker installed (or ability to install it)</LI></UL><H3 id="toc-hId-1526478445">Part 1: Setting up Hyperspace Proxy</H3><H4 id="toc-hId-1459047659">Step 1: Enroll in the Pilot Program</H4><P><STRIKE><STRONG>Important:</STRONG> Start this step at least one day before you need to use the proxy, as approval typically takes 24 hours.</STRIKE></P><OL><LI><STRIKE>Navigate to the Hyperspace Pilot Program enrollment page:</STRIKE><BR /><STRIKE><A href="https://pages.github.tools.sap/hAIperspace/hai-docs/llm-proxy/pilot-program/" target="_blank" rel="noopener nofollow noreferrer">https://pages.github.tools.sap/hAIperspace/hai-docs/llm-proxy/pilot-program/</A></STRIKE></LI><LI><STRIKE>Follow the registration instructions provided on the page</STRIKE></LI></OL><P>State of Feb. 19. 
2026: Hyperspace Proxy is available to all of SAP teams!</P><H4 id="toc-hId-1262534154">Step 2: Run the Hyperspace Proxy Locally</H4><P>Once approved, set up the proxy on your local machine:</P><OL><LI>Follow the quick start guide at:<BR /><A href="https://pages.github.tools.sap/hAIperspace/hai-docs/llm-proxy/quickstart/" target="_blank" rel="noopener nofollow noreferrer">https://pages.github.tools.sap/hAIperspace/hai-docs/llm-proxy/quickstart/</A></LI><LI>After successful setup, you should see the proxy running in your Terminal with important connection details including:<UL><LI>Local proxy URL (typically<SPAN> </SPAN><CODE><A href="http://localhost:6655" target="_blank" rel="noopener nofollow noreferrer">http://localhost:6655</A></CODE>)</LI><LI>Your API key</LI><LI>Available endpoints</LI></UL></LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Trrr_4-1771515612086.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/374347i568DD1F68ED94463/image-size/large?v=v2&px=999" role="button" title="Trrr_4-1771515612086.png" alt="Trrr_4-1771515612086.png" /></span></P><P><STRONG>Keep this terminal window open</STRONG><SPAN> </SPAN>- the proxy needs to run continuously for Open WebUI to access the models.</P><H3 id="toc-hId-936937930">Part 2: Installing Open WebUI with Docker</H3><H4 id="toc-hId-869507144">Step 1: Install Docker</H4><P>Docker will host the Open WebUI application locally.</P><P><STRONG>For macOS:</STRONG></P><OL><LI>Install Homebrew (if not already installed):<BR /><CODE>/bin/bash -c "$(curl -fsSL <A href="https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh" target="_blank" rel="noopener nofollow noreferrer">https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh</A>)"</CODE></LI><LI>Install Docker via Homebrew:<BR /><CODE>brew install --cask docker</CODE></LI><LI>Verify the installation:<BR /><CODE>docker info</CODE></LI></OL><P><STRONG>For 
Windows/Linux:</STRONG><BR />Follow the official Docker installation guide for your operating system at<SPAN> </SPAN><A href="https://docker.com" target="_blank" rel="noopener nofollow noreferrer">docker.com</A>.</P><H4 id="toc-hId-672993639">Step 2: Launch Open WebUI</H4><OL><LI>Follow the Open WebUI Quick Start Guide (refer to official Open WebUI documentation)</LI><LI>Start the Open WebUI service in Docker</LI><LI>Check the container status:<BR /><CODE>docker ps</CODE></LI><LI>Wait for the status to show "Up About..." (this typically takes a few minutes)</LI><LI>Once ready, access the web interface at:<BR /><A href="http://localhost:3000" target="_blank" rel="noopener nofollow noreferrer">http://localhost:3000</A></LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Trrr_5-1771515634972.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/374348iF0B393E67A0FC02F/image-size/large?v=v2&px=999" role="button" title="Trrr_5-1771515634972.png" alt="Trrr_5-1771515634972.png" /></span></P><H3 id="toc-hId-347397415">Part 3: Connecting Hyperspace Proxy to Open WebUI</H3><P>Now comes the crucial step - connecting your local Open WebUI instance to the Hyperspace Proxy to access LLM models.</P><H4 id="toc-hId-279966629">Basic Connection Setup</H4><OL><LI><STRONG>Access Admin Panel</STRONG><UL><LI>Open Open WebUI in your browser (<CODE><A href="http://localhost:3000" target="_blank" rel="noopener nofollow noreferrer">http://localhost:3000</A></CODE>)</LI><LI>Navigate to the Menu Bar and select "Admin Panel"<BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Trrr_0-1771515478353.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/374343i54D86DB2FBD6AA69/image-size/medium?v=v2&px=400" role="button" title="Trrr_0-1771515478353.png" alt="Trrr_0-1771515478353.png" /></span></LI></UL></LI><LI><STRONG>Add New 
Connection</STRONG><UL><LI>Go to "Connections" → "Settings"</LI><LI>Click the "+" button to add a new connection<BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Trrr_6-1771515670760.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/374349i47D59F90496F44A7/image-size/large?v=v2&px=999" role="button" title="Trrr_6-1771515670760.png" alt="Trrr_6-1771515670760.png" /></span><P> </P></LI></UL></LI><LI><STRONG>Configure OpenAI-Compatible Endpoint</STRONG><UL><LI><STRONG>URL:</STRONG><SPAN> </SPAN><CODE><A href="http://host.docker.internal:6655/openai/v1" target="_blank" rel="noopener nofollow noreferrer">http://host.docker.internal:6655/openai/v1</A></CODE></LI><LI><STRONG>API Key:</STRONG><SPAN> </SPAN>Enter the API key from your Hyperspace Proxy terminal output</LI><LI>Click "Save"<BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Trrr_2-1771515508566.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/374345i76E9972A5DFCC15C/image-size/medium?v=v2&px=400" role="button" title="Trrr_2-1771515508566.png" alt="Trrr_2-1771515508566.png" /></span><P> </P></LI></UL></LI></OL><P><EM><STRONG>Note:</STRONG><SPAN> </SPAN>We use<SPAN> </SPAN><CODE>host.docker.internal</CODE><SPAN> </SPAN>instead of<SPAN> </SPAN><CODE>localhost</CODE><SPAN> </SPAN>because, from inside the Open WebUI container, <CODE>localhost</CODE> refers to the container itself; <CODE>host.docker.internal</CODE> resolves to your host machine, where the proxy is running.</EM></P><OL start="4"><LI><STRONG>Verify Connection</STRONG><UL><LI>After saving, navigate to the Chat window</LI><LI>You should now see available models in the model selector dropdown</LI></UL></LI></OL><H4 id="toc-hId--414263971">Advanced Setup: Accessing More Models via LiteLLM</H4><P>To unlock access to a broader range of models, including Claude, GPT, and Gemini variants:</P><OL><LI><STRONG>Add LiteLLM Connection</STRONG><UL><LI>Return to "Admin Panel" → "Connections" → "Settings"</LI><LI>Click "+" to 
add another connection</LI><LI><STRONG>URL:</STRONG><SPAN> </SPAN><CODE><A href="http://host.docker.internal:6655/litellm/v1" target="_blank" rel="noopener nofollow noreferrer">http://host.docker.internal:6655/litellm/v1</A></CODE></LI><LI><STRONG>API Key:</STRONG><SPAN> </SPAN>Use the same API key from Hyperspace Proxy</LI><LI>Click "Save"<BR /><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Trrr_3-1771515536263.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/374346i74808DB6D079F90C/image-size/medium?v=v2&px=400" role="button" title="Trrr_3-1771515536263.png" alt="Trrr_3-1771515536263.png" /></span></LI></UL></LI><LI><STRONG>Specify Model IDs</STRONG><P>Unlike the basic OpenAI endpoint, LiteLLM requires explicit model ID specification. Here's the complete reference:</P><TABLE border="1" cellspacing="0" cellpadding="8"><THEAD><TR><TH>API Base URL</TH><TH>Available Model IDs</TH></TR></THEAD><TBODY><TR><TD><CODE><A href="http://localhost:6655/anthropic/v1" target="_blank" rel="noopener nofollow noreferrer">http://localhost:6655/anthropic/v1</A></CODE></TD><TD><UL><LI>anthropic--claude-4.5-opus</LI><LI>anthropic--claude-4.5-sonnet</LI><LI>anthropic--claude-4.5-haiku</LI></UL></TD></TR><TR><TD><CODE><A href="http://localhost:6655/openai/v1" target="_blank" rel="noopener nofollow noreferrer">http://localhost:6655/openai/v1</A></CODE></TD><TD><UL><LI>gpt-5</LI></UL></TD></TR><TR><TD><CODE><A href="http://localhost:6655/litellm/v1" target="_blank" rel="noopener nofollow noreferrer">http://localhost:6655/litellm/v1</A></CODE></TD><TD><UL><LI>gemini-2.5-flash</LI><LI>gemini-2.5-pro</LI><LI>gpt-5</LI><LI>gpt-5-mini</LI><LI>anthropic--claude-4.5-sonnet</LI><LI>anthropic--claude-4.5-opus</LI><LI>anthropic--claude-4.5-haiku</LI></UL></TD></TR></TBODY></TABLE></LI><LI><STRONG>Configure Model Access</STRONG><UL><LI>In the Admin Panel, you may need to enable specific models</LI><LI>Enter the model IDs from the table above as 
needed</LI><LI>Save your configuration</LI></UL></LI></OL><H3 id="toc-hId--317374469">Troubleshooting Tips</H3><H4 id="toc-hId--807290981">Proxy Connection Issues</H4><UL><LI>Ensure the Hyperspace Proxy terminal window is still running</LI><LI>Verify your API key is copied correctly without extra spaces</LI><LI>Check that you're using<SPAN> </SPAN><CODE>host.docker.internal</CODE>, not<SPAN> </SPAN><CODE>localhost</CODE>, in the Docker connection URLs</LI></UL><H4 id="toc-hId--1003804486">Docker Container Not Starting</H4><UL><LI>Run<SPAN> </SPAN><CODE>docker logs &lt;container-name&gt;</CODE><SPAN> </SPAN>to check error messages</LI><LI>Ensure port 3000 isn't already in use by another application</LI><LI>Verify Docker has sufficient resources allocated</LI></UL><H3 id="toc-hId--906914984">Conclusion</H3><P>You now have a fully functional local Open WebUI instance connected to enterprise LLM models via Hyperspace Proxy! This setup provides:</P><UL><LI><STRONG>Privacy:</STRONG><SPAN> </SPAN>The interface and your chat history stay on your machine</LI><LI><STRONG>Flexibility:</STRONG><SPAN> </SPAN>Access to multiple cutting-edge LLM models</LI><LI><STRONG>Control:</STRONG><SPAN> </SPAN>Full customization of your AI interface</LI><LI><STRONG>Security:</STRONG><SPAN> </SPAN>Enterprise-grade access through SAP's Hyperspace gateway</LI></UL><P>Enjoy exploring the capabilities of various LLMs through your personalized Open WebUI interface!</P><H3 id="toc-hId--1103428489">Additional Resources</H3><UL><LI><A href="https://pages.github.tools.sap/hAIperspace/hai-docs/" target="_blank" rel="noopener nofollow noreferrer">Hyperspace Proxy Documentation</A></LI><LI><A href="https://docs.openwebui.com/" target="_blank" rel="noopener nofollow noreferrer">Open WebUI Official Documentation</A></LI><LI><A href="https://docs.docker.com/desktop/" target="_blank" rel="noopener nofollow noreferrer">Docker Desktop Documentation</A></LI></UL>2026-02-19T16:41:40.753000+01:00
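As a footnote to the connection setup above: the rule for choosing between host.docker.internal and localhost can be captured in a small sketch. This is illustrative only, not official tooling; the helper name is hypothetical, and port 6655 is simply the default from the quick start above.

```python
# Illustrative sketch: derive the base URL Open WebUI should use for a
# given Hyperspace Proxy endpoint. Helper name is hypothetical; port 6655
# is the proxy's default from the quick start above.

def proxy_base_url(endpoint: str, from_docker: bool = True, port: int = 6655) -> str:
    """Build the base URL for a proxy endpoint such as 'openai' or 'litellm'.

    From inside the Open WebUI container, 'localhost' resolves to the
    container itself, so Docker's special hostname 'host.docker.internal'
    is needed to reach the proxy running on the host machine.
    """
    host = "host.docker.internal" if from_docker else "localhost"
    return f"http://{host}:{port}/{endpoint}/v1"

# The two connection URLs entered in the Admin Panel above:
print(proxy_base_url("openai"))   # http://host.docker.internal:6655/openai/v1
print(proxy_base_url("litellm"))  # http://host.docker.internal:6655/litellm/v1

# Calling the proxy directly from the host (e.g. with curl) uses localhost:
print(proxy_base_url("anthropic", from_docker=False))  # http://localhost:6655/anthropic/v1
```

The same logic explains the troubleshooting tip above: if models fail to load in Open WebUI, the most common cause is a connection URL built with localhost instead of host.docker.internal.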