https://raw.githubusercontent.com/ajmaradiaga/feeds/main/scmt/topics/SAP-HANA-Cloud-blog-posts.xml SAP Community - SAP HANA Cloud 2026-03-01T21:01:11.009740+00:00 python-feedgen SAP HANA Cloud blog posts in SAP Community https://community.sap.com/t5/artificial-intelligence-blogs-posts/beyond-vectors-the-next-evolution-of-rag-on-sap-btp-using-sap-hana-cloud/ba-p/14301103 Beyond Vectors: The Next Evolution of RAG on SAP BTP using SAP HANA Cloud 2026-01-05T16:24:19.911000+01:00 Gunter https://community.sap.com/t5/user/viewprofilepage/user-id/727 <H2 id="toc-hId-1787644355"><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Gemini_Generated_Image_y4jpb8y4jpb8y4jp.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/358043i322E944DE1F47984/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Gemini_Generated_Image_y4jpb8y4jpb8y4jp.png" alt="Gemini_Generated_Image_y4jpb8y4jpb8y4jp.png" /></span></STRONG></H2><H2 id="toc-hId-1591130850"><STRONG>Introduction</STRONG></H2><P>We are all familiar with Retrieval-Augmented Generation (RAG) by now. It has become the standard architecture for bringing enterprise data to Large Language Models (LLMs). The pattern is simple: chunk your documents, create vector embeddings, and perform a similarity search.</P><P>But as we move from "Proof of Concept" to production, we are hitting a ceiling.</P><P>Standard Vector RAG is excellent at finding things that "sound like" your query (semantic similarity), but it struggles with <STRONG>structure</STRONG> and <STRONG>reasoning</STRONG>. It can tell you that "Alice works at SAP," but it often fails at multi-hop questions like "Who works in the same department as the person who wrote the Q3 report?"</P><P>This is where <STRONG>GraphRAG</STRONG> enters the picture. And the best part? 
If you are on SAP BTP, you already have one of the most powerful engines for this architecture: <STRONG>SAP HANA Cloud</STRONG>.</P><H2 id="toc-hId-1394617345"><STRONG>The Limitations of "Flat" Data</STRONG></H2><P>In a standard Vector Store, your data is essentially a pile of isolated chunks. When you search, you grab the top 5 chunks from the pile. You lose the context of how those chunks relate to one another.</P><P>This becomes even more critical when dealing with <STRONG>Multimodal Data</STRONG>. Real-world enterprise documents aren't just text; they are PDFs containing diagrams, flowcharts, and architecture schematics. In a standard vector process, these images are often ignored or separated from the text that explains them.</P><H2 id="toc-hId-1198103840"><STRONG>The Solution: Hybrid GraphRAG</STRONG></H2><P>The next evolution of AI on SAP BTP isn't about choosing between Vector Search or Knowledge Graphs. It is about using <STRONG>both</STRONG>. I have been exploring a "Hybrid" approach where we use:</P><OL><LI><P><STRONG>The Vector Engine</STRONG> (built into SAP HANA Cloud) for fuzzy, semantic search.</P></LI><LI><P><STRONG>The Graph Engine</STRONG> (RDF/SPARQL in SAP HANA Cloud) for precise, structured relationships.</P></LI></OL><P>Imagine SAP HANA Cloud as a "Super Librarian." It has a messy pile of fuzzy ideas (Vectors) and a neat corkboard of connected facts (The Knowledge Graph). 
To answer a complex business question, the system checks the fuzzy pile to understand the <I>intent</I>, but validates the answer against the corkboard to ensure <I>factual accuracy</I>.</P><H2 id="toc-hId-1001590335"><STRONG>Multimodal RAG: Keeping Context Intact</STRONG></H2><P>One of the most exciting capabilities of this hybrid approach is handling mixed media.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Gemini_Generated_Image_a8g428a8g428a8g4.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/358044i5A20DB269351B91F/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Gemini_Generated_Image_a8g428a8g428a8g4.png" alt="Gemini_Generated_Image_a8g428a8g428a8g4.png" /></span></P><P>When processing complex documentation (like technical manuals or financial reports), we can use the graph to create <STRONG>structural links</STRONG>. We can index a diagram or an image not just by what is inside it, but by <I>where it sits in the document</I>.</P><P>By modeling the document structure in the Knowledge Graph (e.g., <CODE>TextChunk A</CODE> -&gt; <CODE>ADJACENT_TO</CODE> -&gt; <CODE>Image B</CODE>), we ensure that when an LLM retrieves the text explanation, it automatically pulls the relevant diagram along with it. This creates a true Multimodal RAG experience that "sees" the document the way a human does.</P><H2 id="toc-hId-805076830"><STRONG>Why SAP HANA Cloud?</STRONG></H2><P>Usually, building this architecture requires a complex stack: a vector database (like Pinecone), a graph database (like Neo4j), and a relational database to glue them together.</P><P>SAP HANA Cloud is unique because it is truly <STRONG>multi-model </STRONG>(yes it's also multimodal with this library). You can store your vectors, your RDF triples, and your relational metadata all in the same persistence layer. 
This simplifies the architecture immensely and reduces data movement, which is critical for enterprise security and latency.</P><H2 id="toc-hId-608563325"><STRONG>A New Framework for SAP Developers</STRONG></H2><P>To prove this concept, I have developed a TypeScript framework designed specifically for Node.js environments on SAP BTP. It acts as a bridge, orchestrating the interaction between the LLM (via LiteLLM) and SAP HANA Cloud’s dual engines.</P><P>The framework handles:</P><UL><LI><P><STRONG>Schema-Guided Extraction:</STRONG> Using LLMs to turn unstructured text into clean Knowledge Graph entities (Nodes and Edges).</P></LI><LI><P><STRONG>Hybrid Retrieval:</STRONG> Performing a vector search and then traversing the graph to find related context (including images).</P></LI><LI><P><STRONG>Unified Storage:</STRONG> Managing both the <CODE>REAL_VECTOR</CODE> data and RDF Triples in a single connection.</P></LI></UL><H2 id="toc-hId-412049820"><STRONG>Learn More</STRONG></H2><P>If you are interested in building the next generation of context-aware AI agents on SAP BTP, I invite you to read the full technical breakdown here (collaboration on the repo welcome!):</P><P><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> <STRONG><A class="" href="https://medium.com/@techandfun/beyond-vectors-building-powerful-graph-rag-on-sap-btp-with-hana-cloud-50da841bb31f" target="_blank" rel="noopener nofollow noreferrer">Read the full article on Medium: Beyond Vectors - Building Powerful Graph RAG on SAP BTP</A></STRONG></P><P>Let’s move beyond simple similarity search and start building AI that truly understands the structure of our enterprise data.</P> 2026-01-05T16:24:19.911000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/simplifying-time-series-analytics-with-unified-time-series-interface/ba-p/14292218 Simplifying Time Series Analytics with Unified Time Series Interface 2026-01-08T23:36:27.942000+01:00 zhengwang 
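The hybrid "retrieve, then traverse" pattern described in the GraphRAG post above can be sketched in a few lines of plain Python. This is a conceptual illustration only, not the framework's API: the chunk names, embedding vectors, and the ADJACENT_TO edge are invented for the example, and a hand-rolled cosine similarity stands in for SAP HANA Cloud's vector search.

```python
import math

# Toy corpus: text chunks with hypothetical embedding vectors, plus graph
# edges modelling document structure, as in the post's
# "TextChunk A -> ADJACENT_TO -> Image B" example.
chunks = {
    "TextChunk A": [0.9, 0.1, 0.0],
    "TextChunk B": [0.1, 0.9, 0.0],
}
edges = [("TextChunk A", "ADJACENT_TO", "Image B")]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def hybrid_retrieve(query_vec, top_k=1):
    # Step 1: fuzzy semantic search over the vector store.
    ranked = sorted(chunks, key=lambda c: cosine(chunks[c], query_vec), reverse=True)
    hits = ranked[:top_k]
    # Step 2: traverse the knowledge graph to pull structurally linked
    # context, e.g. the diagram sitting next to the matched text.
    expanded = set(hits)
    for subj, pred, obj in edges:
        if subj in hits and pred == "ADJACENT_TO":
            expanded.add(obj)
    return expanded

print(hybrid_retrieve([1.0, 0.0, 0.0]))  # TextChunk A plus its adjacent image
```

In the real architecture both steps run against the same SAP HANA Cloud instance: step 1 against the vector engine, step 2 as a SPARQL traversal over the RDF triples.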
https://community.sap.com/t5/user/viewprofilepage/user-id/893377 <P>Time series analysis is fundamental in industries ranging from retail to finance, helping businesses forecast trends, predict anomalies, and optimize operations. Traditional approaches, however, often require complex preprocessing, data conversion, and algorithm selection, posing challenges for less technical users.</P><P>To address these issues, <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-predictive-analysis-library/sap-hana-cloud-sap-hana-database-predictive-analysis-library-pal" target="_self" rel="noopener noreferrer">SAP HANA Predictive Analysis Library (PAL)</A> has introduced a unified interface for time series algorithms. Following the successful implementation of its unified classification and regression interfaces, this update aims to make time series analysis more efficient and user-friendly.</P><P>In this blog post, we explore the latest features of this unified interface and showcase an example to illustrate its usage.</P><H1 id="toc-hId-1638274962">Key Highlights</H1><P>Let’s dive into the new interface's key features in detail:</P><H3 id="toc-hId-1699926895">Unified Workflow</H3><P>The unified interface streamlines the management of PAL algorithms by providing a standardized structure for invoking them. This simplifies parameter handling and data preparation for individual algorithms, enhancing efficiency and ease of use. Supported algorithms include Additive Model Time Series Analysis (AMTSA), Auto Regressive Integrated Moving Average (ARIMA), Bayesian Structural Time Series (BSTS), and Exponential Smoothing (SMOOTH).</P><H3 id="toc-hId-1503413390">Automatic Timestamp Conversion</H3><P>The datasets of different time series analysis tasks can have diverse time formats; therefore, automatic timestamp conversion is introduced in the new unified interface. 
This feature automatically detects and converts between integer timepoints and timestamp types.&nbsp;To convert timepoints to timestamps, users must define START_POINT and INTERVAL. INTERVAL represents the spacing between timestamps, measured in the smallest unit of the target type (TARGET_TYPE). For instance, if the target type is DAYDATE and a weekly interval is desired, the INTERVAL value would be set to 7. Conversely, converting timestamps to timepoints is automated, with the system generating consecutive integers based on input timestamps. However, the input timestamps should be evenly spaced for this conversion to function effectively.</P><H3 id="toc-hId-1306899885">Pivoted Input Data Format Support</H3><P>Traditionally, additional steps are required to transform the pivoted data into a usable format. To simplify this data preparation process, the new unified interface directly supports pivoted input data formats. This feature is particularly beneficial for complex, multidimensional time series data. The&nbsp;structure of the input data is&nbsp;<SPAN>defined</SPAN> in the metadata table, as&nbsp;<SPAN>illustrated&nbsp;</SPAN>below.</P><pre class="lia-code-sample language-sql"><code>CREATE COLUMN TABLE PAL_META_DATA_TBL ( "VARIABLE_NAME" NVARCHAR (50), "VARIABLE_TYPE" NVARCHAR (50) ); INSERT INTO PAL_META_DATA_TBL VALUES ('TIMESTAMP', 'CONTINUOUS'); INSERT INTO PAL_META_DATA_TBL VALUES ('Y', 'TARGET');</code></pre><H3 id="toc-hId-1110386380">Massive Mode Capability</H3><P>When dealing with vast datasets, users can leverage "massive mode" in the unified interface. This mode enables algorithms to process multiple datasets simultaneously, with each dataset being executed independently and in parallel. 
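The massive-mode idea, many independent datasets each processed in its own parallel execution, can be illustrated with a small Python sketch. This is a conceptual illustration only, not the PAL implementation; the group keys and the naive mean forecaster are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-group time series, e.g. one series per group key
# in a massive-mode call.
groups = {
    "store_1": [10.0, 12.0, 11.0],
    "store_2": [5.0, 5.0, 8.0],
}

def forecast(series):
    # Stand-in model: forecast the next point as the series mean.
    return sum(series) / len(series)

# Each group is processed independently and in parallel, mirroring how
# massive mode runs one self-contained execution per dataset.
with ThreadPoolExecutor() as pool:
    results = dict(zip(groups, pool.map(forecast, groups.values())))

print(results)  # {'store_1': 11.0, 'store_2': 6.0}
```

Because the per-group executions share no state, a failure or slow model fit in one group does not affect the others, which is the property massive mode exploits at database scale.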
To learn more about massive mode, visit the page on <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-predictive-analysis-library/massive-execution-of-pal-functions" target="_self" rel="noopener noreferrer">Massive Execution of PAL Functions</A>.</P><H1 id="toc-hId-655707437">Example</H1><P>Let’s demonstrate the new interface with an example. Note that the code provided is purely for illustrative purposes and is not intended for production use.</P><P>The dataset is the <A href="https://archive.ics.uci.edu/dataset/381/beijing+pm2+5+data" target="_self" rel="nofollow noopener noreferrer">Beijing PM2.5</A> data from the UCI Machine Learning Repository. It comprises hourly recordings of PM2.5 levels (airborne particles with aerodynamic diameters less than 2.5 μm) collected by the US Embassy in Beijing between January 1, 2010, and December 31, 2014. Additionally, meteorological data from Beijing Capital International Airport is included. The objective is to predict PM2.5 concentrations using various input features.</P><P>This dataset contains 43,824 rows and 11 columns. During preprocessing, the year, month, day, and hour columns were merged into a single 'date' column, and rows with missing values were addressed. 
The restructured dataset included the following 9 columns.</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;">date: Timestamp of the record</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;">pollution: PM2.5 concentration (ug/m^3)</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;">dew: Dew Point</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;">temp: Temperature</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;">press: Pressure (hPa)</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;">wnd_dir: Combined wind direction</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;">wnd_spd: Cumulated wind speed (m/s)</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;">snow: Cumulated hours of snow</P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;">rain: Cumulated hours of rain</P><P>To make it more manageable for demonstration purposes, we selected the first 1,000 instances. From this selection, we allocated 990 instances to the training set and reserved the final 10 for the testing set. 
Here's a glimpse at the first five rows of the training set.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="UnifiedTimeSeries_1_TrainingData.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352976iC62DF756524E4971/image-size/large?v=v2&amp;px=999" role="button" title="UnifiedTimeSeries_1_TrainingData.png" alt="UnifiedTimeSeries_1_TrainingData.png" /></span></P><P>Once the data is loaded, the model can be trained, and results can be obtained using the following annotated SQL script.</P><pre class="lia-code-sample language-sql"><code>--########## COLUMN TABLE CREATION ########## CREATE COLUMN TABLE PAL_PARAMETER_TBL__0 ("PARAM_NAME" NVARCHAR(256), "INT_VALUE" INTEGER, "DOUBLE_VALUE" DOUBLE, "STRING_VALUE" NVARCHAR(1000)); CREATE COLUMN TABLE PAL_MODEL_TBL__0 ("INDEX" NVARCHAR (50), "CONTENT" NCLOB); CREATE COLUMN TABLE PAL_STATISTICS_TBL__0 ("NAME" NVARCHAR (50), "VALUE_1" DOUBLE, "VALUE_2" DOUBLE, "VALUE_3" DOUBLE, "VALUE_4" DOUBLE, "VALUE_5" DOUBLE, "REASON" NVARCHAR (50)); CREATE COLUMN TABLE PAL_DECOMPOSE_TBL__0 ("TIME_STAMP" NVARCHAR (50), "TREND" DOUBLE, "SEASONAL" DOUBLE, "REGRESSION" DOUBLE, "RANDOM" DOUBLE); CREATE COLUMN TABLE PAL_PLACE_HOLDER_TBL__0 ("OBJECT" NVARCHAR (10), "KEY" NVARCHAR (10), "VALUE" NVARCHAR (10)); CREATE COLUMN TABLE PAL_PREDICT_PARAMETER_TBL__0 ("PARAM_NAME" NVARCHAR(256), "INT_VALUE" INTEGER, "DOUBLE_VALUE" DOUBLE, "STRING_VALUE" NVARCHAR(1000)); CREATE COLUMN TABLE PAL_PREDICT_RESULT_TBL__0 ("TIME_STAMP" NVARCHAR (50), "FORECAST" DOUBLE, "VALUE_1" DOUBLE, "VALUE_2" DOUBLE, "VALUE_3" DOUBLE, "VALUE_4" DOUBLE, "VALUE_5" DOUBLE); CREATE COLUMN TABLE PAL_PREDICT_DECOMPOSITION_TBL__0 ("TIME_STAMP" NVARCHAR (50), "VALUE_1" DOUBLE, "VALUE_2" NCLOB, "VALUE_3" NCLOB, "VALUE_4" NCLOB, "VALUE_5" NCLOB); CREATE COLUMN TABLE PAL_PREDICT_PLACE_HOLDER_TBL__0 ("OBJECT" NVARCHAR (50), "KEY" NVARCHAR (50), "VALUE" NVARCHAR (50)); --########## TABLE INSERTS ########## 
-- The training data is stored in PAL_DATA_TBL__0, and the prediction data in PAL_PREDICT_DATA_TBL__0. --########## PAL_PARAMETER_TBL__0 DATA INSERTION ########## -- Specify algorithm type, 0: AMTSA, 1: ARIMA, 2: BSTS, 3: SMOOTH INSERT INTO PAL_PARAMETER_TBL__0 VALUES ('FUNCTION', 0, NULL, NULL); --########## UNIFIED INTERFACE FOR TIME SERIES CALL ########## DO BEGIN lt_data = SELECT * FROM PAL_DATA_TBL__0; lt_param = SELECT * FROM PAL_PARAMETER_TBL__0; CALL _SYS_AFL.PAL_UNIFIED_TIMESERIES (:lt_data, :lt_param, lt_model, lt_stat, lt_decom, lt_ph); lt_pdata = SELECT * FROM PAL_PREDICT_DATA_TBL__0; lt_pparam = SELECT * FROM PAL_PREDICT_PARAMETER_TBL__0; CALL _SYS_AFL.PAL_UNIFIED_TIMESERIES_PREDICT (:lt_pdata, :lt_model, :lt_pparam, lt_result, lt_decomp, lt_pph); INSERT INTO PAL_PREDICT_RESULT_TBL__0 SELECT * FROM :lt_result; INSERT INTO PAL_PREDICT_DECOMPOSITION_TBL__0 SELECT * FROM :lt_decomp; END; --########## SELECT * TABLES ########## SELECT * FROM PAL_PREDICT_RESULT_TBL__0; SELECT * FROM PAL_PREDICT_DECOMPOSITION_TBL__0; --########## TABLES CLEANUP ########## DROP TABLE PAL_PARAMETER_TBL__0; DROP TABLE PAL_MODEL_TBL__0; DROP TABLE PAL_STATISTICS_TBL__0; DROP TABLE PAL_DECOMPOSE_TBL__0; DROP TABLE PAL_PLACE_HOLDER_TBL__0; DROP TABLE PAL_PREDICT_PARAMETER_TBL__0; DROP TABLE PAL_PREDICT_RESULT_TBL__0; DROP TABLE PAL_PREDICT_DECOMPOSITION_TBL__0; DROP TABLE PAL_PREDICT_PLACE_HOLDER_TBL__0;</code></pre><P>You can view the model, prediction results, and decomposition in the output tables. 
Below are illustrative snapshots of the output tables.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="UnifiedTimeSeries_2_Result.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352977iF0ECDD0DEB22037E/image-size/large?v=v2&amp;px=999" role="button" title="UnifiedTimeSeries_2_Result.png" alt="UnifiedTimeSeries_2_Result.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="UnifiedTimeSeries_3_Decomposition.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/352978i0276F55EC34A8260/image-size/large?v=v2&amp;px=999" role="button" title="UnifiedTimeSeries_3_Decomposition.png" alt="UnifiedTimeSeries_3_Decomposition.png" /></span></P><P>The composition of the resulting tables depends on the selected algorithm. For AMTSA, the result table includes the predicted values along with the lower and upper bounds of the uncertainty intervals. Additionally, the decomposition table provides various components, such as trend, seasonality, and others.</P><H1 id="toc-hId-459193932">Summary</H1><P>The unified interface is introduced to simplify the usage of PAL algorithms. This blog post highlights the key features addressing challenges in time series analysis, such as varied time formats, pivoted data structures, and handling large data volumes. 
This new interface makes it easier for users to unlock the potential of their temporal data.</P><P>&nbsp;</P><P>Recent topics on HANA machine learning:</P><P><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/comprehensive-guide-to-mltrack-in-sap-hana-cloud-end-to-end-machine/ba-p/14134217" target="_self">Comprehensive Guide to MLTrack in SAP HANA Cloud: End-to-End Machine Learning Experiment Tracking</A></P><P><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/new-machine-learning-and-ai-features-in-sap-hana-cloud-2025-q2/ba-p/14136079" target="_self">New Machine Learning and AI features in SAP HANA Cloud 2025 Q2</A></P><P><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/new-machine-learning-and-ai-features-in-sap-hana-cloud-2025-q1/ba-p/14078615" target="_self">New Machine Learning and AI features in SAP HANA Cloud 2025 Q1</A></P> 2026-01-08T23:36:27.942000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/new-machine-learning-nlp-and-ai-features-in-sap-hana-cloud-2025-q3/ba-p/14304443 New Machine Learning, NLP and AI features in SAP HANA Cloud 2025 Q3 2026-01-09T12:54:46.437000+01:00 ChristophMorgen https://community.sap.com/t5/user/viewprofilepage/user-id/14106 <P><SPAN>With the SAP HANA Cloud 2025 Q3 release, several new embedded Machine Learning / AI functions&nbsp;have been released with the SAP HANA Cloud Predictive Analysis Library (PAL) and the Automated Predictive Library (APL). 
</SPAN></P><UL><LI><SPAN>An enhancement summary is available in the What’s new document for <A href="https://help.sap.com/whats-new/2495b34492334456a49084831c2bea4e?Category=Predictive+Analysis+Library&amp;Valid_as_Of=2025-09-01:2025-09-30&amp;locale=en-US" target="_self" rel="noopener noreferrer">SAP HANA Cloud database 2025.28 (QRC 3/2025)</A>.</SPAN></LI></UL><H2 id="toc-hId-1787736735">&nbsp;</H2><H2 id="toc-hId-1591223230"><SPAN>Time series analysis and forecasting function enhancements</SPAN></H2><P><STRONG><SPAN>Threshold support in time series outlier detection</SPAN></STRONG></P><P><SPAN>In time series, an outlier is a data point that differs from the general behavior of the remaining data points.&nbsp;In the PAL <STRONG><EM>time series outlier detection</EM></STRONG> function, the outlier detection task is divided into two steps:</SPAN></P><UL><LI><SPAN>In step 1, the residual values are derived from the original series,</SPAN></LI><LI><SPAN>In step 2, the outliers are detected from the residual values.</SPAN></LI></UL><P><SPAN>Multiple methods are available to evaluate whether a data point is an outlier or not. 
</SPAN></P><UL><LI><SPAN>These include the Z1 score, Z2 score, IQR score, MAD score, IsolationForest, and DBSCAN.</SPAN></LI><LI><SPAN>If used in combination, outlier voting can be applied for a combined evaluation.&nbsp;</SPAN></LI></UL><P><SPAN>Now, as a <STRONG>new</STRONG> addition, <STRONG><EM>threshold values for outlier scores</EM></STRONG> are supported:</SPAN></P><UL><LI><SPAN>New parameter OUTPUT_OUTLIER_THRESHOLD</SPAN></LI><LI><SPAN>Based on the given threshold value, if the time series value is beyond the (upper or lower) outlier threshold for the time series, the corresponding data point is flagged as an outlier.</SPAN></LI><LI><SPAN>Only valid when outlier_method = 'iqr', 'isolationforest', 'mad', 'z1', 'z2'.</SPAN></LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ChristophMorgen_0-1767958753257.jpeg" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359750iE20F7716FF87FA07/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="ChristophMorgen_0-1767958753257.jpeg" alt="ChristophMorgen_0-1767958753257.jpeg" /></span></P><P>&nbsp;</P><P><SPAN>&nbsp;</SPAN></P><H2 id="toc-hId-1394709725"><SPAN>Classification and regression function enhancements</SPAN></H2><P><STRONG><SPAN>Coreset sampling support with SVM models</SPAN></STRONG></P><P><STRONG>Coreset sampling</STRONG>&nbsp;is a machine learning technique to</P><UL><LI>select a small, representative subset (the "coreset") from larger datasets,</LI><LI>enabling faster, more efficient training and processing while maintaining model accuracy similar to using the full data.</LI><LI>It works by identifying the most "informative" samples, filtering out redundant or noisy data, and allowing complex algorithms to run on manageable dataset sizes.</LI></UL><P><STRONG>Support Vector Machine (SVM)</STRONG>&nbsp;model training is computationally expensive, and computational costs are specifically sensitive to the number of training 
points, which makes SVM models often impractical for large datasets.&nbsp;</P><P><SPAN>Therefore, SVM in the Predictive Analysis Library has been enhanced and now</SPAN></P><UL><LI>offers&nbsp;<STRONG>embedded coreset sampling</STRONG>&nbsp;capabilities</LI><LI>enabled with the new parameters USE_CORESET and CORESET_SCALE, the <SPAN>sampling ratio used when constructing the coreset</SPAN>.</LI></UL><P>This enhancement significantly reduces SVM training time with minimal impact on accuracy.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="ChristophMorgen_1-1767958753264.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359751iDA955B4D29D2C3A9/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="ChristophMorgen_1-1767958753264.png" alt="ChristophMorgen_1-1767958753264.png" /></span></P><P>&nbsp;</P><P>&nbsp;</P><H2 id="toc-hId-1198196220"><SPAN>AutoML and pipeline function enhancements</SPAN></H2><P><STRONG><SPAN>Target encoding support in&nbsp;AutoML&nbsp;</SPAN></STRONG></P><P>The PAL AutoML framework introduces a new pipeline operator for target encoding of categorical features</P><UL><LI><SPAN>Categorical data often needs to be preprocessed and converted from non-numerical features into formats suitable for the respective machine learning algorithm, i.e., numeric values</SPAN><UL><LI><SPAN>Example features: text labels (e.g., “red,” “blue”) or discrete categories (e.g., “high,” “medium,” “low”)</SPAN></LI></UL></LI><LI><SPAN>One-hot encoding converts each categorical feature value&nbsp;into a binary column (0 or 1), which works well for features with a limited number of unique values. 
PAL already applies an optimized one-hot encoding method aggregating very low-frequency values.</SPAN></LI><LI><SPAN>Target encoding replaces the categorical values with the mean of the target / label column for high-cardinality features, which avoids creating large and sparse one-hot encoded feature matrices</SPAN><UL><LI><SPAN>Examples of high-cardinality features: a “city” column with hundreds to thousands of unique values, postal codes, product IDs, etc.</SPAN></LI></UL></LI></UL><P>The PAL AutoML engine will analyze the input feature cardinality and then automatically decide whether to apply target encoding or another encoding method. For medium- to high-cardinality categorical features, target encoding may improve the performance significantly.</P><P><SPAN>By automating target encoding, the PAL AutoML engine aims to improve model performance and generalization, especially when dealing with complex, high-cardinality categorical features, without requiring manual intervention.</SPAN></P><P>In addition, the AutoML and pipeline functions now also support columns of type half precision vector.</P><H2 id="toc-hId-1001682715">&nbsp;</H2><H2 id="toc-hId-805169210"><SPAN>Misc. Machine Learning and statistics function enhancements</SPAN></H2><P><STRONG><SPAN>High-dimensional feature data reduction using UMAP</SPAN></STRONG></P><P>UMAP (Uniform Manifold Approximation and Projection) is a non-linear dimensionality reduction algorithm used to simplify complex, high-dimensional feature spaces while preserving their essential structure. 
It is widely considered a modern standard for visualization and targeted dimension reduction of large-scale datasets, because it balances computational speed with the ability to maintain both local and global relationships.</P><UL><LI><SPAN>It reduces thousands of variables (dimensions) into 2D or 3D scatter plots that humans can easily interpret.</SPAN></LI><LI><SPAN>Unlike comparable methods like t-SNE, UMAP is better at preserving global structure, meaning the relative positions between different clusters remain more meaningful.</SPAN></LI><LI><SPAN>It is significantly faster and more memory-efficient than t-SNE, capable of processing datasets with millions of points in a reasonable timeframe.</SPAN></LI><LI><SPAN>It can be used as a "transformer" preprocessing step in Machine Learning scenarios to reduce large feature spaces before applying clustering (e.g., k-means, HDBSCAN) or classification models, often improving their performance.</SPAN></LI></UL><P><SPAN>The following new functions are introduced:</SPAN></P><UL><LI><SPAN>_SYS_AFL.PAL_UMAP</SPAN> with the most important <SPAN>parameters N_NEIGHBORS, MIN_DIST, N_COMPONENTS, DISTANCE_LEVEL</SPAN></LI><LI><SPAN>_SYS_AFL.PAL_TRUSTWORTHINESS</SPAN>, <SPAN>used to measure the structural similarity between the original high-dimensional space and the embedded low-dimensional space based on K nearest neighbors.</SPAN></LI></UL><P><STRONG><SPAN>&nbsp;</SPAN></STRONG></P><P><STRONG><SPAN>Calculating pairwise distances</SPAN></STRONG></P><P><SPAN>Many algorithms, for example clustering algorithms, utilize distance matrices as a preprocessing step, often built into the functions themselves. Often, however, it is desirable to decouple the distance matrix calculation from the follow-up task, such as the actual clustering. 
Moreover, once decoupled, custom calculated matrices can be fed into algorithms as input.</SPAN></P><UL><LI><SPAN>Most PAL clustering functions support feeding in a pre-calculated similarity matrix</SPAN></LI></UL><P><SPAN>Now, a dedicated <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-predictive-analysis-library/distance-md?version=LATEST&amp;q=distance&amp;locale=en-US" target="_blank" rel="noopener noreferrer">pairwise distance calculation</A> function is provided:</SPAN></P><UL><LI><SPAN>It supports distance metrics like <EM>Manhattan, Euclidean, Minkowski, Chebyshev</EM> as well as <STRONG>Levenshtein</STRONG></SPAN></LI><LI><SPAN>The <STRONG><EM>Levenshtein distance</EM></STRONG> (or “edit distance”) is a distance metric specifically targeting distances between text columns. </SPAN><UL><LI><SPAN>It calculates the minimum number of single-character edits (insertions, deletions, or substitutions) needed to transform one word into another, acting as a measure of their similarity. 
A lower distance indicates a higher similarity.</SPAN></LI></UL></LI></UL><P><SPAN>Applicable use cases:</SPAN></P><UL><LI><SPAN>It is useful in data cleaning and table column similarity analysis between columns of the same data type.</SPAN></LI><LI><SPAN>After calculating the column similarity across all data types, clustering like K-Means can be applied to group similar fields and propose mappings for fields within the same cluster</SPAN></LI></UL><P><SPAN>&nbsp;</SPAN></P><P><STRONG><SPAN>Real Vector data type support</SPAN></STRONG></P><P>The following PAL functions have been enhanced to support columns of type real vector:</P><UL><LI><SPAN>Spectral Clustering</SPAN></LI><LI><SPAN>Cluster Assignment</SPAN></LI><LI><SPAN>Decision tree</SPAN></LI><LI><SPAN>Sampling</SPAN></LI></UL><P>&nbsp;</P><H2 id="toc-hId-608655705"><SPAN>Creating Vector Embeddings enhancements</SPAN></H2><P><SPAN>The SAP HANA Database Vector Engine function VECTOR_EMBEDDING()&nbsp;</SPAN><SPAN>has added support for remote, SAP AI Core exposed embedding models. Detailed instructions are given in the documentation at&nbsp;</SPAN><SPAN><A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/creating-text-embeddings-with-sap-ai-core" target="_blank" rel="noopener noreferrer">Creating Text Embeddings with SAP AI Core | SAP Help Portal</A></SPAN></P><P>&nbsp;</P><H2 id="toc-hId-412142200"><SPAN>Python ML client (hana-ml) enhancements</SPAN></H2><P><EM>The full list of new methods and enhancements with hana_ml 2.26&nbsp;is summarized in the </EM><SPAN><A href="https://help.sap.com/doc/cd94b08fe2e041c2ba778374572ddba9/2025_3_QRC/en-US/change_log.html" target="_blank" rel="noopener noreferrer"><EM>changelog for hana-ml 2.26</EM></A> </SPAN><EM>as part of the documentation. 
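The Levenshtein ("edit distance") metric described in the pairwise-distance section above can be sketched in plain Python using the classic dynamic-programming recurrence. This illustrates the metric itself, not the PAL function:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    or substitutions needed to turn string a into string b."""
    # prev[j] holds the distance between the first i-1 chars of a
    # and the first j chars of b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3: a lower distance means higher similarity
```

Applied pairwise across the values of two text columns, such a distance yields the kind of similarity matrix that can then be fed into a clustering step, as outlined in the use cases above.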
The key enhancements in this release include:</EM></P><P><STRONG>New&nbsp;Functions</STRONG></P><UL><LI>Added text tokenization API.</LI><LI>Added explainability support with IsolationForest Outlier Detection.</LI><LI>Added constrained clustering API.</LI><LI>Added intermittent time series data test in time series report.</LI></UL><P><STRONG>Enhancements</STRONG></P><UL><LI>Support time series SHAP visualizations for AutoML Timeseries model explanations.</LI></UL><P>You can find an example notebook illustrating the highlighted feature enhancements <SPAN><A href="https://github.com/SAP-samples/hana-ml-samples/blob/main/Python-API/pal/notebooks/25QRC03_2.26.ipynb" target="_blank" rel="nofollow noopener noreferrer">here: 25QRC03_2.26.ipynb</A>.&nbsp;</SPAN></P> 2026-01-09T12:54:46.437000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/sap-hana-cloud-release-january-2026-round-up/ba-p/14307255 SAP HANA Cloud release January 2026 Round-up 2026-01-14T11:36:32.644000+01:00 andreamiranda https://community.sap.com/t5/user/viewprofilepage/user-id/135788 <P><SPAN>Dear SAP HANA Cloud Enthusiasts,</SPAN><BR /><BR />We are thrilled to share a comprehensive collection of the latest videos, blogposts, and resources from the SAP HANA Cloud Q4 2025 release.</P><P>&nbsp;</P><TABLE border="1" width="100%"><TBODY><TR><TD width="50.129533678756474%" height="222px"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="playTeaserqrc42025.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/361236i330DD3979DA0B86C/image-size/medium/is-moderation-mode/true?v=v2&amp;px=400" role="button" title="playTeaserqrc42025.png" alt="playTeaserqrc42025.png" /></span></TD><TD width="49.870466321243526%" height="222px"><H4 id="toc-hId-2045989657"><STRONG>What’s New Teaser</STRONG></H4>Discover the newest advancements in SAP HANA Cloud with Lead Product Manager Thomas Hammer, as he highlights his top features from the latest 
release in this engaging teaser.<BR /><BR /><A href="https://www.youtube.com/watch?v=XoDaaRlkP7A&amp;list=PL3ZRUb1AKkpTDZQgENtRcupp6vsNg8NHN&amp;index=1" target="_blank" rel="nofollow noopener noreferrer">Watch it now on YouTube.</A></TD></TR><TR><TD width="50.129533678756474%" height="250px"><H4 id="toc-hId-1849476152"><STRONG>What’s New blogpost</STRONG></H4>Intrigued by the teaser? Explore our "What’s New in SAP HANA Cloud in December 2025" blogpost for an in-depth look at the innovations and find valuable links to further demos and content.<BR /><BR /><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/what-s-new-in-sap-hana-cloud-december-2025/ba-p/14295366" target="_blank">Read it here!</A></TD><TD width="49.870466321243526%" height="250px"><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/what-s-new-in-sap-hana-cloud-december-2025/ba-p/14295366" target="_blank"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Screenshotblogpostqrc42025.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/361238i2AA7B69CCC93B911/image-size/medium/is-moderation-mode/true?v=v2&amp;px=400" role="button" title="Screenshotblogpostqrc42025.png" alt="Screenshotblogpostqrc42025.png" /></span></A></TD></TR><TR><TD width="50.129533678756474%" height="250px"><A href="https://youtu.be/W1DRNx1Ovgw?si=4Zp1jghksYI8Pikt" target="_blank" rel="noopener nofollow noreferrer"><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="playwebinarqrc42025.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/361244iF5928216F66D2941/image-size/medium/is-moderation-mode/true?v=v2&amp;px=400" role="button" title="playwebinarqrc42025.png" alt="playwebinarqrc42025.png" /></span></A></TD><TD width="49.870466321243526%" height="250px"><H4 id="toc-hId-1652962647">What's New Webinar</H4>Prefer watching over reading? 
Our 'What’s New' webinar is just for you! Join our product experts for an in-depth view of the latest features. Available to watch anytime&nbsp;<A href="https://youtu.be/W1DRNx1Ovgw?si=4Zp1jghksYI8Pikt" target="_blank" rel="noopener nofollow noreferrer">here</A>!</TD></TR></TBODY></TABLE><H3 id="toc-hId-1327366423">&nbsp;</H3><DIV class=""><HR /><SPAN>Don’t miss out on all the content and remember to&nbsp;</SPAN><A href="https://community.sap.com/topics/hana" target="_blank">follow us in the SAP HANA Community.</A></DIV><P>Remember to check our content following the #whatsnewinsaphanacloud tag:<SPAN>&nbsp;</SPAN><A href="https://community.sap.com/t5/tag/whatsnewinsaphanacloud/tg-p/board-id/technology-blog-sap" target="_blank">here</A><BR /><BR /><SPAN>Don’t forget to subscribe and follow SAP HANA Cloud on&nbsp;</SPAN><A href="https://www.youtube.com/playlist?list=PL3ZRUb1AKkpTDZQgENtRcupp6vsNg8NHN" target="_blank" rel="nofollow noopener noreferrer">YouTube</A><SPAN>&nbsp;to always stay up-to-date regarding the most recent innovations in SAP HANA Cloud.</SPAN><BR /><SPAN>&nbsp;</SPAN><BR /><SPAN>All the best,</SPAN><BR /><BR /><STRONG>Andrea on behalf of the SAP HANA Cloud team</STRONG></P><P><STRONG><A class="" href="https://community.sap.com/t5/c-khhcw49343/SAP+HANA+Cloud%25252C+SAP+HANA+database/pd-p/ada66f4e-5d7f-4e6d-a599-6b9a78023d84" target="_blank">#SAP HANA Cloud, SAP HANA database</A><SPAN>&nbsp;</SPAN>&nbsp;<SPAN>&nbsp;#</SPAN><A class="" href="https://community.sap.com/t5/c-khhcw49343/SAP+HANA+Cloud/pd-p/73554900100800002881" target="_blank">SAP HANA Cloud</A><SPAN>&nbsp;</SPAN>&nbsp;</STRONG></P><P><STRONG>#whatsnewinsaphanacloud</STRONG></P> 2026-01-14T11:36:32.644000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/code-connect-2026-is-coming-mark-your-calendars/ba-p/14307923 Code Connect 2026 is Coming – Mark Your Calendars!
2026-01-16T10:17:39.514000+01:00 BirgitS https://community.sap.com/t5/user/viewprofilepage/user-id/41902 <P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Code Connect logo" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/361588i1404B9EF84CF278E/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="CodeConnectBanner.png" alt="CodeConnectBanner.png" /></span></SPAN></P><P>&nbsp;</P><P><SPAN>We’re excited to announce the return of&nbsp;<A href="https://code-connect.dev/" target="_blank" rel="noopener nofollow noreferrer"><STRONG>Code Connect</STRONG>&nbsp;</A>for its third edition, bringing together three dynamic events - <STRONG>UI5con, reCAP, and HANA Tech Con</STRONG> - under one roof. Mark your calendars for&nbsp;<STRONG>July 13 to 16, 2026</STRONG>, and join us in&nbsp;<STRONG>St. Leon-Rot, Germany</STRONG>, or online.</SPAN></P><P>&nbsp;</P><H2 id="toc-hId-1787830851"><SPAN>What is Code Connect?</SPAN></H2><P><SPAN>Code Connect creates a unique opportunity to experience three specialized events in one location: <A href="https://openui5.org/ui5con/" target="_blank" rel="noopener nofollow noreferrer">UI5con</A>, <A href="https://recap-conf.dev/" target="_blank" rel="noopener nofollow noreferrer">reCAP</A>, and <A href="https://hanatech.community/" target="_blank" rel="noopener nofollow noreferrer">HANA Tech Con</A>, allowing you to dive deep into different aspects of SAP development. 
Code Connect is designed for developers at every level: Whether you're an SAP veteran or just starting out, this is your chance to&nbsp;connect, learn, and innovate&nbsp;alongside a vibrant community of developers.</SPAN></P><P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Collage of photos from past Code Connect events" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/361582i126BE1733C3ACA09/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="Collage.png" alt="Collage.png" /></span></SPAN></P><P>&nbsp;</P><H3 id="toc-hId-1720400065">Your Week at Code Connect</H3><H3 id="toc-hId-1523886560"><SPAN>July 13: Code Jam Sessions and Warmup</SPAN></H3><P><SPAN>Kick things off with our&nbsp;Code Jam sessions - a hands-on way to sharpen your skills before the main event. Afterward, join us for a&nbsp;pre-conference networking event&nbsp;to meet fellow attendees in a relaxed setting.</SPAN></P><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="Logo UI5con" style="width: 200px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/361598i6723ED463E741644/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="UI5con2.png" alt="UI5con2.png" /></span></P><H3 id="toc-hId-1327373055">&nbsp;</H3><H3 id="toc-hId-1130859550">&nbsp;</H3><H3 id="toc-hId-934346045"><SPAN>July 14: UI5con </SPAN></H3><P>The official program kicks off with UI5con, bringing together UI5 enthusiasts to share insights, explore the latest innovations, and build new connections. 
Expect expert sessions, interactive workshops, and plenty of opportunities to engage with the UI5 community.</P><P><SPAN><A href="https://openui5.org/ui5con/" target="_blank" rel="noopener nofollow noreferrer">Learn more about UI5con</A>.</SPAN></P><P>&nbsp;</P><P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="Logo reCAP" style="width: 200px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/361611i7856787EDD6734D2/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="reCAP_3.png" alt="reCAP_3.png" /></span></SPAN></P><H5 id="toc-hId-995997978">&nbsp;</H5><H5 id="toc-hId-799484473">&nbsp;</H5><H5 id="toc-hId-602970968">&nbsp;</H5><H5 id="toc-hId-406457463">&nbsp;</H5><H3 id="toc-hId--123452849"><SPAN>July 15: reCAP</SPAN></H3><P>The next day focuses on the SAP Cloud Application Programming Model (CAP). At reCAP, developers, customers, and partners meet the CAP Product Team to discuss technical concepts, share project experiences, and explore future possibilities.</P><P><SPAN><A href="https://recap-conf.dev/" target="_blank" rel="noopener nofollow noreferrer">Learn more about reCAP</A>.</SPAN></P><P>&nbsp;</P><P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="Logo HANA Tech Con" style="width: 150px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/361578iD47004A3E8E22283/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="HANA_tech_con.png" alt="HANA_tech_con.png" /></span></SPAN></P><P>&nbsp;</P><P>&nbsp;</P><P>&nbsp;</P><H3 id="toc-hId--319966354">&nbsp;</H3><H3 id="toc-hId--516479859">July 16: HANA Tech Con</H3><P>The week concludes with HANA Tech Con. Delve into the HANA universe and join development experts, users, and partners to exchange knowledge and ignite new ideas. 
If you've ever had questions about HANA that you haven't found answers to, this is your chance to get them resolved.</P><P><SPAN><A href="https://hanatech.community/" target="_blank" rel="noopener nofollow noreferrer">Learn more about HANA Tech Con</A>.</SPAN></P><P><SPAN>&nbsp;</SPAN></P><H2 id="toc-hId--419590357">Early Bird Process</H2><P><SPAN>Planning a longer trip? We offer a limited number of early bird tickets for attendees who need to arrange travel well in advance. </SPAN></P><P>Check our <SPAN><A href="https://code-connect.dev/faq.html" target="_blank" rel="noopener nofollow noreferrer">FAQ document</A></SPAN> for details on how to secure your ticket.</P><P>&nbsp;</P><H2 id="toc-hId--616103862"><SPAN>Sponsorship Opportunities</SPAN></H2><P>Be a part of Code Connect 2026 and become a sponsor to support the developer community at UI5con, reCAP, and HANA Tech Con. By sponsoring Code Connect, you gain access to a diverse audience spanning front-end developers, backend specialists, and database experts – all in one event.</P><P><SPAN>Read our <A href="https://cap.cloud.sap/resources/events/Code_Connect_2026_Sponsor_Packages.pdf" target="_blank" rel="noopener nofollow noreferrer">sponsorship prospectus</A> to check our sponsorship opportunities.</SPAN></P><P>&nbsp;</P><H2 id="toc-hId--812617367"><SPAN>Call for Proposals</SPAN></H2><P>Our Call for Speakers runs from 26 January to 13 March. As a speaker, you’ll be an active part of Code Connect, shaping the conversation, sharing your expertise, and inspiring the developer community. 
Don’t wait and submit your session proposal by 13 March at the latest.</P><UL><LI><SPAN><A href="https://ui5con.cfapps.eu12.hana.ondemand.com/" target="_blank" rel="noopener nofollow noreferrer">Call for proposals UI5con</A></SPAN></LI><LI><SPAN><A href="https://recap.cfapps.eu12.hana.ondemand.com/" target="_blank" rel="noopener nofollow noreferrer">Call for proposals reCAP</A></SPAN></LI><LI><SPAN><A href="https://hanatech.cfapps.eu12.hana.ondemand.com/" target="_blank" rel="noopener nofollow noreferrer">Call for proposals HANA Tech Con</A></SPAN></LI></UL><H2 id="toc-hId--1009130872">&nbsp;</H2><H2 id="toc-hId--1205644377"><SPAN>Important Dates</SPAN></H2><UL><LI><SPAN>Call for Proposals: January 26, 2026 to March 13, 2026.</SPAN></LI><LI><SPAN>Registration Opens: April 10, 2026</SPAN></LI><LI><SPAN>Agenda Published: Early June 2026</SPAN></LI><LI><SPAN>Code Connect Week: July 13 to 16, 2026</SPAN></LI></UL><P><SPAN>&nbsp;</SPAN></P><H2 id="toc-hId--1402157882"><SPAN>Ready to Connect?</SPAN></H2><P><SPAN><A href="https://code-connect.dev/" target="_blank" rel="noopener nofollow noreferrer">Code Connect 2026</A> represents more than just learning opportunities. It's about building relationships, sharing knowledge, and being part of a community that's shaping the future of SAP development.</SPAN></P><P><SPAN>Join us at Code Connect 2026 and be part of a community driving the future of technology. 
We can't wait to see you there!</SPAN></P><P>&nbsp;</P> 2026-01-16T10:17:39.514000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219 Good to know: Analyzing Runtime Dump (SAP HANA Dump Analyzer vs Visual Studio Code) 2026-01-20T08:53:33.889000+01:00 Laszlo_Thoma https://community.sap.com/t5/user/viewprofilepage/user-id/170406 <P><ul><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219#toc-hId-1659456364">Why was this blog post created?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219#toc-hId-1462942859">Where can I find the most important information about the SAP HANA Dump Analyzer and Visual Studio Code - Supportability tools for SAP HANA?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219#toc-hId-1266429354">What are the experiences based on the comparison of the tools?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219#toc-hId-1069915849">Generated report in SAP HANA Dump Analyzer</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219#toc-hId-873402344">Generated
report in Visual Studio Code - Supportability tools for SAP HANA</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219#toc-hId-676888839">What is the conclusion?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219#toc-hId-480375334">Other articles</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219#toc-hId-283861829">Do you have further questions?</a></li><li style="list-style-type:disc; margin-left:0px; margin-bottom:1px;"><a href="https://community.sap.com/t5/technology-blog-posts-by-sap/good-to-know-analyzing-runtime-dump-sap-hana-dump-analyzer-vs-visual-studio/ba-p/14310219#toc-hId-87348324">Contribution</a></li></ul></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SAP_Community_Blog_Banner_2026.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362879iF2F69BC5373CC00D/image-size/large?v=v2&amp;px=999" role="button" title="SAP_Community_Blog_Banner_2026.png" alt="SAP_Community_Blog_Banner_2026.png" /></span></P><P class="lia-align-right" style="text-align : right;"><FONT color="#FF0000">last updated: 2026-01-20</FONT></P><H1 id="toc-hId-1658532033" id="toc-hId-1659456364">Why was this blog post created?</H1><P>SAP HANA Dump Analyzer is a well-known and long-used tool.&nbsp;The same analysis can be done in Visual Studio Code - Supportability tools for SAP HANA (SAP Extension).&nbsp;It's worth getting acquainted with the new tool.</P><H1
id="toc-hId-1462942859">Where can I find the most important information about the SAP HANA Dump Analyzer and Visual Studio Code - Supportability tools for SAP HANA?</H1><P><SPAN>SAP Community Article:&nbsp;</SPAN><A class="" href="https://blogs.sap.com/2023/03/29/where-can-i-find-knowledge-and-information-belongs-to-sap-hana/" target="_blank" rel="noopener noreferrer">Where can I find knowledge and information belongs to SAP HANA?</A></P><UL><LI><SPAN>section&nbsp;</SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/where-can-i-find-knowledge-and-information-belongs-to-sap-hana/ba-p/13562344#toc-hId--22403863" target="_blank">Runtime Dump (RTE)</A></LI><LI><SPAN>section&nbsp;</SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/where-can-i-find-knowledge-and-information-belongs-to-sap-hana/ba-p/13562344#toc-hId--1819355222" target="_blank">SAP HANA Dump Analyzer</A></LI></UL><P><SPAN>SAP Community Article:&nbsp;<A class="" href="https://blogs.sap.com/2023/06/02/where-can-i-find-information-about-the-available-tools-for-sap-hana-all-types-of-use/" target="_blank" rel="noopener noreferrer">Where can I find information about the available tools for SAP HANA (all types of use)?</A></SPAN></P><UL><LI><SPAN>section&nbsp;</SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/where-can-i-find-information-about-the-available-tools-for-sap-hana-all/ba-p/13549330#toc-hId-2066311963" target="_blank">Visual Studio Code</A>&nbsp;-&nbsp;<A href="https://community.sap.com/t5/technology-blog-posts-by-sap/where-can-i-find-information-about-the-available-tools-for-sap-hana-all/ba-p/13549330#toc-hId-1351552132" target="_blank">Supportability tools for SAP HANA</A></LI><LI><SPAN>section&nbsp;</SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/where-can-i-find-information-about-the-available-tools-for-sap-hana-all/ba-p/13549330#toc-hId-1448441634" target="_blank">SAP HANA Dump Analyzer</A></LI></UL><H1 
id="toc-hId-1266429354">What are the experiences based on the comparison of the tools?</H1><P>The same analysis can be performed with both tools. The presentation is slightly different, with Visual Studio Code providing a fresher, more modern look, but the information in the reports is exactly the same.&nbsp;If Visual Studio Code is already installed, you can access a very useful analytical tool by just adding the necessary SAP extension.<BR /><BR /></P><TABLE border="1"><TBODY><TR><TD width="33.333333333333336%">&nbsp;</TD><TD width="33.333333333333336%"><STRONG>SAP HANA Dump Analyzer</STRONG></TD><TD width="33.333333333333336%"><STRONG>Visual Studio Code - Supportability tools for SAP HANA</STRONG></TD></TR><TR><TD width="33.333333333333336%"><STRONG>Design</STRONG></TD><TD width="33.333333333333336%">Old school.</TD><TD width="33.333333333333336%">Modern.</TD></TR><TR><TD width="33.333333333333336%"><STRONG>Look and feel</STRONG></TD><TD width="33.333333333333336%">Well structured.</TD><TD width="33.333333333333336%">Well structured but more compact.</TD></TR><TR><TD><STRONG>What will be the generated result?</STRONG></TD><TD>.HTML file</TD><TD>Result will be opened in the application.</TD></TR><TR><TD><STRONG>Can the various details and data be cut and copied?</STRONG></TD><TD>Yes.</TD><TD>Yes.</TD></TR><TR><TD width="33.333333333333336%"><STRONG>Export</STRONG></TD><TD width="33.333333333333336%">.HTML file</TD><TD width="33.333333333333336%">N/A</TD></TR></TBODY></TABLE><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SAP_Community_Blog_Image_SAPHanaDumpAnalyzer_vs_SupportabilityToolsForSAPHANA.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362912iB55FE7BBBF434F73/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="SAP_Community_Blog_Image_SAPHanaDumpAnalyzer_vs_SupportabilityToolsForSAPHANA.png"
alt="SAP_Community_Blog_Image_SAPHanaDumpAnalyzer_vs_SupportabilityToolsForSAPHANA.png" /></span></P><H1 id="toc-hId-1069915849">Generated report in&nbsp;SAP HANA Dump Analyzer</H1><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="SAP_Hana_Dump_Analyzer.png" style="width: 757px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362916iE1AEC815DAEBCB3B/image-size/large?v=v2&amp;px=999" role="button" title="SAP_Hana_Dump_Analyzer.png" alt="SAP_Hana_Dump_Analyzer.png" /></span></P><P>The example below shows an outcome generated via the <STRONG>"Auto Analyzer"</STRONG> option. The other option is <STRONG>"Expert Mode"</STRONG> --&gt; "Runtime Dump Mini Check". In some cases both types of output need to be generated to better understand the issue.</P><P>The Summary Info tab is where system-related information and detected issues are presented.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DA_1.png" style="width: 829px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362952i722C8882DEA5DC53/image-size/large?v=v2&amp;px=999" role="button" title="DA_1.png" alt="DA_1.png" /></span></P><P>In this example, the HANA Workload Analysis tab exists because this kind of issue has been identified.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DA_2.png" style="width: 825px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362953i22EC52511864CD48/image-size/large?v=v2&amp;px=999" role="button" title="DA_2.png" alt="DA_2.png" /></span></P><P>The pie chart and the highlighted statement hash give us important information regarding the issue.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DA_3.png" style="width: 819px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362955i6D6F64640EA6FE3E/image-size/large?v=v2&amp;px=999" role="button" title="DA_3.png" alt="DA_3.png"
/></span></P><P>In this example, the Blocked Transactions tab exists because this kind of issue has been identified.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DA_4.png" style="width: 829px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362956i6048462415895177/image-size/large?v=v2&amp;px=999" role="button" title="DA_4.png" alt="DA_4.png" /></span></P><P>With the Blocked Transaction Wait Graph option, we can get further details.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="DA_5.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362957i2383E7EC3792D1E1/image-size/large?v=v2&amp;px=999" role="button" title="DA_5.png" alt="DA_5.png" /></span></P><H1 id="toc-hId-873402344">Generated report in&nbsp;Visual Studio Code - Supportability tools for SAP HANA</H1><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Visual_Studio_Code_Supportability_Tools_for_SAP_HANA.png" style="width: 757px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362917iFCD0491A84E30AD4/image-size/large?v=v2&amp;px=999" role="button" title="Visual_Studio_Code_Supportability_Tools_for_SAP_HANA.png" alt="Visual_Studio_Code_Supportability_Tools_for_SAP_HANA.png" /></span></P><P>In Visual Studio Code, we need to create a work folder and import the files.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="VSC_0.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/363262i8E3933B5B128BEE0/image-size/large?v=v2&amp;px=999" role="button" title="VSC_0.png" alt="VSC_0.png" /></span></P><P>In this example, the High Workload tab exists because this kind of issue has been identified.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="VSC_2.png" style="width: 999px;"><img
src="https://community.sap.com/t5/image/serverpage/image-id/362959iCC44EC376FBF506C/image-size/large?v=v2&amp;px=999" role="button" title="VSC_2.png" alt="VSC_2.png" /></span></P><P>In this example, the Many Transactions Blocked tab exists because this kind of issue has been identified.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="VSC_1.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/362961i1A76238DA7B80497/image-size/large?v=v2&amp;px=999" role="button" title="VSC_1.png" alt="VSC_1.png" /></span></P><H1 id="toc-hId-1068991518" id="toc-hId-676888839">What is the conclusion?</H1><P>A complete analysis can be performed using Visual Studio Code without installing the previously used SAP HANA Dump Analyzer. However, the old tool's report can easily be exported by saving it as a simple .html file, and with this action the entire content can be stored offline and attached to an email, case, etc.</P><H1 id="toc-hId-675964508" id="toc-hId-480375334"><SPAN>Other articles</SPAN></H1><P><span class="lia-unicode-emoji" title=":writing_hand:">✍️</span>&nbsp;<A href="https://blogs.sap.com/2023/03/29/where-can-i-find-knowledge-and-information-belongs-to-sap-hana/" target="_blank" rel="noopener noreferrer">Where can I find knowledge and information belongs to SAP HANA?</A><BR /><span class="lia-unicode-emoji" title=":writing_hand:">✍️</span>&nbsp;<A href="https://blogs.sap.com/2023/06/02/where-can-i-find-information-about-the-available-tools-for-sap-hana-all-types-of-use/" target="_blank" rel="noopener noreferrer">Where can I find information about the available tools for SAP HANA (all types of use)?</A></P><H1 id="toc-hId-479451003" id="toc-hId-283861829">Do you have further questions?</H1><P>Please do not hesitate to contact me if you have a question or an observation regarding the article.<BR />Q&amp;A link for SAP HANA:<SPAN>&nbsp;</SPAN><A href="https://answers.sap.com/tags/73554900100700000996" target="_blank"
rel="noopener noreferrer">https://answers.sap.com/tags/73554900100700000996</A>&nbsp;</P><H1 id="toc-hId-282937498" id="toc-hId-87348324">Contribution</H1><P>If you find any information missing from the topic, please let me know. I am happy to add the new content. My intention is to maintain the content continuously to keep the info up-to-date.</P><P><FONT color="#999999"><STRONG>Release Information</STRONG></FONT></P><TABLE width="100%" cellspacing="1"><TBODY><TR><TD height="58px"><FONT color="#999999">Release Date</FONT></TD><TD height="58px"><FONT color="#999999">Description</FONT></TD></TR><TR><TD height="30px"><FONT color="#999999">2026.01.20</FONT></TD><TD height="30px"><FONT color="#999999">First/initial release of the SAP Blog Post documentation (Technical Article).</FONT></TD></TR></TBODY></TABLE> 2026-01-20T08:53:33.889000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/sap-rpt-1-why-is-it-essential-for-predicting-business-outcomes-in-today-s/ba-p/14314375 SAP RPT‑1: Why is it Essential for Predicting Business Outcomes in Today’s and Future Generative AI? 2026-01-26T05:55:37.670000+01:00 MajoMartinez https://community.sap.com/t5/user/viewprofilepage/user-id/14892 <P>Ever since the introduction of ChatGPT in November 2022, Generative AI has reshaped the AI industry. Companies like Amazon, Google, Microsoft, and of course SAP have accelerated innovation, recognizing the enormous value Generative AI and Large Language Models (LLMs) bring to enterprise operations.&nbsp;</P><P>General‑purpose LLMs excel at understanding language, reasoning over text, and identifying patterns in unstructured information. They are creative, adaptive, and capable of leveraging data from files, documents, and multimedia across diverse data systems.</P><P><FONT size="5">The Motive</FONT></P><DIV><P>However, when it comes to actually <STRONG>predicting business outcomes</STRONG>, LLMs fall short.
Why?<BR />Because they are&nbsp;not designed for high‑precision, multi‑step reasoning over large, enterprise‑grade <STRONG>tabular datasets</STRONG>.</P><P>And this is where the majority of business data resides: in structured tables, such as GL accounts, invoices, inventory, sales records, financial transactions, and countless others. Not in free‑form text.</P><DIV><P>Everyone wants to be able to predict business outcomes like:</P><DIV><UL><LI>What is the probability of converting a sale?</LI><LI>Which customers are likely to pay late?</LI><LI>Who is at risk of churn?</LI><LI>...</LI></UL></DIV><DIV>Historically, answering these questions required traditional machine learning like classification models, linear regression, etc. (a.k.a. Traditional AI or Narrow AI). The problem with this is that classical ML requires training a model per task, which can easily lead to hundreds of separate models that are hard to maintain in the long run, making it cumbersome, expensive, and extremely difficult to scale.</DIV><DIV>&nbsp;</DIV><DIV><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="rpt blog.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/365028iEA5B2D61306B8EB7/image-size/large/is-moderation-mode/true?v=v2&amp;px=999" role="button" title="rpt blog.png" alt="rpt blog.png" /></span></DIV><DIV>&nbsp;</DIV><DIV><FONT size="5">The Solution: SAP RPT-1</FONT></DIV><DIV>&nbsp;</DIV><DIV>To solve this,&nbsp;SAP introduced at TechEd 2025 the first foundation model specifically designed for structured enterprise data: the Relational Pre‑Trained Transformer 1 (RPT‑1).&nbsp;The model is trained natively on tabular business datasets and is engineered to understand rows, columns, joins, and business semantics out of the box.&nbsp;As the name suggests, RPT‑1 is:</DIV><UL><LI><STRONG>Relational</STRONG>: optimized for structured relational business data</LI><LI><STRONG>Pre‑trained</STRONG>: powered by tens of thousands of
GPU hours (no more endless classical ML training cycles)</LI><LI><STRONG>Transformer</STRONG>‑based: performs logical, not linguistic, transformations (e.g., filtering, joining, aggregating, and multi‑step reasoning)</LI></UL><P>Instead of hundreds of ML models, you can now use a single foundation model for many predictive tasks.</P><P>One of the most disruptive features of RPT‑1 is its in‑context learning capability. Instead of training or fine‑tuning multiple models, you simply provide historical rows of data and request predictions for new rows. Prediction cases include:</P><UL><LI>Customer churn prediction</LI><LI>Late delivery prediction</LI><LI>Late payment prediction</LI><LI>Sales conversion prediction</LI><LI>And many more</LI></UL><P><STRONG>RPT‑1 Performance Benchmark</STRONG></P><P>RPT-1 outperforms both LLMs and classical ML for tabular data and high‑value agentic use cases. Here are some stats so far:</P><UL><LI>50x faster than LLMs</LI><LI>Up to 2x prediction quality vs Narrow AI/ML models</LI><LI>Up to 3.5x prediction quality vs LLMs</LI><LI>100,000x fewer GPU FLOPs</LI><LI>50,000x less energy consumption vs LLMs*</LI></UL><P data-unlink="true">*comparable tasks on an NVIDIA H100, as a benchmark.
Sources (<A href="https://youtu.be/X9qHsLmPMk4?t=946&amp;si=1--wB78x9otamQHy" target="_self" rel="nofollow noopener noreferrer">TechEd</A>, <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/a-new-paradigm-for-enterprise-ai-in-context-learning-for-relational-data/ba-p/14260221" target="_self">Blog</A>).</P><P><STRONG>Why are these stats so important?</STRONG> On top of the benefits you can already imagine, such as time and resource savings, consider that according to Gartner over 40% of agentic-AI projects are likely to be canceled by the end of 2027, mainly because of high cost, complexity, or unclear business value (<A href="https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027" target="_self" rel="nofollow noopener noreferrer">source</A>). RPT‑1 directly addresses these challenges.</P><P>&nbsp;</P><P><FONT size="5">Sales Conversion Prediction Example Use Case</FONT></P><DIV><P>During SAP TechEd, a compelling demo showcased how RPT‑1 integrates with SAP’s Agentic AI.</P><P>Imagine a sales team wanting to prioritize leads based on their likelihood to convert.
Instead of manually training an ML model, a data analyst can use Joule to generate SQL code leveraging <STRONG>RPT‑1’s <CODE>PREDICT</CODE> function</STRONG>.</P><P>Steps include:</P><OL><LI>Join sales inquiries, historical performance, customer attributes, and other relevant data into a Data Product in SAP Business Data Cloud.</LI><LI>Expose the Data Product to SAP HANA Cloud via zero‑copy.</LI><LI>Use RPT‑1 to instantly generate a new prediction column "sales_conversion_probability", without training or tuning.</LI></OL><P>The model derives patterns directly from the historical data and produces high‑quality prediction scores.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RPT1 on HANA Cloud - joule sql code.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/365060iB4B5A3378FD6036C/image-size/large?v=v2&amp;px=999" role="button" title="RPT1 on HANA Cloud - joule sql code.png" alt="RPT1 on HANA Cloud - joule sql code.png" /></span></P><P data-unlink="true">You can watch the full&nbsp;demo&nbsp;<A href="https://youtu.be/X9qHsLmPMk4?t=2123&amp;si=pmMuIfGmZJ3U5MsL" target="_self" rel="nofollow noopener noreferrer">here</A>&nbsp;(jump to minute 35:23).</P><P data-unlink="true">&nbsp;</P><P><FONT size="5">Available RPT‑1 Versions</FONT></P><P>SAP RPT‑1 has been generally available since Q4 2025. 
You can choose from:</P><DIV><UL><LI><STRONG>SAP RPT‑1 Small</STRONG>: optimized for speed and efficiency</LI><LI><STRONG>SAP RPT‑1 Large:</STRONG> optimized for highest accuracy by using more capacity</LI><LI><STRONG>SAP RPT‑1 OSS</STRONG> (Open Source): available on <A href="https://huggingface.co/sap/sap-rpt-1-oss" target="_self" rel="nofollow noopener noreferrer">HuggingFace</A> and <A href="https://github.com/SAP-samples/sap-rpt-1-oss" target="_self" rel="nofollow noopener noreferrer">GitHub</A> for exploration and learning</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="rpt on genai hub.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/365061iB9E0A36A349BD445/image-size/large?v=v2&amp;px=999" role="button" title="rpt on genai hub.png" alt="rpt on genai hub.png" /></span></P><P><FONT size="5">Try it yourself!</FONT></P><P>Don’t have access to a BTP account yet? Here is an entry point:</P><UL><LI>Sign up for a 30-day SAP Generative AI Hub <A href="https://www.sap.com/products/artificial-intelligence/generative-ai-hub-trial.html" target="_self" rel="noopener noreferrer">trial</A></LI><LI>Follow this step-by-step <A href="https://community.sap.com/t5/artificial-intelligence-blogs-posts/sap-rpt-1-a-step-by-step-guide-on-getting-started/ba-p/14290171" target="_self">guide</A> by&nbsp;<a href="https://community.sap.com/t5/user/viewprofilepage/user-id/1698208">@sherene_tan</a>&nbsp;to get started</LI></UL><P>&nbsp;</P><P><FONT size="5">Conclusion</FONT></P><P>SAP RPT‑1 represents a major shift in how enterprises will deliver predictive insights in the GenAI era. While LLMs excel at language understanding, they are not built for structured, relational business data. 
RPT‑1 closes this gap and eliminates the complexity of traditional machine learning,&nbsp;drastically reducing cost and compute.</P><P>With SAP RPT-1's foundation model purpose‑built for enterprise-grade tabular data, the objective<SPAN>&nbsp;is to enable organizations to operationalize predictive insights faster, more efficiently, and more accurately than ever before.</SPAN></P></DIV></DIV></DIV></DIV> 2026-01-26T05:55:37.670000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/developing-hana-ml-models-with-sap-databricks/ba-p/14317905 Developing HANA ML models with SAP Databricks 2026-02-04T14:10:52.496000+01:00 nidhi_sawhney https://community.sap.com/t5/user/viewprofilepage/user-id/218133 <H2 id="toc-hId-1788754312"><FONT size="6">Introduction</FONT></H2><P><FONT size="4">SAP HANA natively provides a rich set of Machine Learning capabilities, which can be used via SQL or a Python interface. For an introduction to these capabilities you can refer to&nbsp;<A href="https://pypi.org/project/hana-ml" target="_blank" rel="nofollow noopener noreferrer">HANA Machine Learning</A>, the&nbsp;<A title="Developing Regression Models with the Python Machine Learning Client for SAP HANA" href="https://learning.sap.com/learning-journeys/developing-regression-models-with-the-python-machine-learning-client-for-sap-hana" target="_blank" rel="noopener noreferrer">Developing Regression Models with the Python Machine Learning Client for SAP HANA</A><SPAN>&nbsp;</SPAN><SPAN>learning journey, and this excellent<A href="https://community.sap.com/t5/technology-blog-posts-by-sap/hands-on-tutorial-leverage-sap-hana-machine-learning-in-the-cloud-through/ba-p/13495327" target="_self"> blog post</A> from&nbsp;<a href="https://community.sap.com/t5/user/viewprofilepage/user-id/45487">@YannickSchaper</a>.</SPAN></FONT></P><P><FONT size="4"><SPAN>In this blog post I will walk through how hana-ml can be enhanced with the model tracking capabilities provided 
by&nbsp;&nbsp;<A href="https://mlflow.org/" target="_self" rel="nofollow noopener noreferrer">mlflow</A>&nbsp;. The python package hana-ml has supported the tracking and usability of trained ml models via mlflow which are covered extensively in these 2-part blogposts&nbsp;<A href="https://community.sap.com/t5/technology-blog-posts-by-sap/tracking-hana-machine-learning-experiments-with-mlflow-a-conceptual-guide/ba-p/13688478" target="_self">tracking-hana-machine-learning-experiments-with-mlflow-a-conceptual-guide</A>&nbsp;from&nbsp;<a href="https://community.sap.com/t5/user/viewprofilepage/user-id/39047">@stojanm</a>&nbsp;and&nbsp;<a href="https://community.sap.com/t5/user/viewprofilepage/user-id/43098">@martinboeckling</a>&nbsp;. In this post I will focus on the&nbsp;<A href="https://docs.databricks.com/aws/en/mlflow/#databricks-managed-mlflow" target="_self" rel="nofollow noopener noreferrer">Databricks managed mlflow</A>&nbsp;as it greatly eases the use of mlflow without having to setup the mlflow server. These capabilities are available both in SAP Databricks from SAP Business Data Cloud(BDC) and Enterprise Databricks for customers who connect Databricks to BDC via bdc-connect. For this blogpost I will be using SAP Databricks provisioned with SAP Business Data Cloud.</SPAN></FONT></P><P><FONT size="4"><SPAN>With the launch of SAP Business Data Cloud, developers have a much more streamlined access to AI/ML capabilities both from SAP and Databricks. This applies to data available via the Unity Catalog or accessible via the SQL access. 
Here I will focus on the notebook capabilities, and on training and inference on datasets in the HANA Cloud layer that are accessed via SQL and utilize the compute of HANA Cloud.</SPAN></FONT></P><P><FONT size="4"><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="BDC_AIML.png" style="width: 645px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367313i701B003012738597/image-dimensions/645x282?v=v2" width="645" height="282" role="button" title="BDC_AIML.png" alt="BDC_AIML.png" /></span></SPAN></FONT></P><P><FONT size="4">The datasets in HANA Cloud can be data persisted in HANA, data remotely available via federation from HDLFS, or BDC Data Products installed to the embedded HANA Cloud from Datasphere.</FONT></P><H2 id="toc-hId-1395727302"><FONT size="5">Connect to ML datasets on HANA Cloud</FONT></H2><P><FONT size="4">To connect to data on HANA Cloud, be it the embedded HANA Cloud of SAP Datasphere or a stand-alone HANA Cloud, one needs four parameters: the URL, the port (443), a username, and a password.</FONT></P><H5 id="toc-hId-1586461954"><FONT size="4">Prerequisites</FONT></H5><P><FONT size="4">In addition, the HANA Cloud or Datasphere instance needs to have the Databricks IP added to its allow-list to enable the connection. 
</FONT></P><P><FONT size="4">The database user needs to have the following privileges, which are granted by the HANA Cloud or Datasphere administrator:</FONT></P><OL><LI><FONT size="4">AFL__SYS_AFL_AFLPAL_EXECUTE_WITH_GRANT_OPTION</FONT></LI><LI>AFL__SYS_AFL_APL_AREA_EXECUTE</LI><LI>AFLPM_CREATOR_ERASER_EXECUTE</LI></OL><P>For Datasphere, these privileges are enabled when the administrator creates the database user with OpenSQL access and enables APL and PAL.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="DSP_MLUSER.png" style="width: 418px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367324i516A7E25793BD389/image-dimensions/418x367?v=v2" width="418" height="367" role="button" title="DSP_MLUSER.png" alt="DSP_MLUSER.png" /></span></P><H3 id="toc-hId--515556398">Connect to HANA from Databricks</H3><P><FONT size="4">Here is a code snippet to connect to the HANA Cloud SQL layer for data access using Databricks secrets. 
For this you need to create the secrets needed for&nbsp;HANA Cloud connectivity, as in the snippet below.</FONT></P><pre class="lia-code-sample language-python"><code>from databricks.sdk import WorkspaceClient

w = WorkspaceClient()
scope = "&lt;scope-name&gt;"
w.secrets.create_scope(scope)

url = "&lt;hana-url&gt;"
port = "443"  # stored as a string, since put_secret expects string values
user = "&lt;hana-db-user&gt;"
password = "&lt;hana-db-password&gt;"

w.secrets.put_secret(scope, "hana_url", string_value=url)
w.secrets.put_secret(scope, "hana_port", string_value=port)
w.secrets.put_secret(scope, "hana_user", string_value=user)
w.secrets.put_secret(scope, "hana_password", string_value=password)</code></pre><P class="lia-align-center" style="text-align: center;"><EM>create_secrets</EM></P><pre class="lia-code-sample language-python"><code>import os

import hana_ml
import mlflow
from hana_ml import dataframe

print("hana_ml version:", hana_ml.__version__)
print("mlflow version:", mlflow.__version__)

scope = "&lt;scope_name&gt;"
os.environ['HANA_ADDRESS'] = dbutils.secrets.get(scope=scope, key="hana_url")
os.environ['HANA_PORT'] = dbutils.secrets.get(scope=scope, key="hana_port")
os.environ['HANA_UNAME'] = dbutils.secrets.get(scope=scope, key="hana_user")
os.environ['HANA_PASS'] = dbutils.secrets.get(scope=scope, key="hana_password")

cc = dataframe.ConnectionContext(
    address=os.environ['HANA_ADDRESS'],
    port=os.environ['HANA_PORT'],
    user=os.environ['HANA_UNAME'],
    password=os.environ['HANA_PASS']
)
if cc.connection.isconnected():
    print(f'User {os.environ["HANA_UNAME"]} connected to HANA successfully')
    print(f"HANA Version: {cc.hana_version()}")</code></pre><P><FONT size="4">Alternatively, you can also connect via <SPAN>python-dotenv, especially if you are developing locally.</SPAN></FONT></P><H1 id="toc-hId--321777394"><FONT size="5">Develop the ML model with mlflow</FONT></H1><P><FONT size="4">Here I will use a sample dataset provided by the hana-ml package to make it 
easier to test. This would be replaced by the appropriate dataset the user wants to use for training the ML model.</FONT></P><pre class="lia-code-sample language-python"><code>from hana_ml.algorithms.pal.utility import DataSets

# Load the dataset; this creates the corresponding table on HANA Cloud
bike_dataset = DataSets.load_bike_data(cc)

# Number of rows and columns
print("Shape of dataset: {}".format(bike_dataset.shape))
# Columns
print(bike_dataset.columns)
# Types of each column
print(bike_dataset.dtypes())
# Print the first 3 rows of the dataset
print(bike_dataset.head(3).collect())

# Split the dataset into train &amp; test.
# Add an ID column for AutomaticRegression; the last column is the label
bike_dataset = bike_dataset.add_id('ID', ref_col='days_since_2011')
cols = bike_dataset.columns
cols.remove('cnt')
bike_data = bike_dataset[cols + ['cnt']]
bike_train = bike_data.filter('ID &lt;= 600')
bike_test = bike_data.filter('ID &gt; 600')
print(bike_train.head(3).collect())
print(bike_test.head(3).collect())</code></pre><P><FONT size="4">We used a basic splitting methodology above; hana-ml also provides splitting capabilities via&nbsp;<A href="https://help.sap.com/doc/cd94b08fe2e041c2ba778374572ddba9/2025_4_QRC/en-US/pal/algorithms/hana_ml.algorithms.pal.partition.train_test_val_split.html#hana_ml.algorithms.pal.partition.train_test_val_split" target="_self" rel="noopener noreferrer">hana_ml.algorithms.pal.partition.train_test_val_split</A>&nbsp;to assist in this process.</FONT></P><P><FONT size="4">Now that we have a training and a test dataset, we can start the training process and use mlflow to track the results in Databricks experiments via the code below.</FONT></P><pre class="lia-code-sample language-python"><code>mlflow.set_tracking_uri("databricks")
experiment_path = '&lt;experiment_path&gt;'
mlflow.set_experiment(experiment_path)

# Here we are using AutomaticRegression to show the metrics automatically
# created and tracked via mlflow
from hana_ml.algorithms.pal.auto_ml import AutomaticClassification, AutomaticRegression

auto_r = AutomaticRegression(generations=2, population_size=15, offspring_size=5)

# Enable workload classes if you have them defined on the HANA Cloud instance.
# Here we disable the check, but in productive scenarios you would enable it:
# auto_r.enable_workload_class(workload_class_name="PAL_AUTOML_WORKLOAD")
auto_r.disable_workload_class_check()

try:
    with mlflow.start_run(run_name="hana-ml-autoreg-bike") as run:
        auto_r.enable_mlflow_autologging(is_exported=True)
        auto_r.fit(bike_train, key="ID")
        runid = run.info.run_id
except Exception as e:
    raise e</code></pre><P><FONT size="4">The&nbsp;<EM><STRONG>enable_mlflow_autologging</STRONG> </EM>function above enables the automatic logging of key model metrics, in this case suitable for regression, without any additional effort from the user. These metrics differ based on the algorithm. The user can easily log additional parameters, metrics, and artifacts as desired and supported by mlflow.</FONT></P><P><FONT size="4">When the above code is run, the experiments are logged with the default metrics that the hana-ml model logged automatically via mlflow, for example R2 and RMSE, as shown below.</FONT></P><P><FONT size="4"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Databricks Experiment and mlflow with hana-ml" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367626i93F3415C08B665BB/image-size/large?v=v2&amp;px=999" role="button" title="hana_ml_mlflow_experiment.png" alt="Databricks Experiment and mlflow with hana-ml" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Databricks Experiment and mlflow with hana-ml</span></span><BR /></FONT></P><P>One can then compare different runs and track model progression as parameters are changed.<span class="lia-inline-image-display-wrapper lia-image-align-center" 
image-alt="Compare hana-ml mlflow runs" style="width: 828px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368451i1925A61947F5925C/image-dimensions/828x689?v=v2" width="828" height="689" role="button" title="experiment_run_comparison.png" alt="Compare hana-ml mlflow runs" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Compare hana-ml mlflow runs</span></span></P><P>For inferencing, the hana-ml model can be loaded into HANA Cloud via the code below. The run_id is the run from the Databricks Experiments that you would like to use for inference; it can be obtained from the overview of the experiment.</P><pre class="lia-code-sample language-python"><code>from hana_ml.model_storage import ModelStorage

bikemodel = ModelStorage.load_mlflow_model(connection_context=cc,
                                           model_uri='runs:/{}/model'.format(runid))
# Get the info for the loaded model
bikemodel.mlflow_model_info

# Use the trained model for prediction on a test or new dataset
res = bikemodel.predict(bike_test.deselect('cnt'), key="ID")
print(res.collect())

# Save this table for later use via the Serving Endpoint
bike_test.deselect('cnt').save("INFERENCE_BIKE_DATA_TBL")</code></pre><H1 id="toc-hId--518290899"><FONT size="5">Serve the ML model for inferencing</FONT></H1><P><FONT size="4">The hana-ml model can be served on BTP, if desirable, by exporting the ML model, or by storing and reloading the model for inference from HANA Cloud via&nbsp;<A href="https://help.sap.com/doc/cd94b08fe2e041c2ba778374572ddba9/2025_4_QRC/en-US/hana_ml.model_storage.html#module-hana_ml.model_storage." 
target="_self" rel="noopener noreferrer">hana_ml.model_storage</A>; in this case the HANA Cloud instance needs to be the same for training and inferencing.</FONT></P><P><FONT size="4">Alternatively, it can be served on Databricks via a Serving endpoint, which I describe below.</FONT></P><P><FONT size="4">Databricks does not natively support serving hana-ml models. However, this can be achieved via the<A href="https://mlflow.org/docs/latest/ml/model/models-from-code/" target="_self" rel="nofollow noopener noreferrer"> mlflow.pyfunc</A> functionality to provide custom models. I will be using the "model from code" method, as it has advantages over the legacy methods and is recommended going forward. This requires passing the custom handler as separate code, so we write a Python file which handles the desired input to the serving endpoint. In my example, the user passes in the name of a table which exists in HANA Cloud (in our example we saved it as <CODE>INFERENCE_BIKE_DATA_TBL</CODE>) and which holds the data that needs to be inferenced. The user sends the name of the table to the inference endpoint. 
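</FONT></P><P><FONT size="4">Concretely, the request body for such an endpoint carries nothing but the table name. A minimal sketch of building and parsing that payload (the field name matches the input_example and signature used later in this post):</FONT></P>

```python
import json

# The serving endpoint receives only the name of an existing HANA table;
# the custom handler then reads and scores that table inside HANA Cloud.
payload = {"inputs": {"INFERENCE_TABLE_NAME": "INFERENCE_BIKE_DATA_TBL"}}
body = json.dumps(payload)

# On the serving side, the handler extracts the table name from the input:
table_name = json.loads(body)["inputs"]["INFERENCE_TABLE_NAME"]
print(table_name)  # -> INFERENCE_BIKE_DATA_TBL
```

<P><FONT size="4">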
The code can be modified to have the input, say, as a payload to the inference endpoint; in that case the custom handler function (hana_ml_pyfunc_model) would need to persist the payload as a HANA table so that the hana-ml predict can be called on it.</FONT></P><H3 id="toc-hId--1301610418"><FONT size="4">Create the custom handler for hana-ml</FONT></H3><pre class="lia-code-sample language-python"><code># Save as script: hana_ml_pyfunc_model.py
# %%writefile "./hana_ml_pyfunc_model.py"
import os

import mlflow
from mlflow import pyfunc
from mlflow.models import set_model

import hana_ml
from hana_ml import dataframe
from hana_ml.model_storage import ModelStorage


class hana_ml_pyfunc_model(pyfunc.PythonModel):

    def connectToHANA(self, context):
        try:
            url = os.getenv('hana_url')
            port = os.getenv('hana_port')
            user = os.getenv('hana_user')
            passwd = os.getenv('hana_password')
            connection_context = dataframe.ConnectionContext(url, port, user, passwd)
            return connection_context
        except Exception as e:
            print(f"Exception occurred: {e}")
            raise e

    @mlflow.trace
    def load_context(self, context):
        try:
            with mlflow.start_span("load_context"):
                self.model = context.artifacts["model"]
                self.connection_context = self.connectToHANA(context)
                print("HANA_ML_MODEL loaded in load_context")
        except Exception as e:
            print(f"Exception occurred: {e}")
            raise Exception(f"Loading the context failed due to {e}")

    @mlflow.trace
    def predict(self, context, model_input):
        table_name = None
        try:
            if not self.connection_context.connection.isconnected():
                with mlflow.start_span("connect_to_HANA"):
                    self.connection_context = self.connectToHANA(context)
                if self.connection_context.connection.isconnected():
                    print("HANA Connection Successful")
                else:
                    raise Exception("HANA Connection Failed")
            with mlflow.start_span("load_model"):
                hana_model = ModelStorage.load_mlflow_model(
                    connection_context=self.connection_context,
                    model_uri=self.model,
                    use_temporary_table=False,
                    force=True)
                print("HANA_ML_MODEL loaded in predict")
            print("model_input", model_input)
            table_name = str(model_input["INFERENCE_TABLE_NAME"][0])
            print("Table Name:", table_name)
            with mlflow.start_span("hana_ml_predict"):
                df = self.connection_context.table(table_name)
                if df.count() &gt; 0:
                    print(f"Running HANA ML inference on {table_name} with {df.count()} records")
                    prediction = hana_model.predict(df, key="ID").collect()
                    print("Prediction completed")
                else:
                    raise Exception(f"HANA Inference Table {table_name} is empty")
            return prediction
        except Exception as e:
            print(f"Exception occurred: {e}")
            raise e


set_model(hana_ml_pyfunc_model())</code></pre><H3 id="toc-hId--1694637428"><FONT size="4">Log the custom pyfunc model</FONT></H3><P><FONT size="4">Then we log the above pyfunc model, which can be registered to enable the creation of a Serving endpoint on Databricks.</FONT></P><pre class="lia-code-sample language-python"><code># Create the signature for the model input and output. In this example:
#  - the input is the name of an existing table in HANA which has the data that needs to be inferenced
#  - the output is the "cnt" counts for the bike data and the associated score
import mlflow
from mlflow.models import ModelSignature, infer_signature
from mlflow.types.schema import Schema, ColSpec

signature = ModelSignature(inputs=Schema([ColSpec("string", "INFERENCE_TABLE_NAME")]))
signature.outputs = Schema([ColSpec("integer", "ID"), ColSpec("double", "SCORES")])

mlflow.set_tracking_uri("databricks")
runid = "&lt;run_id&gt;"  # runid from the training phase: the chosen champion model to be served
model_uri = 'runs:/{}/model'.format(runid)
experiment_name = '&lt;experiment_name&gt;'
mlflow.set_experiment(experiment_name)

# This is the file written in the step above; it handles the call to hana_ml
# for predict on the user-provided inference table
model_file = "hana_ml_pyfunc_model.py"

with mlflow.start_run() as run:
    mlflow.pyfunc.log_model(
        artifact_path="model",
        python_model=model_file,
        artifacts={"model": model_uri},
        pip_requirements=["hana-ml", "ipython"],
        signature=signature,
        input_example={"INFERENCE_TABLE_NAME": "INFERENCE_BIKE_DATA_TBL"},
    )

# Register the model to enable it to be served on Databricks
model_uri = f"runs:/{run.info.run_id}/model"
registered_model_name = "&lt;your_model_name&gt;"
mlflow.register_model(model_uri=model_uri, name=registered_model_name)</code></pre><H4 id="toc-hId-2082083542"><FONT size="4">Test the custom pyfunc model</FONT></H4><P><FONT size="4">To test the model logged in the step above, you can call the following code.</FONT></P><pre class="lia-code-sample language-python"><code># Code to test the model logged as a custom pyfunc model, which can be registered and deployed for serving
logged_model = f'runs:/{run.info.run_id}/model'  # run from the pyfunc model logging
dataset = {"inputs": {"INFERENCE_TABLE_NAME": "INFERENCE_BIKE_DATA_TBL"}} 
loaded_model = mlflow.pyfunc.load_model(logged_model)
loaded_model.predict(dataset["inputs"])</code></pre><P><FONT size="4">Additionally, the model can also be tested using the package uv with the code below.</FONT></P><pre class="lia-code-sample language-python"><code>run_id = run.info.run_id  # run from the pyfunc model logging
model_uri = f"runs:/{run_id}/model"
dataset = {"inputs": {"INFERENCE_TABLE_NAME": "INFERENCE_BIKE_DATA_TBL"}}

mlflow.models.predict(
    model_uri=model_uri,
    input_data=dataset["inputs"],
    env_manager="uv",
)</code></pre><H2 id="toc-hId--2019104750"><FONT size="5">Create the Serving Endpoint</FONT></H2><P><FONT size="4">Now we have a registered model that can be deployed for serving. I show the steps to do this via the <A href="https://docs.databricks.com/aws/en/machine-learning/model-serving/store-env-variable-model-serving?language=Serving%C2%A0UI" target="_self" rel="nofollow noopener noreferrer">Databricks Serving UI</A>; it can also be done with code via the REST API or <A href="https://docs.databricks.com/aws/en/machine-learning/model-serving/store-env-variable-model-serving?language=MLflow%C2%A0Deployments%C2%A0SDK" target="_self" rel="nofollow noopener noreferrer">SDKs</A>. Go to Serving and create a new Serving endpoint. 
Choose the registered_model_name from the step above and add the environment variables for the HANA Cloud connection, so the serving code can connect to HANA and call the model inference on the user-provided table name.</FONT></P><P><FONT size="4"><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="Creating Serving Endpoint" style="width: 675px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367624i1C5520D787242C6A/image-dimensions/675x473?v=v2" width="675" height="473" role="button" title="serving_1.png" alt="Creating Serving Endpoint" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Creating Serving Endpoint</span></span></FONT></P><H2 id="toc-hId-478911187"><FONT size="5"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="HANA credentials as environment variables for deployment" style="width: 703px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367627i14AEE525CE5AAD74/image-dimensions/703x581?v=v2" width="703" height="581" role="button" title="serving_2.png" alt="HANA credentials as environment variables for deployment" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">HANA credentials as environment variables for deployment</span></span></FONT></H2><H2 id="toc-hId-282397682"><FONT size="5">Test the Serving Endpoint</FONT></H2><P><FONT size="4">The deployment as usual takes some minutes. 
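</FONT></P><P><FONT size="4">When setting the environment variables, the values can reference the Databricks secrets created earlier instead of plain-text credentials. A small sketch of building that mapping (the <CODE>{{secrets/&lt;scope&gt;/&lt;key&gt;}}</CODE> reference syntax is resolved by Databricks at deployment time; the key names must match what the custom handler reads via os.getenv):</FONT></P>

```python
# Environment variables for the serving endpoint, referencing Databricks
# secrets. "my-scope" is a hypothetical scope name; use the scope you
# created when storing the HANA credentials.
scope = "my-scope"
env_vars = {
    key: f"{{{{secrets/{scope}/{key}}}}}"
    for key in ("hana_url", "hana_port", "hana_user", "hana_password")
}
print(env_vars["hana_url"])  # -> {{secrets/my-scope/hana_url}}
```

<P><FONT size="4">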
Once the serving endpoint is in the ready state, it can be tested in the usual ways by pressing <EM>Use</EM>.</FONT></P><P><FONT size="4">Here is a sample screenshot of testing it in the browser.</FONT></P><H2 id="toc-hId-85884177"><FONT size="5"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Test Serving" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/367628i784E69D8F37597D1/image-size/large?v=v2&amp;px=999" role="button" title="test_serving_1.png" alt="Test Serving" /><span class="lia-inline-image-caption" onclick="event.preventDefault();">Test Serving</span></span></FONT></H2><P>Here is the corresponding code to test via Python:</P><pre class="lia-code-sample language-python"><code>import os
import json
import requests

# Token obtained by following https://docs.databricks.com/aws/en/dev-tools/auth/pat
os.environ['DATABRICKS_TOKEN'] = "&lt;Developer_Token&gt;"

def score_model(dataset):
    url = '&lt;serving_url&gt;'
    headers = {'Authorization': f'Bearer {os.environ.get("DATABRICKS_TOKEN")}',
               'Content-Type': 'application/json'}
    data_json = json.dumps(dataset, allow_nan=True)
    response = requests.request(method='POST', headers=headers, url=url, data=data_json)
    if response.status_code != 200:
        raise Exception(f'Request failed with status {response.status_code}, {response.text}')
    return response.json()

dataset = {'inputs': {'INFERENCE_TABLE_NAME': "&lt;hana_cloud_table_name_for_inference&gt;"}}
res = score_model(dataset)
print(res)</code></pre><H1 id="toc-hId--13739826"><FONT size="5">Endpoint Consumption</FONT></H1><P data-unlink="true"><FONT size="4">The serving endpoint created above can be used in applications via REST API calls. In production the API would need to be secured via <A href="https://docs.databricks.com/aws/en/dev-tools/auth/" target="_self" rel="nofollow noopener noreferrer">OAuth authentication</A>. 
Here is a <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/connecting-sap-analytics-cloud-to-databricks-model-serving-endpoint/ba-p/14290451" target="_self">blog post</A> from&nbsp;<a href="https://community.sap.com/t5/user/viewprofilepage/user-id/239">@Ian_Henry</a>&nbsp;describing how such an endpoint can be triggered from SAP Analytics Cloud, for example.</FONT></P><H1 id="toc-hId--210253331"><FONT size="5">Conclusion</FONT></H1><P><FONT size="4">In this blog post we showed the powerful combination of SAP HANA Cloud with the model experiment tracking &amp; serving capabilities of SAP Databricks via managed mlflow. This is suitable for use cases where the data already resides in the HANA layer and the performance benefit of running hana-ml on data accessible via HANA Cloud in-memory is desirable, while benefiting from the model development support provided by SAP Databricks.</FONT></P><P><FONT size="4">The code for the above is available on&nbsp;<A title="hana-mlflow" href="https://github.com/SAP-samples/hana-ml-samples/tree/main/PAL-Databricks-mlflow" target="_self" rel="nofollow noopener noreferrer">SAP-samples/hana-ml-samples</A>.</FONT></P> 2026-02-04T14:10:52.496000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/consuming-data-products-in-sap-hana-cloud-via-sap-business-application/ba-p/14320009 Consuming Data Products in SAP HANA Cloud via SAP Business Application Studio/SAP Build Code 2026-02-06T19:35:45.324000+01:00 shraddhashetty https://community.sap.com/t5/user/viewprofilepage/user-id/44579 <P>This blog post guides you through the process of consuming SAP HANA Cloud Data Products within SAP Business Application Studio (BAS). We will cover everything from the initial SQL security configurations to creating the final Calculation View.</P><P>Once a Data Product is installed in your SAP HANA Cloud instance, it exists as virtual tables in a specialized schema. 
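</P><P>Note that these generated schema and table names contain dots and mixed case, so any ad-hoc SQL you write against them must use quoted identifiers. A minimal sketch of the quoting rule (wrap in double quotes, doubling any embedded double quote; the names below are shortened, hypothetical examples):</P>

```python
def quote_ident(name):
    """Quote a SQL identifier: double embedded quotes, then wrap in quotes."""
    return '"' + name.replace('"', '""') + '"'

# Shortened, hypothetical Data Product names for illustration only:
schema = "_SAP_DATAPRODUCT_sap_s4com_dataProduct_PurchaseOrder_v1"
table = "purchaseorder.PurchaseOrder"  # the dot is part of the name, not a separator
stmt = f"SELECT COUNT(*) FROM {quote_ident(schema)}.{quote_ident(table)}"
print(stmt)
```

<P>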
To use it in a <STRONG>Cloud Application Programming (CAP)</STRONG> or <STRONG>HANA Native</STRONG> project in BAS, you must access the virtual tables from your HDI container.</P><P>Before you begin, ensure you have followed the initial setup steps to locate and install your data product. I highly recommend completing this guide written by my colleague (<SPAN>Dan Van Leeuwen</SPAN>): <STRONG><A class="" href="https://developers.sap.com/tutorials/hana-cloud-data-products-consumption.html" target="_blank" rel="noopener nofollow noreferrer">Access and Query a Data Product in SAP HANA Cloud</A></STRONG>.</P><P><STRONG>Step 1: Assign the roles &amp; privileges (SQL)</STRONG></P><P>To ensure the technical users (HDI container users) have permission to see the Data Product, we use a <STRONG>role-based</STRONG> approach to manage these permissions.</P><P><STRONG>1. Create dedicated roles:</STRONG> These roles will act as containers for all permissions related to your Purchase Order data.</P><pre class="lia-code-sample language-sql"><code>CREATE ROLE VIRTUAL_TABLE_ON_PURCHASE_ORDER_OO;
CREATE ROLE VIRTUAL_TABLE_ON_PURCHASE_ORDER_RT;</code></pre><P><STRONG>2. Grant SELECT with GRANT OPTION:</STRONG> This is crucial. The WITH GRANT OPTION allows the HDI container user to share these privileges further (e.g., with end-user reporting tools).</P><pre class="lia-code-sample language-sql"><code>GRANT SELECT ON "_SAP_DATAPRODUCT_sap_s4com_dataProduct_PurchaseOrder_v1_4a6dc5d7-7af5-4b74-8ac7-b9ed0d1e6e95"."_SAP_DATAPRODUCT_57af9989-cf07-4a75-bc36-fddb382b4020_purchaseorder.PurchaseOrder" TO VIRTUAL_TABLE_ON_PURCHASE_ORDER_OO WITH GRANT OPTION;
GRANT SELECT ON "_SAP_DATAPRODUCT_sap_s4com_dataProduct_PurchaseOrder_v1_4a6dc5d7-7af5-4b74-8ac7-b9ed0d1e6e95"."_SAP_DATAPRODUCT_57af9989-cf07-4a75-bc36-fddb382b4020_purchaseorder.PurchaseOrder" TO VIRTUAL_TABLE_ON_PURCHASE_ORDER_RT;</code></pre><P><STRONG>3. 
Grant the roles to the HDI container default users.</STRONG></P><P><STRONG>Note:</STRONG> Granting these roles globally to BAS HDI container object owners and runtime users means that <STRONG>all HDI containers, and therefore all BAS developers, will automatically gain access to the virtual tables</STRONG>. If your intention is universal access, this approach avoids granting roles individually. However, be aware that it effectively broadens the access scope system-wide, which may have governance, security, or compliance implications if more restricted access was intended.</P><pre class="lia-code-sample language-sql"><code>GRANT VIRTUAL_TABLE_ON_PURCHASE_ORDER_OO TO "_SYS_DI#BROKER_CG"."_SYS_DI_OO_DEFAULTS";
GRANT VIRTUAL_TABLE_ON_PURCHASE_ORDER_RT TO "BROKER_USER"."RT_DEFAULTS";</code></pre><P><STRONG>Step 2: Create your Project in SAP Business Application Studio</STRONG></P><P>1. 
Open <STRONG>SAP Business Application Studio</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shraddhashetty_0-1770044422749.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368163iE5763FD47480A9F2/image-size/large?v=v2&amp;px=999" role="button" title="shraddhashetty_0-1770044422749.png" alt="shraddhashetty_0-1770044422749.png" /></span></P><P>&nbsp;</P><P>2. Create a new <STRONG>SAP HANA Native Application</STRONG> or a <STRONG>CAP Project</STRONG>. In this demo we are showcasing the SAP HANA Native application.&nbsp;&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shraddhashetty_1-1770044422752.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368164i21A051DD1CF0B70F/image-size/large?v=v2&amp;px=999" role="button" title="shraddhashetty_1-1770044422752.png" alt="shraddhashetty_1-1770044422752.png" /></span></P><P>3. Ensure your project is connected to the correct <STRONG>Cloud Foundry Space</STRONG> and <STRONG>SAP HANA Cloud instance</STRONG>.</P><P>4. 
In the SAP HANA Projects view, click the <STRONG>expand</STRONG> icon to view your project and to check the binding of the <STRONG>HDI Container</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shraddhashetty_2-1770044422753.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368162i3C842D3C108697EB/image-size/medium?v=v2&amp;px=400" role="button" title="shraddhashetty_2-1770044422753.png" alt="shraddhashetty_2-1770044422753.png" /></span></P><P><STRONG>Step 3: Create the Calculation View</STRONG></P><OL><LI>Right-click the src folder and select <STRONG>New File</STRONG> -&gt; CV_PurchaseAnalysis.hdbcalculationview.</LI><LI>Choose <STRONG>CUBE</STRONG> as the Data Category and click <STRONG>Create</STRONG>.</LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="shraddhashetty_5-1770044422770.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/368166i8C124D5B9F434126/image-size/medium?v=v2&amp;px=400" role="button" title="shraddhashetty_5-1770044422770.png" alt="shraddhashetty_5-1770044422770.png" /></span></P><P>3. In the aggregation editor, find the data source.</P><P>4. Click the <STRONG>+</STRONG> sign on the Aggregation node, search for your <STRONG>Purchase Order table</STRONG>, and click <STRONG>Create Synonym</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shraddhashetty_0-1770401843677.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369706iA79DB8DC88F2E942/image-size/large?v=v2&amp;px=999" role="button" title="shraddhashetty_0-1770401843677.png" alt="shraddhashetty_0-1770401843677.png" /></span></P><P>5. 
In the dialog that appears, click <STRONG>Finish</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shraddhashetty_1-1770401944399.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369707iE178469AE6D0ACFA/image-size/large?v=v2&amp;px=999" role="button" title="shraddhashetty_1-1770401944399.png" alt="shraddhashetty_1-1770401944399.png" /></span></P><P>6. Go to the mapping tab and map the necessary columns to the output columns.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shraddhashetty_2-1770401998140.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369708i9BBD6749272FA980/image-size/large?v=v2&amp;px=999" role="button" title="shraddhashetty_2-1770401998140.png" alt="shraddhashetty_2-1770401998140.png" /></span></P><P>&nbsp;</P><P><STRONG>7. Save and deploy the file by clicking the "Deploy" icon (the rocket ship) in the SAP HANA Projects panel.</STRONG></P><P><STRONG>Step 4: Preview the Data</STRONG></P><P>To confirm everything is working:</P><OL><LI>Right-click the aggregation node of your calculation view and click <STRONG>Graphical SQL Data Preview</STRONG>.<span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="shraddhashetty_3-1770402146741.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369709i470BD655C95845EB/image-size/medium?v=v2&amp;px=400" role="button" title="shraddhashetty_3-1770402146741.png" alt="shraddhashetty_3-1770402146741.png" /></span><P>&nbsp;</P></LI><LI>Database Explorer will open in your BAS window. 
You can drag and drop the required columns and analyze your purchase order details.</LI><LI>You should now see the live data flowing from the SAP S/4HANA Data Product directly into your BAS environment.</LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="shraddhashetty_4-1770402166853.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/369710iFCF57D9DEBB45926/image-size/large?v=v2&amp;px=999" role="button" title="shraddhashetty_4-1770402166853.png" alt="shraddhashetty_4-1770402166853.png" /></span></P><P>&nbsp;</P><P>By following these steps, you’ve successfully bridged the gap between raw <STRONG>SAP HANA Cloud Data Products</STRONG> and a flexible development environment in <STRONG>SAP Business Application Studio</STRONG>.</P><P>Special thanks to my colleague&nbsp;<A class="" href="https://community.sap.com/t5/user/viewprofilepage/user-id/239612" target="_self"><SPAN class="">jan_zwickel</SPAN></A> for their support and expert review of this blog post.</P><P>Thank you for your time.</P><P>&nbsp;</P> 2026-02-06T19:35:45.324000+01:00 https://community.sap.com/t5/sap-for-healthcare-blog-posts/ai-is-transforming-sales-order-processing-in-sap/ba-p/14303939 AI Is Transforming Sales Order Processing in SAP 2026-02-11T04:51:29.555000+01:00 Sachinbobate https://community.sap.com/t5/user/viewprofilepage/user-id/1785944 <P><STRONG>Challenges in Manual Sales Order Processing</STRONG></P><UL><LI>Inconsistent customer data: duplicate addresses, incorrect zip codes, or special characters.</LI><LI>Complex Sold-to / Ship-to relationships: a single customer may have multiple shipping addresses.</LI><LI>Manual verification of POs is time-consuming and prone to human error.</LI></UL><P>Ensuring timely delivery and customer satisfaction requires significant manual effort.</P><P><STRONG>AI &amp; Machine Learning Workflow</STRONG></P><P>The AI-assisted SAP workflow automates the sales 
order creation process as follows:</P><P><STRONG>Step 1:</STRONG> Customer sends an email with a Purchase Order (PO).<BR /><STRONG>Step 2:</STRONG> AI email reader parses the email.<BR /><STRONG>Step 3:</STRONG> OCR and NLP extract the PO details.<BR /><STRONG>Step 4:</STRONG> ML identifies the correct Sold-to / Ship-to relationship, even in cases of inconsistent or duplicate data.<BR /><STRONG>Step 5:</STRONG> Data validation checks accuracy against authoritative sources (e.g., USPS address verification).<BR /><STRONG>Step 6:</STRONG> SAP automatically creates the sales order.<BR /><STRONG>Step 7:</STRONG> Customer receives order confirmation.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="984871_0-1767890162626.png" style="width: 623px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/359525i9686ABDEEC446F8B/image-dimensions/623x179/is-moderation-mode/true?v=v2" width="623" height="179" role="button" title="984871_0-1767890162626.png" alt="984871_0-1767890162626.png" /></span></P><P><EM>AI and ML pipeline automates PO processing, resolves Sold-to/Ship-to relationships, validates data, and creates sales orders in SAP.</EM></P><P><STRONG>Key Benefits</STRONG></P><UL><LI><STRONG>Increased Productivity:</STRONG> Automates repetitive manual tasks.</LI><LI><STRONG>Reduced Errors:</STRONG> Correctly identifies Sold-to / Ship-to and validates data.</LI><LI><STRONG>Time Savings:</STRONG> Faster order processing and reduced cycle time.</LI><LI><STRONG>Enhanced Customer Satisfaction:</STRONG> Timely, accurate orders and confirmations.</LI><LI><STRONG>Cost Efficiency:</STRONG> Reduces workforce dependency and operational costs.</LI></UL><P><STRONG>Machine Learning for Sold-to / Ship-to Resolution</STRONG></P><P>The ML system analyzes historical sales data and learns patterns to:</P><UL><LI>Resolve duplicate or inconsistent addresses.</LI><LI>Identify correct shipping addresses for multiple “Sold-to” 
customers.</LI><LI>Handle special characters and local language variations.</LI><LI>Ensure accurate SAP sales order creation.</LI></UL><P>This step is <STRONG>critical</STRONG> because errors in Sold-to / Ship-to mapping can delay shipments, cause billing issues, and reduce customer satisfaction.</P><P><STRONG>Integration with SAP</STRONG></P><UL><LI>AI/ML connects via <STRONG>REST APIs or SAP AI Services</STRONG>.</LI><LI>Extracted and validated PO data is sent directly to <STRONG>SAP ERP</STRONG>, eliminating manual entry.</LI><LI>SAP automatically generates sales orders, streamlines accounting, and updates inventory.</LI></UL><P><STRONG>Conclusion</STRONG></P><P>Integrating AI and ML into SAP sales order processing transforms a manual, error-prone process into an efficient, automated workflow. By resolving customer data issues, identifying correct Sold-to / Ship-to relationships, and validating PO information, organizations can:</P><UL><LI>Enhance operational efficiency</LI><LI>Reduce costs</LI><LI>Improve customer satisfaction</LI><LI>Focus human resources on higher-value tasks</LI></UL> 2026-02-11T04:51:29.555000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/innovate-with-sap-hana-cloud-an-agentic-multi-model-database-service/ba-p/14326421 Innovate with SAP HANA Cloud, an agentic multi-model database service 2026-02-11T15:45:00.027000+01:00 JoseBastidas https://community.sap.com/t5/user/viewprofilepage/user-id/3227 <P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Innovate with SAP HANA Cloud, an agentic multi-model database service.png" style="width: 799px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371567iAC74D5FE11CC327A/image-size/large?v=v2&amp;px=999" role="button" title="Innovate with SAP HANA Cloud, an agentic multi-model database service.png" alt="Innovate with SAP HANA Cloud, an agentic multi-model database service.png" /></span></P><P>Agents. &nbsp;Graphs. 
&nbsp;Vectors.</P><P>Now let’s connect the dots.</P><P>Join us for <EM>Innovate with SAP HANA Cloud, an agentic multi-model database service</EM> and see how elastic in-memory power meets Knowledge Graph and Vector Engines—inside one unified platform.</P><P>We’ll break down:</P><UL><LI>Multi-model capabilities that go beyond traditional databases</LI><LI>RAG scenarios with grounded, trustworthy answers</LI><LI>Extension patterns that enable side-by-side innovation</LI><LI>New agentic capabilities that simplify building intelligent applications</LI></UL><P>Short story? &nbsp;This isn’t theory. &nbsp;It’s architecture you can design, position, and deliver.</P><P>If you’re working with customers on AI-infused workloads or composable apps, this session will sharpen your edge. &nbsp;And yes—it’s packed with insights you can use immediately.</P><P>Spots fill quickly.&nbsp; <STRONG>Register now.&nbsp; </STRONG><A href="https://partneredge.sap.com/en/library/education/psd/2026/jan/e_oe_te_w_PSD_WEB_00012762.html" target="_blank" rel="noopener noreferrer">https://partneredge.sap.com/en/library/education/psd/2026/jan/e_oe_te_w_PSD_WEB_00012762.html</A></P><P>#SAP #SAPBTP #SAPPartners #SAPHANACloud #EnterpriseAI #SAPInnovation #DataDriven</P> 2026-02-11T15:45:00.027000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/are-you-ready-to-share-your-hana-insights-with-the-community/ba-p/14328150 Are you ready to share your HANA insights with the community? 2026-02-13T16:52:59.761000+01:00 andreamiranda https://community.sap.com/t5/user/viewprofilepage/user-id/135788 <P><STRONG>Are you ready to share your HANA insights with our community?</STRONG></P><P>HANA Tech Con will take place on <STRONG>July 16</STRONG>, and we’re looking for <STRONG>experts, users, and partners</STRONG> to delight us with engaging and inspiring sessions.</P><P>Ready to submit your proposal? 
Here is how:</P><P><STRONG>1:</STRONG> Go to the <A href="https://hanatech.community/" target="_self" rel="nofollow noopener noreferrer">HANA Tech Con website</A> and click “Call for Proposals” on the main menu:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2026-02-12 at 18.33.26.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371990i91EC9EC661CF817B/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2026-02-12 at 18.33.26.png" alt="Screenshot 2026-02-12 at 18.33.26.png" /></span></P><P>&nbsp;</P><P>You can also find this option directly on the page by clicking on the button “Become a Speaker”:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2026-02-12 at 18.35.52.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371991i0D297791D6298BF7/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2026-02-12 at 18.35.52.png" alt="Screenshot 2026-02-12 at 18.35.52.png" /></span></P><P><STRONG>2:</STRONG> Once you are inside the Call for Proposals page, you will find detailed information about the topics we are looking for.</P><P><span class="lia-unicode-emoji" title=":light_bulb:">💡</span><EM>Tip:</EM> We especially appreciate <STRONG>original, relevant, and engaging content</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2026-02-13 at 16.38.58.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372388i40FDDCB467960366/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2026-02-13 at 16.38.58.png" alt="Screenshot 2026-02-13 at 16.38.58.png" /></span></P><P>&nbsp;</P><P><STRONG>3: </STRONG>Scroll to the end of the page and click the “Submit Your Talk” button:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" 
image-alt="Screenshot 2026-02-13 at 16.40.12.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372390i5A2A7FA994C13B24/image-size/large?v=v2&amp;px=999" role="button" title="Screenshot 2026-02-13 at 16.40.12.png" alt="Screenshot 2026-02-13 at 16.40.12.png" /></span></P><P>&nbsp;</P><P><STRONG>4:</STRONG> Once you reach the submission page, you’ll need to choose a title for your Session:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture1.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372391i951C02EFAF836AA9/image-size/large?v=v2&amp;px=999" role="button" title="Picture1.png" alt="Picture1.png" /></span></P><P>&nbsp;</P><P><EM>Here are some examples from the first HANA Tech Con edition:</EM></P><P>- SAP HANA HotSpots</P><P>- Mastering HANA Performance</P><P>- Database Analysis Using the SAP HANA Cloud Knowledge Graph Engine</P><P>- How to optimize the SAP HANA Memory Sizing Predictions</P><P>&nbsp;</P><P><STRONG>5:</STRONG> Select your Session Type:</P><UL><LI>TALK (Spot Talk or Deep Dive Talk);</LI><LI>PPTless Demo;</LI><LI>Extensive Talk;</LI><LI>OTHER (if no listed format aligns with your vision, use the <STRONG>remarks field</STRONG>, <U>not the description</U>, to describe your proposed session format).</LI></UL><P>The <STRONG>REMARKS FIELD</STRONG> is located just below the Description section:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture2.png" style="width: 936px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372394iB5B8F265DBFB2FC9/image-size/large?v=v2&amp;px=999" role="button" title="Picture2.png" alt="Picture2.png" /></span></P><P>&nbsp;</P><P><STRONG>6:</STRONG> Tell us about yourself!</P><P>If you’re presenting the session 
alone, please fill in your Name, E-mail, and Company.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture3.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372395i7A7FAB2157B718F9/image-size/large?v=v2&amp;px=999" role="button" title="Picture3.png" alt="Picture3.png" /></span></P><P>If your session includes another speaker, please click “Add Speaker” and complete their information as well. You can add up to eight speakers per session:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture4.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372396i2CFEBEB66D4F5FB5/image-size/large?v=v2&amp;px=999" role="button" title="Picture4.png" alt="Picture4.png" /></span></P><P>&nbsp;</P><P><STRONG>7:</STRONG> To submit your proposal, check the <STRONG>Consent Checkbox</STRONG> and click <STRONG><U>Submit Proposal</U></STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture5.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372397i01BC297A022D2939/image-size/large?v=v2&amp;px=999" role="button" title="Picture5.png" alt="Picture5.png" /></span></P><P>&nbsp;</P><P><STRONG><U>MAKE SURE YOU HAVE SUBMITTED YOUR PROPOSAL:</U></STRONG></P><P><STRONG>8:</STRONG> After submitting, you’ll see a confirmation page:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Picture8.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372398i40A38ED33E58A316/image-size/large?v=v2&amp;px=999" role="button" title="Picture8.png" alt="Picture8.png" /></span></P><P>&nbsp;</P><P><EM>IMPORTANT INFORMATION:</EM></P><UL><LI><STRONG>Do NOT FORGET</STRONG> to verify your email; otherwise, your proposal won’t be processed. 
<STRONG><U>You have 7 days after submitting your proposal to verify it</U></STRONG>.</LI><LI>Once verified, you will receive a link to <STRONG>view or edit your proposal</STRONG>.</LI><LI>You MAY submit more than one proposal.</LI><LI>The call for speakers closes at 11:59 PM on March 13, 2026.</LI><LI>Results will be communicated by email <STRONG>within 2–3 weeks after the deadline</STRONG>.</LI></UL><P>If you have any questions, we’re here for you!</P><P>Contact us via&nbsp;<A href="mailto:info@hanatech.community?subject=%5BHANA%20Tech%20Con%5D%20Speaker%20Question&amp;body=Dear%20HANA%20Tech%20Con%20team%2C%0A%0AI%20have%20a%20question%20about%20my%20session%20proposal.%0A%0ACheers!" target="_blank" rel="noopener nofollow noreferrer">info@hanatech.community</A>.</P><P>And of course: <STRONG>Stay connected</STRONG>! We’re looking forward to seeing you on <STRONG>July 16</STRONG>.</P><P>Cheers,</P><P>HANA Tech Con Team</P><P>&nbsp;</P> 2026-02-13T16:52:59.761000+01:00 https://community.sap.com/t5/artificial-intelligence-blogs-posts/leveraging-past-technical-dialogues-to-enhance-customer-support-with-sap/ba-p/14327209 Leveraging Past Technical Dialogues to Enhance Customer Support with SAP BTP 2026-02-16T10:57:12.850000+01:00 Adeleruyelle https://community.sap.com/t5/user/viewprofilepage/user-id/1475324 <P><SPAN>When a client opens a ticket for SAP software support with AMS services, time to resolve remains a key KPI.</SPAN> <SPAN>S.P.A.R.K. (<STRONG>S</STRONG>AP <STRONG>P</STRONG>rocess <STRONG>A</STRONG>ugmented <STRONG>R</STRONG>esponse <STRONG>K</STRONG>nowledge) is a Retrieval‑Augmented Generation (RAG) platform built </SPAN><STRONG><SPAN>entirely on SAP technology</SPAN></STRONG><SPAN>: SAP HANA Cloud, BTP AI services, and the Cloud SDK. 
It turns every resolved ticket into a searchable, conversational knowledge base, delivering the exact solution a consultant needs </SPAN><STRONG><SPAN>exactly where they work</SPAN></STRONG><SPAN>.</SPAN></P><P>In this blog we’ll show how S.P.A.R.K. overcomes the challenges of technical dialogue data, why a native‑SAP stack matters, and the measurable impact on ticket‑resolution speed and consultant productivity. We’ll also highlight the <STRONG>scientific paper</STRONG> that documents the architecture and results, our first peer‑reviewed publication, made possible by SAP and the Talan Research Center.</P><P>This blog post describes the technical implementation and research outcomes from the SAP AMS delivery center at Talan.</P><H1 id="toc-hId-1660588391">The Challenge: Unlocking Conversational Data</H1><P><SPAN class=""><SPAN class="">Unlike standard document retrieval, support tickets are messy. They are conversational, often asynchronous, and the critical information (the root cause and solution) is diluted across many messages. 
Furthermore, AMS support tickets present specific hurdles:</SPAN></SPAN></P><OL><LI><STRONG>Data Privacy</STRONG>: Tickets contain sensitive information such as client names and system IDs, which must be protected.</LI><LI><STRONG>Technical Jargon</STRONG>: Standard models struggle with SAP-specific vocabulary such as transactions and error codes.</LI><LI><STRONG>Asymmetry</STRONG>: A new problem description must be matched against past resolved tickets that contain both the problem and the solution.</LI></OL><P>&nbsp;</P><H1 id="toc-hId-1464074886">The Solution: A Secure, High-Performance RAG Architecture</H1><P>To address these challenges, we designed a robust pipeline on SAP BTP that transforms raw noise into actionable insights. As illustrated in the figure below, the SPARK architecture follows a strict sequence to ensure data privacy and relevance. Whether it is processing historical archives (Database) or a live request (New Ticket), the data passes through three main steps.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="archi_scientifique.png" style="width: 837px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371923i91AA132E286C8E43/image-dimensions/837x341?v=v2" width="837" height="341" role="button" title="archi_scientifique.png" alt="archi_scientifique.png" /></span></P><H2 id="toc-hId-1396644100"><STRONG>1. Anonymization (Privacy Layer)</STRONG></H2><P>Before any data is processed by external AI models, it must be sanitized to protect sensitive information.</P><P><SPAN>The system uses metadata (knowing beforehand who the client, consultant, and company are) to apply </SPAN><STRONG><SPAN>Regular Expressions (Regex)</SPAN></STRONG><SPAN>. 
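<P>As a rough illustration of this metadata-driven masking, a sketch in Python (the entity names and placeholder tokens are invented for the example; the production rules are more elaborate):</P>

```python
import re

def anonymize(text: str, known_entities: dict[str, str]) -> str:
    # known_entities maps sensitive strings taken from ticket metadata
    # (client, consultant, and company names) to neutral placeholders.
    for value, placeholder in known_entities.items():
        text = re.sub(re.escape(value), placeholder, text, flags=re.IGNORECASE)
    return text

masked = anonymize(
    "Hello, this is Jane Doe from Acme Corp about system P01.",
    {"Jane Doe": "<CONSULTANT>", "Acme Corp": "<CLIENT_COMPANY>"},
)
# masked == "Hello, this is <CONSULTANT> from <CLIENT_COMPANY> about system P01."
```
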
This systematically strips out names, first names, company names, and other specific identifiers.</SPAN></P><P>This also satisfies legal confidentiality requirements and prevents bias in the similarity search (e.g., it keeps the model from linking tickets merely because they share a company name rather than a technical issue).</P><H2 id="toc-hId-1200130595">2. Summarization (Resume Layer)</H2><P>Raw conversation logs are often noisy, containing greetings, scheduling details, or digressions that dilute the technical content.</P><P><SPAN>The data is sent to a </SPAN><STRONG><SPAN>Large Language Model (LLM)</SPAN></STRONG><SPAN> via the </SPAN><STRONG><SPAN>SAP GenAI Hub</SPAN></STRONG><SPAN>. 
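<P>The three-section structure described just below can be enforced with a simple prompt template. A sketch in Python (the wording of the prompt is illustrative; the actual S.P.A.R.K. prompt is not published):</P>

```python
def build_summary_prompt(ticket_conversation: str) -> str:
    # Instruct the LLM to emit the three sections used downstream
    # by the retrieval pipeline: PROBLEM, LEADS, SOLUTION.
    return (
        "Summarize the following support-ticket conversation.\n"
        "Answer with exactly three sections:\n"
        "PROBLEM: a description of the initial issue.\n"
        "LEADS: the diagnostic steps and hypotheses considered.\n"
        "SOLUTION: the final resolution and implementation details.\n\n"
        f"Conversation:\n{ticket_conversation}"
    )

prompt = build_summary_prompt("User reports error DBSQL_SQL_ERROR in transaction VA01 ...")
```
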
The system uses a specific prompt to force the LLM to structure the output into three distinct sections:</SPAN></P><OL><LI><STRONG>PROBLEM</STRONG>: A description of the initial issue.</LI><LI><STRONG>LEADS</STRONG>: The diagnostic steps and hypotheses considered.</LI><LI><STRONG>SOLUTION</STRONG>: The final resolution and implementation details.</LI></OL><P>The LLM also generates synthetic data to augment and refine the initial dataset. By doing so, it enhances consistency, semantic relevance, and overall coherence, ensuring that the data better aligns with the target application.</P><H2 id="toc-hId-1003617090">3. Encoding (Vector Layer)</H2><P>This step converts the text summary into a format that computers can compare mathematically.</P><P>The text is passed through an <STRONG>Embedding Model</STRONG> (such as OpenAI's text-embedding-ada-002, or others we tested such as <STRONG>SAP NEB</STRONG>). This model converts the text into a high-dimensional vector (a long list of numbers).</P><P>These vectors are stored in the <STRONG>SAP HANA Vector Engine</STRONG>. When a new ticket arrives, its vector is calculated and compared against the stored vectors using <STRONG>Cosine Similarity</STRONG> to instantly find the "N closest tickets" based on semantic meaning rather than just keyword matching.</P><P>&nbsp;</P><H1 id="toc-hId-678020866">All‑In‑One SAP‑Native Stack: The Technical Engine Behind the Solution</H1><P><SPAN class=""><SPAN class=""><SPAN>The solution is built entirely on a SAP‑native stack: all ticket texts are stored in </SPAN><STRONG><SPAN>SAP HANA Cloud</SPAN></STRONG><SPAN>, leveraging its 
in‑memory columnar engine for ultra‑fast reads, writes, and auditability.</SPAN></SPAN></SPAN></P><P><SPAN>Embeddings are generated and indexed by the </SPAN><STRONG><SPAN>HANA Vector Engine</SPAN></STRONG><SPAN>, which performs approximate‑nearest‑neighbor similarity searches directly inside the database, eliminating the need for an external vector store and keeping latency in the single‑digit‑millisecond range.</SPAN></P><P><SPAN>For the generative layer, </SPAN><STRONG><SPAN>SAP AI Core</SPAN></STRONG><SPAN> on the Business Technology Platform orchestrates any LLM, whether open‑source models or SAP‑tuned variants, through a unified runtime that handles model loading, scaling, secure inference, and GPU/CPU provisioning.</SPAN></P><P><SPAN>The entire pipeline, from vector retrieval to prompt engineering and response generation, is exposed via the </SPAN><STRONG><SPAN>SAP Cloud SDK</SPAN></STRONG><SPAN>, ensuring a single, secure endpoint for consultants while keeping all data within the trusted HANA vault and benefiting from SAP’s built‑in compliance and monitoring capabilities.<BR /></SPAN></P><P><SPAN><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="archi_SAP (1).png" style="width: 834px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371927i32AA2CD758A9ED78/image-dimensions/834x410?v=v2" width="834" height="410" role="button" title="archi_SAP (1).png" alt="archi_SAP (1).png" /></span></SPAN></P><H1 id="toc-hId-284993856"><SPAN>Results: Finding the Perfect Model Combination</SPAN></H1><P><SPAN><SPAN class=""><SPAN class=""><SPAN class="">We evaluated dozens of model variants </SPAN><SPAN class="">on a 100,000‑item
dataset </SPAN><SPAN class="">by relying on real‑world feedback from SAP consultants. Each answer was manually annotated as relevant, partially relevant, or not relevant, and those human judgments were used to rank and refine the models. This human‑centered evaluation ensured that the final solution meets the practical needs of consultants.</SPAN></SPAN><SPAN class=""><BR /><BR />From a research perspective, we quantify this relevance with the classic Normalized Discounted Cumulative Gain (nDCG) metric, a staple in information‑retrieval evaluation. Using nDCG, we compared several system configurations: manual prompting versus the automated “PromptPerfect” approach, and the impact of an additional </SPAN><SPAN class=""><SPAN class="">Re‑Ranking (RR) step.
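</SPAN></SPAN></SPAN></SPAN></P><P><SPAN>To make the metric concrete, here is a minimal nDCG sketch in Python. The graded gains (relevant = 2, partially relevant = 1, not relevant = 0) mirror the annotation scheme above; the exact gain mapping is our illustrative assumption, not a detail taken from the evaluation:</SPAN></P>

```python
import math

def ndcg(relevances, k=None):
    """Normalized Discounted Cumulative Gain for one ranked answer list.

    `relevances` holds graded judgments in ranked order, e.g.
    2 = relevant, 1 = partially relevant, 0 = not relevant.
    """
    def dcg(rels):
        # Graded gain, discounted by log2(rank + 1) using 1-based ranks.
        return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels))

    top = relevances[:k] if k else relevances
    ideal = sorted(relevances, reverse=True)[:k] if k else sorted(relevances, reverse=True)
    best = dcg(ideal)
    return dcg(top) / best if best > 0 else 0.0

print(ndcg([2, 1, 0]))  # perfectly ordered ranking -> 1.0
print(round(ndcg([0, 1, 2]), 3))  # same answers in the worst order -> lower score
```

<P><SPAN>A configuration that pushes relevant answers toward the top scores closer to 1.0, which is exactly what the configuration comparison measures.</SPAN></P><P><SPAN><SPAN class=""><SPAN class=""><SPAN class="">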
The metric lets us clearly see how each variation lifts the relevance of the top‑ranked answers, guiding data‑driven decisions on which pipeline components to adopt.</SPAN></SPAN></SPAN></SPAN></P><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="results.png" style="width: 832px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/371929i5D73D40A2549D55A/image-dimensions/832x410?v=v2" width="832" height="410" role="button" title="results.png" alt="results.png" /></span></P><P><SPAN>According to these tests, the most effective configuration achieved a relevance score of </SPAN><STRONG><SPAN>0.86</SPAN></STRONG><SPAN>. Interestingly, the best results (highlighted above) came from a "hybrid" approach: combining </SPAN><STRONG><SPAN>Gemini 1.5 Pro</SPAN></STRONG><SPAN> (accessed via the GenAI Hub) for generating the summaries, paired with </SPAN><STRONG><SPAN>OpenAI’s Ada 002</SPAN></STRONG><SPAN> model for generating the text embeddings.</SPAN></P><P><SPAN>This proves the value of the SAP GenAI Hub: it allows developers to easily swap and combine models from different providers (like Google, Anthropic, or OpenAI) to optimize for specific tasks without changing the underlying architecture.</SPAN></P><P>&nbsp;</P><H1 id="toc-hId-88480351"><SPAN>Acknowledgments</SPAN></H1><P><SPAN><SPAN class=""><SPAN class=""><SPAN class=""><SPAN class="">We are grateful to the research community for the peer‑reviewed paper that introduced this work.
You can read the full publication (in French) here:</SPAN></SPAN></SPAN></SPAN></P><P><SPAN><A href="https://aclanthology.org/2025.jeptalnrecital-industrielle.6.pdf" target="_self" rel="nofollow noopener noreferrer">https://aclanthology.org/2025.jeptalnrecital-industrielle.6.pdf</A></SPAN></P><P><SPAN>A sincere thank‑you to SAP for providing the end‑to‑end platform, SAP BTP (SAP HANA Cloud, Vector Engine, AI Core, and Cloud SDK), that made the solution possible. Your technology and support have been essential to turning academic insights into real‑world value for SAP consultants.</SPAN></P> 2026-02-16T10:57:12.850000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/decision-logic-modernization-in-sap-integrations-using-sap-rpt-1-for/ba-p/14329617 Decision Logic Modernization in SAP Integrations: Using SAP-RPT-1 for Intelligent Routing Decisions 2026-02-16T19:54:29.244000+01:00 arunmaarirajha_kv https://community.sap.com/t5/user/viewprofilepage/user-id/2033343 <P>The <STRONG>Integration Modernization</STRONG> concept is often discussed in two dimensions:<BR />1)&nbsp;<STRONG>Platform modernization</STRONG> – moving from older stacks like SAP NetWeaver PI/PO, SAP Neo, or legacy 3rd-party platforms to SAP Integration Suite.<BR />2)&nbsp;<STRONG>Scenario modernization</STRONG> – shifting from legacy protocols to modern integration patterns such as APIs &amp; event-driven architectures; upgrading ABAP / Java mappings to graphical mappings / Groovy scripts;
or moving from basic authentication to OAuth 2.0 or client-certificate-based authentication, and so on.</P><P>But there is another modernization opportunity at the scenario level: <STRONG>decision logic modernization</STRONG>.</P><P>In this blog, we explore how routing decision logic, as an orchestration step, can be modernized using SAP-RPT-1, SAP’s latest foundation model hosted on BTP AI Core (Generative AI Hub), applied to an outbound delivery routing scenario.</P><P><STRONG>The Traditional Approach: Static Rules, Lookups, and Growing Complexity:<BR /></STRONG>In SAP PI/PO and Integration Suite, routing decisions are typically implemented using deterministic, rule-based logic.&nbsp;These decisions often control how a scenario behaves – including which receiver system to call, or which transformation branch to follow. The common approaches include:</P><UL><LI>Static routing&nbsp;via Routers based on payload elements or fields</LI><LI>Dynamic configurations, such as externalized parameters passed at runtime</LI><LI>Lookups to external systems&nbsp;(e.g., via JDBC, SOAP, or RFC lookups) to evaluate conditions at runtime, achieved in Integration Suite via patterns like&nbsp;<EM>Request-Reply</EM>&nbsp;or&nbsp;<EM>Content Enricher</EM></LI></UL><P>These mechanisms work, but they come with growing challenges:</P><UL><LI>Rules get&nbsp;combinatorially complex&nbsp;with more conditions and evolving business scenarios</LI><LI>Lookup tables become&nbsp;burdensome to maintain</LI><LI>Decisions become&nbsp;rigid and hard to evolve&nbsp;over time</LI></UL><P>As a result, rule-based decisioning becomes harder to maintain as business requirements evolve. Agility suffers. Adaptation becomes expensive.</P><P>What if, instead of relying on static rule-based logic, iflows could make decisions based on historical business patterns, applying predictive machine learning without the overhead of training and managing custom models?
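</P><P>To see why such rules grow brittle, here is a minimal sketch of the kind of deterministic routing logic an iflow script step typically embeds, written in Python for illustration. The field names DeliveryPriority, ItemGrossWeight, and MaterialGroup appear in the scenario's delivery attributes; ShipToRegion and all partner codes are invented for this example:</P>

```python
def route_delivery(delivery):
    """Deterministic 3PL routing: every new business condition adds a branch."""
    if delivery["DeliveryPriority"] <= 2:            # high-priority orders
        if delivery["ItemGrossWeight"] > 500:
            return "3PL_HEAVY_EXPRESS"
        return "3PL_EXPRESS"
    if delivery["MaterialGroup"] == "HAZMAT":        # special-handling goods
        return "3PL_HAZMAT"
    if delivery["ShipToRegion"] in {"EU", "UK"}:     # regional contracts
        return "3PL_EU_STANDARD"
    return "3PL_DEFAULT"

print(route_delivery({"DeliveryPriority": 1, "ItemGrossWeight": 750,
                      "MaterialGroup": "ELEC", "ShipToRegion": "EU"}))
# -> 3PL_HEAVY_EXPRESS
```

<P>Each new product line, region, priority class, or partner multiplies these branches; that combinatorial growth is exactly what predictive decision logic avoids.</P><P>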
With the in-context learning capabilities of SAP’s latest foundation model, SAP-RPT-1, such predictions can be performed on the fly at runtime.</P><P><STRONG>Enter SAP-RPT-1: A Foundation Model for Predictive Tasks:<BR /></STRONG>SAP-RPT-1 is a pre-trained foundation model available via BTP AI Core (Generative AI Hub). Unlike traditional ML services that require a dedicated data science setup, SAP-RPT-1 offers:</P><UL><LI>A pre-trained model, ready for “plug and play” in your scenario</LI><LI>No ML infrastructure setup</LI><LI>No data science expertise required</LI><LI>No-code consumption via API</LI><LI>Faster time-to-market</LI><LI>Democratization of AI capabilities across IT functions</LI></UL><P>SAP-RPT-1 is pre-trained specifically for predictive tasks such as classification and regression over ERP tabular data, <SPAN>unlike many general-purpose GenAI models that are primarily trained on large corpora of natural language text. </SPAN>Instead of building and managing a full ML lifecycle, integration developers can now embed predictive intelligence directly in iflows, without any data-science heavy lifting.</P><P>If you are new to SAP-RPT-1, I recommend that you read <SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/a-new-paradigm-for-enterprise-ai-in-context-learning-for-relational-data/ba-p/14260221" target="_blank">this introductory blog</A></SPAN>.<BR />Also, explore the <SPAN><A href="https://rpt.cloud.sap/" target="_blank" rel="noopener nofollow noreferrer">SAP-RPT-1 playground</A></SPAN>.</P><P><STRONG>Business Scenario: Intelligent Outbound Delivery Routing with SAP-RPT-1:<BR /></STRONG>SmartSense Technologies (SST), a global provider of smart security devices, faces challenges in routing outbound deliveries from its manufacturing plants to distribution centers, retail stores, and end customers.
Their current routing logic, embedded in integration flows, relies on static rules and lookup tables employing a combination of Material, Quantity, Delivery priority, Material group, Ship-to location, and so on. This leads to frequent shipping delays, cost overruns, and SLA violations whenever logistics conditions change and a sub-optimal 3PL partner is chosen.</P><P>To overcome these limitations, SST decides to leverage SAP Integration Suite together with the SAP-RPT-1 foundation model. By embedding machine learning–driven decision-making into its integration flow, SST aims to dynamically select the optimal logistics partner for each delivery based on historical performance, cost, and delivery times. While SAP-<SPAN>RPT-1 is capable of solving predictive tasks such as classification and regression, we will use it here for a classification task, i.e. to classify a given delivery (defined by delivery header and item attributes) to the most suitable 3PL logistics provider.</SPAN></P><P>The following diagram illustrates the target architecture implemented by SST for intelligent outbound delivery routing using SAP Integration Suite and the SAP-RPT-1 model, hosted on BTP’s Generative AI Hub:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_11-1771266938443.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373226i4CCAFD40AD592E0E/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_11-1771266938443.png" alt="arunmaarirajha_kv_11-1771266938443.png" /></span></P><P><STRONG><BR />Step 1: Outbound Delivery Creation<BR /></STRONG>An outbound delivery is created in SAP S/4HANA, which triggers an outbound notification event.
The event is consumed by the iflow, which then calls the S/4HANA OData API to fetch the following delivery-related header and item attributes.</P><P><U>Delivery header fields:</U> DeliveryDate, SalesOrganization, ShipToParty, SoldToParty, DeliveryDocumentType, CreationDate, ShippingPoint, DeliveryPriority, IncotermsClassification, TransactionCurrency</P><P><U>Delivery item fields:</U> Material, DeliveryDocumentItemText, MaterialGroup, ActualDeliveryQuantity, DeliveryQuantityUnit, Plant, StorageLocation, ItemGrossWeight, ItemWeightUnit, ControllingArea, DistributionChannel, GoodsMovementType, ProfitCenter</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_12-1771266938447.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373225i9B0E4B75AE051BE8/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_12-1771266938447.png" alt="arunmaarirajha_kv_12-1771266938447.png" /></span></P><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_13-1771266938452.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373224iF4BEC5389147947E/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_13-1771266938452.png" alt="arunmaarirajha_kv_13-1771266938452.png" /></span></P><P><STRONG>Step 2: Context Selection – Random Historical Records<BR /></STRONG>We store historical labeled data, i.e. delivery headers, items, and the corresponding optimal 3PL partner, in SAP HANA Cloud. The iflow is configured to fetch a random set of this historical data from HANA Cloud.
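</P><P>Conceptually, this context selection is just a uniform random sample over the labeled history (the iflow achieves the same effect in SQL with ORDER BY RAND()). A small Python sketch with a deliberately simplified record shape:</P>

```python
import random

def select_context(history, n, seed=None):
    """Pick n random labeled records to serve as in-context examples.

    Random rather than sequential picks keep the examples statistically
    diverse and avoid bias from records created in the same period.
    """
    rng = random.Random(seed)
    # sample() draws without replacement; cap n at the history size.
    return rng.sample(history, k=min(n, len(history)))

# Simplified labeled history: delivery attributes plus the known 3PL partner.
history = [{"DeliveryPriority": i % 5, "MaterialGroup": "MG%d" % (i % 7),
            "PartnerNumber": "3PL_%d" % (i % 3)} for i in range(10000)]

context = select_context(history, n=900, seed=42)
print(len(context))  # -> 900
```

<P>The sampled rows are then serialized into the model request, as described in the next step.</P><P>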
As recommended in <SPAN><A href="https://community.sap.com/t5/artificial-intelligence-blogs-posts/sap-rpt-1-a-step-by-step-guide-on-getting-started/ba-p/14290171" target="_blank">this blog</A></SPAN>, for most enterprise use cases with roughly a hundred thousand to a million lines of labelled data, you can choose the context size by iterating until you find the right number of historical records to pick, depending on how diverse your scenario data is. Picking random records provides statistically diverse examples, improves prediction quality, and avoids bias from sequential records.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_14-1771266938461.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373231i5B72696901521D40/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_14-1771266938461.png" alt="arunmaarirajha_kv_14-1771266938461.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_15-1771266938475.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373232iC11B3E383903BD1E/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_15-1771266938475.png" alt="arunmaarirajha_kv_15-1771266938475.png" /></span></P><P>The iflow executes the following query on the HANA Cloud DB:</P><pre class="lia-code-sample language-sql"><code>SELECT TOP ${property.NumberofRecords} "DeliveryDate", "DeliveryDocumentNumber", "CreationDate", "DeliveryPriority", "IncotermsClassification", "Material", "DeliveryDocumentItemText", "MaterialGroup", "ActualDeliveryQuantity", "DeliveryQuantityUnit", "ItemGrossWeight", "ItemWeightUnit", "DistributionChannel", "PartnerNumber" FROM "RPT1"."historic_3pl_selections" ORDER BY RAND()</code></pre><P><STRONG><BR />Step 3: Constructing the SAP-RPT-1 Request<BR /></STRONG>We now
combine:</P><UL><LI>Historical random records (known 3PL partner)</LI><LI>The current outbound delivery (3PL partner to be predicted)</LI></UL><P>The current transaction includes a placeholder: [PREDICT]</P><P>The request payload contains:</P><UL><LI>900 historical labeled rows</LI><LI>1 new outbound delivery row marked for prediction</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_16-1771266938479.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373230iDB50F5FD9567EFC8/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_16-1771266938479.png" alt="arunmaarirajha_kv_16-1771266938479.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_17-1771266938482.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373235iEE67725F5F92224C/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_17-1771266938482.png" alt="arunmaarirajha_kv_17-1771266938482.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_18-1771266938487.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373236i224D7ABAF3DC24FC/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_18-1771266938487.png" alt="arunmaarirajha_kv_18-1771266938487.png" /></span></P><P>For detailed info on payload construction, refer to the <SPAN><A href="https://help.sap.com/docs/sap-ai-core/generative-ai/example-payloads-for-inferencing-sap-rpt-1#request-payloads" target="_blank" rel="noopener noreferrer">official documentation page</A></SPAN> and the <SPAN><A href="https://github.com/SAP-samples/sap-rpt-samples" target="_blank" rel="noopener nofollow noreferrer">GitHub repository</A></SPAN> (sample Postman/Bruno collections available for 
experimentation).</P><P>Further, while creating the inferencing request to SAP-RPT-1, we trim the original 24 columns down to 14 delivery attributes by removing fields such as Sales Organization, Controlling Area, and Profit Center that do not influence partner selection. This is one of the recommended best practices for achieving the best possible predictive output. Read more recommended <SPAN><A href="https://help.sap.com/docs/sap-ai-core/generative-ai/example-payloads-for-inferencing-sap-rpt-1#best-practices" target="_blank" rel="noopener noreferrer">best practices here</A></SPAN>.<BR /><STRONG>&nbsp;</STRONG></P><P><STRONG>Step 4: Deploying the SAP-RPT-1 model in Generative AI Hub</STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_19-1771266938496.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373237i2ABECD87E5ACEC4B/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_19-1771266938496.png" alt="arunmaarirajha_kv_19-1771266938496.png" /></span></P><P><STRONG><BR />Step 5: SAP-RPT-1 inference response<BR /></STRONG>The response contains the number of predicted records and the predicted field (here, the optimal 3PL partner) along with a confidence score.</P><P>Response from our example scenario:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_20-1771266938500.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373238i6A7834DDBC4CB100/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_20-1771266938500.png" alt="arunmaarirajha_kv_20-1771266938500.png" /></span></P><P>You can also send multiple unknown records in one batch. To read more about the maximum number of
records and columns supported by SAP-RPT-1, check out <SPAN><A href="https://help.sap.com/docs/sap-ai-core/generative-ai/sap-rpt-1#sap-rpt-models" target="_blank" rel="noopener noreferrer">this help page</A></SPAN>.</P><P><STRONG>Step 6: Intelligent Routing in the Iflow<BR /></STRONG>The Iflow now:</P><OL><LI>Reads the predicted PartnerNumber</LI><LI>Evaluates confidence threshold (optional safeguard)</LI><LI>Routes outbound delivery to the optimal 3PL partner</LI><LI>Optionally, logs decision + confidence for traceability in HANA Cloud.</LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="arunmaarirajha_kv_21-1771266938507.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373239iADC538832CCAAFFB/image-size/large?v=v2&amp;px=999" role="button" title="arunmaarirajha_kv_21-1771266938507.png" alt="arunmaarirajha_kv_21-1771266938507.png" /></span></P><P><STRONG><BR />When AI in Iflows Brings Real Business Value</STRONG></P><UL><LI><STRONG>When routing decisions directly impact business outcomes</STRONG><BR />If sub-optimal routing can lead to SLA breaches, shipment delays, higher costs, or customer dissatisfaction, intelligent decision logic creates measurable advantage.</LI><LI><STRONG>When rule-based logic becomes complex and brittle</STRONG><BR />As business variables multiply (products, regions, priorities, partners), static rules become hard to maintain and scale. 
ML handles multi-dimensional patterns more effectively.</LI><LI><STRONG>When historical ERP data contains predictive patterns</STRONG><BR />If past delivery and fulfillment data reflects consistent business behavior, ML can learn from it and improve future routing decisions beyond deterministic conditions.</LI><LI><STRONG>When business context evolves over time</STRONG><BR />With changing product mixes, customer bases, or distribution models, retrainable ML models adapt dynamically — avoiding constant manual rule maintenance.</LI></UL><P><SPAN>Not every integration scenario requires ML capabilities. But where routing decisions carry operational or financial impact, replacing rigid rule sets with adaptive decision logic can significantly improve resilience and maintainability. Decision logic modernization is therefore not about adding AI everywhere — it is about applying it where it meaningfully improves orchestration.</SPAN></P><P><EM><SPAN>Note: This blog revisits </SPAN></EM><SPAN><A href="https://community.sap.com/t5/technology-blog-posts-by-sap/intelligent-orchestration-in-iflows-business-aware-routing-with-sap-btp-s/ba-p/14162148" target="_blank"><EM>our earlier implementation</EM></A><EM> in light of the planned deprecation of SAP BTP Data Attribute Recommendation service, demonstrating an updated approach using SAP-RPT-1 model.</EM></SPAN></P> 2026-02-16T19:54:29.244000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/action-required-update-your-certificate-trust-stores-for-enhanced-sap-hana/ba-p/14332703 Action Required: Update your certificate trust stores for enhanced SAP HANA Cloud security 2026-02-23T10:10:00.017000+01:00 thomashammer https://community.sap.com/t5/user/viewprofilepage/user-id/122781 <P>To enhance the security of our cloud services, SAP is updating the root certificate authority (CA) for various services, including the SAP BTP, Cloud Foundry environment, SAP BTP, ABAP environment, and SAP HANA Cloud. 
We are transitioning our server certificates signed by the DigiCert Global Root G2 and G1/G5 cross-signed CAs to the newer and more secure "<A href="https://cacerts.digicert.com/DigiCertTLSRSA4096RootG5.crt.pem?_gl=1*dlpznz*_gcl_au*MTI3Njc1MzUzMi4xNzI4MzAzMzA1" target="_blank" rel="noopener nofollow noreferrer">DigiCert TLS RSA4096 Root G5</A>" CA, as DigiCert will stop signing new server certificates with the old root certificate.</P><P>This change is necessary to align with the latest industry security standards and recommendations, which call for stronger cryptographic keys. The new G5 certificate chain uses RSA-4096, providing a higher level of security for your connections.</P><P><STRONG>What Does This Mean for You?</STRONG></P><P>If you have client applications or services that connect to SAP HANA Cloud, you may need to take action to ensure a seamless transition and avoid connection failures.</P><P><STRONG>Call to Action: Review the official notes and update your trust stores as needed</STRONG></P><P>To prevent any disruption to your services, it is crucial that you update the trust stores for your client applications and services as needed. Please review the official SAP Notes, which provide more guidance:</P><UL><LI><STRONG><A href="https://me.sap.com/notes/3397584" target="_blank" rel="noopener noreferrer">SAP Note 3397584</A>:</STRONG>&nbsp;HANA Cloud Connections will switch from "DigiCert Global Root CA" to "DigiCert TLS RSA4096 Root G5"</LI><LI><STRONG><A href="https://me.sap.com/notes/3399573" target="_blank" rel="noopener noreferrer">SAP Note 3399573</A>:</STRONG>&nbsp;HANA Cloud Switch of DigiCert Root Certificate</LI></UL><P><STRONG>Rollout Timeline</STRONG></P><P>The switch to the G5 root CA for existing SAP HANA Cloud regions is planned for the second quarter of this year. The timeline for your landscape will be communicated via established communication channels at the beginning of March.
This blog post will also be updated with further information once available.</P><P><STRONG>Further Information and References</STRONG></P><P>As the root certificate change impacts not only SAP HANA Cloud but also other SAP BTP services, we highly recommend reviewing the references and notes below to ensure the highest degree of business continuity:</P><UL><LI><STRONG>SAP Note 3566727:&nbsp;</STRONG><A href="https://me.sap.com/notes/3566727" target="_blank" rel="noopener noreferrer">Root Certificate Replacement in the SAP BTP, Cloud Foundry Environment</A></LI><LI><STRONG>Blog Post:</STRONG>&nbsp;<A href="https://community.sap.com/t5/technology-blog-posts-by-sap/sap-btp-cloud-foundry-switching-to-higher-security-level-root-certificate/ba-p/14061965" target="_blank">SAP BTP Cloud Foundry: Switching to higher security level Root Certificate Authority</A></LI><LI><STRONG>Blog Post:</STRONG>&nbsp;<A href="https://blogs.sap.com/2024/11/26/sap-btp-abap-environment-new-root-certificate-authority/" target="_blank" rel="noopener noreferrer">SAP BTP ABAP Environment – New Root Certificate Authority</A></LI><LI><STRONG>GitHub:</STRONG>&nbsp;<A href="https://github.com/sap-software/btp-trust-store" target="_blank" rel="noopener nofollow noreferrer">SAP BTP Trust Store</A></LI></UL><P>To ensure your applications continue to operate smoothly and securely, we suggest you take the necessary steps to update your trust stores as soon as possible.</P><P>Thanks,</P> 2026-02-23T10:10:00.017000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/meet-your-ops-agents-powered-by-sap-automation-pilot-and-joule/ba-p/14333881 Meet Your Ops Agents - Powered by SAP Automation Pilot and Joule 2026-02-23T11:10:00.063000+01:00 BiserSimeonov https://community.sap.com/t5/user/viewprofilepage/user-id/3334 <P>Building on our latest innovation around <A
href="https://community.sap.com/t5/technology-blog-posts-by-sap/sap-automation-pilot-empowering-it-operations-with-ai-insights/ba-p/14328086" target="_blank">SAP Automation Pilot and AI-powered IT Operations</A>, I am excited to share that we are already seeing real <STRONG>productive use cases for Day-2 Operations Agents</STRONG>, which are powered by SAP Automation Pilot content and consumed directly in Joule in a truly agentic manner.</P><H2 id="toc-hId-1790481473"><span class="lia-unicode-emoji" title=":movie_camera:">🎥</span>&nbsp;Meet Your Ops Agent</H2><P>See an example of an <STRONG>Ops Agent powered by SAP Automation Pilot</STRONG> and discover how easy it is to set up an MCP server, expose automations, and <STRONG>trigger them in an agentic way through Joule</STRONG>.</P><P>This approach unlocks a <STRONG>completely new way to automate operations</STRONG>, where AI agents can reason, execute, and orchestrate operational tasks seamlessly, following customers’ requests and needs.</P><P>The demonstration walks through a scenario featuring an S/4HANA Health Monitoring Ops Agent in Joule, designed to:</P><UL><LI>Provide system health &amp; diagnostics insights;</LI><LI>Perform automated analysis &amp; deliver actionable recommendations;</LI><LI>Proactively send alerts when potential issues are detected;</LI></UL><H2 id="toc-hId-1593967968"><STRONG><span class="lia-unicode-emoji" title=":open_book:">📖</span>&nbsp;Just the beginning</STRONG></H2><P>There is an <STRONG>unlimited range of Ops Agent scenarios</STRONG> enabled through SAP Automation Pilot, including for example:</P><UL><LI>SAP BTP Operations (e.g.
Cloud Foundry applications,&nbsp; SAP HANA Cloud, SAP cTMS, etc.);</LI><LI>SAP Build and Day 2 Operations for your apps;</LI><LI>SAP Integration Suite checks and actions via automated API calls;&nbsp;</LI><LI>Integrations with custom solutions;</LI><LI>And many more …</LI></UL><P>Stay tuned as we will be <STRONG>revealing more real-life agentic operations scenarios</STRONG> in the coming weeks.</P><H2 id="toc-hId-1397454463"><SPAN><span class="lia-unicode-emoji" title=":rocket:">🚀</span></SPAN><STRONG><SPAN>&nbsp;Get involved: Join Our Customer Engagement Initiative</SPAN></STRONG></H2><P class=""><SPAN>If you would like to bring your own automation scenario and experience first-hand the AI-driven operations capabilities of SAP Automation Pilot and Joule, join our <STRONG>Customer Engagement Initiative:&nbsp;<A title="CEI - Agentic AI Operations with SAP Automation Pilot" href="https://influence.sap.com/sap/ino/#campaign/4161" target="_self" rel="noopener noreferrer">Agentic AI Operations with SAP Automation Pilot</A></STRONG></SPAN></P><P class=""><SPAN><span class="lia-unicode-emoji" title=":hourglass_not_done:">⏳</span>Registration is open until </SPAN><STRONG><SPAN>March 15, 2026</SPAN></STRONG><SPAN>&nbsp;- secure your spot soon!</SPAN></P><DIV>&nbsp;</DIV><DIV><H2 id="toc-hId-1200940958">Learn More</H2><P><SPAN>If you would like to explore MCP servers in more detail or start using them in your own landscapes, please check out the SAP Automation Pilot official documentation which provides further insights:</SPAN></P></DIV><UL><LI><SPAN><A href="https://help.sap.com/docs/automation-pilot/automation-pilot/mcp-server" target="_blank" rel="noopener noreferrer">MCP Server Overview</A></SPAN>;</LI><LI><SPAN><A href="https://help.sap.com/docs/automation-pilot/automation-pilot/managing-mcp-servers?locale=en-US&amp;state=PRODUCTION&amp;version=Cloud" target="_blank" rel="noopener noreferrer">Managing MCP Servers</A></SPAN>;</LI><LI><A title="Integrating SAP Automation 
Pilot with Joule Studio" href="https://help.sap.com/docs/automation-pilot/automation-pilot/integrating-service-with-joule-studio" target="_self" rel="noopener noreferrer">Integrating SAP Automation Pilot with Joule Studio</A>&nbsp;;</LI></UL><P><SPAN>Looking forward to collaborating and shaping the future of agentic IT operations together.</SPAN></P> 2026-02-23T11:10:00.063000+01:00 https://community.sap.com/t5/sap-community-leaders-finder/gregor-wolf/ba-p/14334595 Gregor Wolf 2026-02-23T18:24:58.182000+01:00 StephanieMarley https://community.sap.com/t5/user/viewprofilepage/user-id/109 <P class=""><A href="https://community.sap.com/t5/user/viewprofilepage/user-id/12545" target="_self">About Gregor</A></P> <UL> <LI><FONT face="tahoma,arial,helvetica,sans-serif">Germany |&nbsp;</FONT><FONT face="tahoma,arial,helvetica,sans-serif">SAP Mentor since 2007 |&nbsp;</FONT><FONT face="tahoma,arial,helvetica,sans-serif">Follow Gregor on <A href="https://www.linkedin.com/in/gregorwolf/" target="_self" rel="nofollow noopener noreferrer">LinkedIn</A></FONT></LI> </UL> <P><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-left" image-alt="gregor.jpg" style="width: 200px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/375814i2FFBF9F0A61929ED/image-size/small?v=v2&amp;px=200" role="button" title="gregor.jpg" alt="gregor.jpg" /></span></STRONG><SPAN>I am&nbsp;an independent SAP solution architect, developer and SAP Mentor. I focus on SAP Business Technology Platform, SAP Cloud Application Programming Model (CAP), Node.JS, SAP Fiori Elements and Security. I've supported several customers during their SAP Cloud ERP Private Edition (S/4HANA On Premise) implementation. 
Since the end of 2024, I have been supporting technical topics at an SAP Cloud ERP Public Edition customer.</SPAN></P> <P><FONT face="tahoma,arial,helvetica,sans-serif"><STRONG>Topics of interest:&nbsp;</STRONG></FONT>SAP Business Technology Platform, SAP Cloud Application Programming Model (CAP), SAP Cloud ERP public and private, SAPUI5, OpenUI5, SAP Fiori, SAP HANA Cloud</P> <P><FONT face="tahoma,arial,helvetica,sans-serif"><STRONG>Gregor, what inspired you to become an SAP Mentor?</STRONG></FONT></P> <P><FONT face="tahoma,arial,helvetica,sans-serif"><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="handshake .png" style="width: 68px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/52491i8E9D2FFB3C7BC293/image-dimensions/68x68?v=v2" width="68" height="68" role="button" title="handshake .png" alt="handshake .png" /></span></STRONG></FONT>A key influence early in my SAP Community&nbsp;journey was Mark Finnern. He is the father of the SAP Community. Mark encouraged me to move beyond answering questions and start sharing not only detailed technical how-to's but also event reports in blog posts. 
In 2007, Mark also founded the SAP Mentors&nbsp;program, which gave me the opportunity to provide structured, experience-based feedback to SAP product teams and help bridge the gap between product vision and real-world implementation.</P> <P><FONT face="tahoma,arial,helvetica,sans-serif"><STRONG>What advice would you like to share with other SAP community members?</STRONG></FONT></P> <P><FONT face="tahoma,arial,helvetica,sans-serif"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="298874_collaborate_blue (1).png" style="width: 65px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/52495i64D82195EFF8CCB9/image-dimensions/65x65?v=v2" width="65" height="65" role="button" title="298874_collaborate_blue (1).png" alt="298874_collaborate_blue (1).png" /></span></FONT>My advice to other SAP Community members is simple: contribute consistently and think long-term.</P> <P>Start by answering questions and helping others solve real problems - but don’t stop there. Document your lessons learned, share architectural insights, and write about what worked and what didn’t. That’s how collective expertise grows. And you will be amazed how often you research something years later and find your own answers and blog posts solving the very problem you are facing. Despite all the new stuff, SAP has quite a long tail that doesn't change quickly.<BR /><BR />Engage constructively, challenge respectfully, and focus on signal over noise. 
The community is not just a support forum - it’s an ecosystem where practitioners can influence product direction, elevate standards, and shape the future of SAP together.</P> 2026-02-23T18:24:58.182000+01:00 https://community.sap.com/t5/technology-blog-posts-by-members/part-1-creating-and-deploying-a-sap-cap-application-with-sap-hana-on-sap/ba-p/14314095 Part 1: Creating and Deploying a SAP CAP Application with SAP HANA on SAP BTP (Cloud Foundry) 2026-02-25T07:35:57.804000+01:00 yashbhosle789 https://community.sap.com/t5/user/viewprofilepage/user-id/1892526 <P><EM>Note: This blog is the <STRONG>First Part</STRONG> of the series - Building and Deploying SAP CAP Applications on BTP: From Database to Work Zone</EM></P><P>&nbsp;</P><P><STRONG>Introduction</STRONG></P><P>SAP Cloud Application Programming Model (CAP) is often demonstrated through local examples, but its real value becomes clear when the application is deployed to SAP Business Technology Platform (BTP) and backed by a managed database.</P><P>In this blog, we will go through creating a basic SAP CAP application using the <STRONG>SAP Business Application Studio (BAS) CAP template</STRONG>, binding it to <STRONG>SAP HANA Cloud,</STRONG>&nbsp;and deploying it to <STRONG>SAP BTP using the Cloud Foundry runtime</STRONG>.</P><P>The goal here is not to build a complex application, but to establish a clean, working reference point. I start with a minimal CAP project to keep the focus on deployment rather than domain complexity.</P><P>This blog serves as a <STRONG>baseline</STRONG> for a series where the same CAP application will later be deployed using different databases and runtimes. 
</P><P>&nbsp;</P><P><STRONG>Prerequisites:</STRONG></P><P>Please make sure that you have these before starting the development:</P><UL><LI>SAP BTP Account</LI><LI>Cloud Foundry instance</LI><LI>Active SAP HANA Cloud instance</LI><LI>SAP BAS / VS Code for development</LI><LI>If using BAS, have a dev space running</LI><LI>If using VS Code, make sure that you have Node.js, the Cloud Foundry CLI, and the MBT build tool installed</LI></UL><P>&nbsp;</P><P>Now, let’s start with the development.</P><P>&nbsp;</P><P><STRONG>Create a CAP Project from the Template Given in BAS:</STRONG></P><P>SAP Business Application Studio provides a template for creating a CAP project with the required structure. The advantage of using the template is that it automatically generates your Multi-Target Application file (mta.yaml) with the required deployment modules.</P><P>&nbsp;</P><P>Follow these steps:</P><UL><LI>Open <STRONG>BAS</STRONG></LI><LI>Click on the <STRONG>Side Menu</STRONG> in the top-left corner</LI><LI>Click on <STRONG>File</STRONG></LI><LI>Click on <STRONG>New Project From Template</STRONG></LI><LI>Select <STRONG>CAP Project</STRONG></LI><LI>Configure the project as follows:<UL><LI><STRONG>Project Name</STRONG>: cap-hana-app</LI><LI><STRONG>Runtime</STRONG>: Node.js</LI><LI><STRONG>Database</STRONG>: SAP HANA Cloud</LI><LI><STRONG>Deployment Environment</STRONG>: Cloud Foundry</LI><LI><STRONG>Runtime Capabilities</STRONG>: XSUAA</LI></UL></LI><LI>Click on <STRONG>Finish</STRONG></LI></UL><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yashbhosle789_0-1771265545156.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373190i1AB58ADFA9861458/image-size/medium?v=v2&amp;px=400" role="button" title="yashbhosle789_0-1771265545156.png" alt="yashbhosle789_0-1771265545156.png" /></span></P><P>CAP Project creation wizard with SAP 
HANA selected</P><P>&nbsp;</P><P>This will create a CAP project for you with mta.yaml automatically updated. The generated project follows the standard CAP structure:</P><UL><LI>app – UI</LI><LI>db – Schema and data files</LI><LI>srv – Service definitions</LI></UL><P>&nbsp;</P><P>Now that the project is created, let’s get into development.</P><P>&nbsp;</P><P><STRONG>Development:</STRONG></P><P>Follow these steps:</P><UL><LI>Define a Simple Data Model:</LI></UL><P class="lia-indent-padding-left-60px" style="padding-left : 60px;">If you have a schema.cds file under the db/ folder, update it with the following code snippet. If you do not have schema.cds, then create one.</P><pre class="lia-code-sample language-bash"><code>namespace com.mycompany.capapp;

entity Products {
  key ID          : UUID;
      name        : String(100);
      description : String(500);
      price       : Decimal(10, 2);
      stock       : Integer;
      category    : String(50);
      createdAt   : Timestamp;
      modifiedAt  : Timestamp;
}</code></pre><P>&nbsp;</P><UL><LI>Expose the Model via a CAP Service:</LI></UL><P class="lia-indent-padding-left-60px" style="padding-left : 60px;">If you have a service.cds file under the srv/ folder, update it with the following code snippet. 
If you do not have service.cds, then create one.</P><pre class="lia-code-sample language-bash"><code>using com.mycompany.capapp from '../db/schema';

service CatalogService {
  entity Products as projection on capapp.Products;
}</code></pre><P class="lia-indent-padding-left-60px" style="padding-left : 60px;">CAP automatically exposes this service as an OData endpoint without requiring manual REST configuration.</P><P>&nbsp;</P><UL><LI>Add Data:</LI></UL><P class="lia-indent-padding-left-60px" style="padding-left : 60px;">Run the following in the project terminal:</P><pre class="lia-code-sample language-bash"><code>cds add data --records 5</code></pre><P>&nbsp;</P><UL><LI>Run the Application Locally:</LI></UL><P class="lia-indent-padding-left-60px" style="padding-left : 60px;">Run the following command:</P><pre class="lia-code-sample language-bash"><code>cds watch</code></pre><P class="lia-indent-padding-left-60px" style="padding-left : 60px;">You should be able to see the data in the <STRONG>Products</STRONG> entity.</P><P class="lia-indent-padding-left-60px" style="padding-left : 60px;">You can also use SQLite as the database while running locally. Execute these commands before starting the application:</P><pre class="lia-code-sample language-bash"><code>cds build
cds deploy</code></pre><P><STRONG>&nbsp;</STRONG></P><UL><LI>Add a Managed Approuter:</LI></UL><OL><LI>Right-click on <STRONG>mta.yaml</STRONG></LI><LI>Click on <STRONG>Create MTA Module from Template</STRONG></LI><LI>Select <STRONG>Approuter Configuration</STRONG></LI><LI>Fill in the required details</LI><LI>Click <STRONG>Finish</STRONG></LI></OL><P>&nbsp;</P><P>The approuter is important for accessing the data once the application is deployed.</P><P>&nbsp;</P><P><STRONG>Prepare the Project for Deployment:</STRONG></P><P>Since we have created the project from the BAS template, we already have the mta.yaml updated and the project deployment-ready. 
This file already includes:</P><UL><LI>CAP Service Modules</LI><LI>SAP HANA HDI Container</LI><LI>Required Service Bindings</LI></UL><P>&nbsp;</P><P><STRONG>Build the CAP Project:</STRONG></P><P>First things first, install the Node modules. Run the following in the terminal:</P><pre class="lia-code-sample language-bash"><code>npm i</code></pre><P>To build the project, you can either run the following in the terminal:</P><pre class="lia-code-sample language-bash"><code>mbt build</code></pre><P>or do it manually:</P><UL><LI>Right-click on <STRONG>mta.yaml</STRONG></LI><LI>Select <STRONG>Build MTA Project</STRONG></LI></UL><P>This will build your project for deployment. Take a look at the project structure: you will see a new folder, <STRONG>mta_archives</STRONG>, which contains the <STRONG>mtar</STRONG> file of the project.</P><P>&nbsp;</P><P><STRONG>Deploy the Application:</STRONG></P><P>To deploy the application to SAP BTP using Cloud Foundry, you will first have to log in to Cloud Foundry. Run the following in the terminal:</P><pre class="lia-code-sample language-bash"><code>cf login</code></pre><P>This will ask for your subaccount credentials. Provide them and you will be logged in. Remember to target the correct <STRONG>org</STRONG> and <STRONG>space</STRONG>.</P><P>Once you are logged in successfully, you are ready to deploy the application. 
Remember the mtar file generated while building the CAP project? This is the same file that will be deployed.</P><P>Run the following command:</P><pre class="lia-code-sample language-bash"><code>cf deploy mta_archives/cap-hana-app_1.0.0.mtar</code></pre><P>or, you can just right-click on <STRONG>cap-hana-app_1.0.0.mtar</STRONG> under the <STRONG>mta_archives</STRONG> folder and click on <STRONG>Deploy MTA Archive.</STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yashbhosle789_1-1771266472799.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373199iFD534F4F4F589993/image-size/medium?v=v2&amp;px=400" role="button" title="yashbhosle789_1-1771266472799.png" alt="yashbhosle789_1-1771266472799.png" /></span></P><P>Successful Cloud Foundry deployment</P><P>&nbsp;</P><P>During deployment:</P><UL><LI>The HDI container is created</LI><LI>The database schema is deployed</LI><LI>The CAP service is bound to the database automatically</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yashbhosle789_2-1771266472809.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/373200iD76C1135E451AE31/image-size/medium?v=v2&amp;px=400" role="button" title="yashbhosle789_2-1771266472809.png" alt="yashbhosle789_2-1771266472809.png" /></span></P><P>Confirm the Deployed Instances</P><P>&nbsp;</P><P><STRONG>Verify the Deployed Application:</STRONG></P><P>When the deployment is successful, you will get an application route. 
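Besides opening the route in a browser, the shape of the service response can also be checked programmatically. Below is a minimal Node.js sketch; the payload is a hand-written illustration of the standard OData V4 collection envelope that CAP services return, not actual output from this application:

```javascript
// Sketch: validate the shape of an OData V4 collection response, such as the
// one returned by GET <application-route>/catalog/Products.
// The payload below is an illustrative assumption, not real output.
const samplePayload = JSON.stringify({
  "@odata.context": "$metadata#Products",
  value: [
    {
      ID: "11111111-2222-3333-4444-555555555555",
      name: "Sample Product",
      price: 9.99,
      stock: 12,
    },
  ],
});

// Returns the entity rows, or throws if the body is not an OData collection.
function parseODataCollection(body) {
  const doc = JSON.parse(body);
  if (!Array.isArray(doc.value)) {
    throw new Error("Not an OData collection response");
  }
  return doc.value;
}

const products = parseODataCollection(samplePayload);
console.log(`Received ${products.length} product(s); first: ${products[0].name}`);
```

The same check applies to the live response fetched from the application route once it is available.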
Copy it, open it in a new tab of your browser, and access the service endpoint: <STRONG>/catalog/Products</STRONG></P><P>If the deployment is successful, you will see the data displayed in JSON format.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="yashbhosle789_0-1771269609219.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/374382iF18B31DAA94E39B4/image-size/medium?v=v2&amp;px=400" role="button" title="yashbhosle789_0-1771269609219.png" alt="yashbhosle789_0-1771269609219.png" /></span></P><P>Browser showing the OData service response</P><P>&nbsp;</P><P><STRONG>Conclusion</STRONG></P><P class="">With that, we have successfully created a minimal SAP CAP application, connected it to SAP HANA Cloud, and deployed it to SAP BTP using the Cloud Foundry runtime. While the application itself is intentionally simple, the deployment pipeline we established here — from schema definition to MTA build to Cloud Foundry deployment — reflects a production-ready workflow that scales well as complexity grows.</P><P class="">The key takeaways from this exercise are that the BAS CAP template significantly reduces boilerplate by auto-generating the mta.yaml with the necessary modules and bindings, the HDI container handles schema deployment seamlessly, and CAP's convention-over-configuration approach means your OData service is ready without any manual REST wiring.</P><P class="">This baseline will serve as the foundation for the rest of this series, where we will explore deploying the same application with different databases, runtimes, and configurations — making it easier to understand the trade-offs of each approach. 
Stay tuned for the next part!</P><P>&nbsp;</P><P><STRONG>Next Steps</STRONG></P><P>In the upcoming blogs, we will focus on creating the same application using <STRONG>PostgreSQL (Hyperscaler Option)</STRONG>.</P> 2026-02-25T07:35:57.804000+01:00 https://community.sap.com/t5/sap-for-utilities-blog-posts/mass-invoice-reversals-without-customer-panic-a-clean-core-btp-driven/ba-p/14328325 Mass Invoice Reversals Without Customer Panic: A Clean Core, BTP Driven Strategy for Modern Utilities 2026-02-27T13:18:41.584000+01:00 Atul_Joshi85 https://community.sap.com/t5/user/viewprofilepage/user-id/2274193 <H1 id="toc-hId-1660619201">Mass Invoice Reversals Without Customer Panic: A Clean‑Core, BTP‑Driven Strategy for Modern Utilities</H1><H2 id="toc-hId-1593188415">Executive Summary</H2><P>If you’ve spent any time around utility billing operations, you know mass reversals aren’t rare — they’re part of the job. Sometimes it’s a rate issue, sometimes a meter read problem, and sometimes it’s just one of those unexpected data surprises that ripple across months of billing. These reversals often span multiple billing cycles and thousands of customers. While the operational process may take days to complete, the customer impact is immediate: as soon as reversals post, customers see large credits on their online portals. This triggers confusion, social media chatter, call‑center spikes, and reputational risk before corrected bills are ready.</P><P>What I want to share here is a practical way utilities can handle this without rewriting IS‑U or scrambling teams at 6 AM. We used SAP BTP to create a small visibility layer that sits between IS‑U and the customer portal — combining event‑driven architecture and lightweight AI to prevent premature credit visibility without modifying SAP IS‑U or customer‑facing systems. Honestly, it solved a problem that has bothered billing teams for years. 
The solution introduces a “Billing Visibility Orchestration Layer” that controls what customers see during mass reversals, ensuring operational accuracy without customer panic.</P><H2 id="toc-hId-1396674910">The Practitioner Reality: When a Simple Reversal Becomes a Public Crisis</H2><P>Every utility billing team has lived through this scenario:</P><UL><LI>A mass correction requires reversing 8–12 months of invoices.</LI><LI>The reversal job runs for hours or days.</LI><LI>As soon as reversals post, customers see large credits on the portal.</LI><LI>Screenshots appear on social media: <EM>“Utility X just gave me $900 credit!”</EM></LI><LI>Call centers are overwhelmed.</LI><LI>Regulators ask questions.</LI><LI>The billing team is still in the middle of the correction.</LI></UL><P>The root cause is simple: <STRONG>SAP IS‑U posts reversals immediately, and customer portals read them immediately.</STRONG> And here’s the part nobody talks about: IS‑U has no built‑in buffer. The moment a reversal posts, the customer sees it. There’s no ‘hold this until we’re done’ button. That’s where the trouble starts. Utilities cannot legally notify customers in advance, and they cannot delay the operational correction. 
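To make the gap concrete, the per-account decision such a visibility orchestration layer has to make can be sketched in a few lines of Node.js. Everything here — the threshold, field names, and data shapes — is an illustrative assumption, not an SAP API or SAP Business Rules syntax; only the neutral customer message is taken from the approach described in this post:

```javascript
// Illustrative sketch of the visibility decision during a mass reversal.
// Names, thresholds, and data shapes are assumptions for the example.
const MASS_REVERSAL_THRESHOLD = 500; // accounts affected in one reversal run

function portalView(account, reversalRun) {
  const isMassReversal =
    reversalRun.affectedAccounts >= MASS_REVERSAL_THRESHOLD;
  const underCorrection =
    isMassReversal &&
    reversalRun.accounts.has(account.id) &&
    !reversalRun.rebillComplete;

  if (underCorrection) {
    // Suppress the temporary credit and show a neutral message instead.
    return {
      balanceVisible: false,
      message:
        "Your billing is being updated. Your final balance will be available shortly.",
    };
  }
  return { balanceVisible: true, balance: account.balance };
}

const run = {
  affectedAccounts: 180000, // a large reversal run
  rebillComplete: false,
  accounts: new Set(["A-1"]),
};
console.log(portalView({ id: "A-1", balance: -900 }, run)); // suppressed credit
console.log(portalView({ id: "B-2", balance: 42 }, run)); // unaffected account
```

Once `rebillComplete` flips to true, the same function releases the final balance — mirroring the "release final billing once corrections are complete" step of the orchestration described below.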
This creates a structural gap between <STRONG>internal billing processes</STRONG> and <STRONG>external customer visibility</STRONG>.</P><H2 id="toc-hId-1200161405">Why This Happens: A Technical Gap in Traditional IS‑U</H2><P>SAP IS‑U was designed with a <STRONG>real‑time posting model</STRONG>:</P><UL><LI>Reversal document posts →</LI><LI>FICA updates →</LI><LI>Portal reads →</LI><LI>Customer sees credit</LI></UL><P>There is no concept of:</P><UL><LI>“Temporary suppression”</LI><LI>“Billing in progress”</LI><LI>“Visibility hold”</LI><LI>“Batch publish”</LI></UL><P>This is not a failure of IS‑U — it’s simply a design that predates modern digital customer expectations.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Atul_Joshi85_0-1771558068995.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/374549i9D6733969DC767A6/image-size/medium/is-moderation-mode/true?v=v2&amp;px=400" role="button" title="Atul_Joshi85_0-1771558068995.png" alt="Atul_Joshi85_0-1771558068995.png" /></span></P><H2 id="toc-hId-1003647900">The Modern Solution: A Billing Visibility Orchestration Layer on SAP BTP</H2><P>Instead of modifying IS‑U or customer portals, utilities can introduce a lightweight, clean‑core extension on SAP BTP that controls <STRONG>visibility</STRONG>, not <STRONG>billing</STRONG>.</P><P>This layer sits between:</P><H3 id="toc-hId-936217114">SAP IS‑U → BTP → Customer Portal / CIS / Mobile App</H3><P>It does not change billing logic. It does not alter FICA. It does not modify the portal. 
It simply orchestrates <STRONG>when</STRONG> billing information becomes visible.</P><H3 id="toc-hId-739703609">Key Capabilities</H3><UL><LI>Detect mass reversals in real time</LI><LI>Flag accounts as “under correction”</LI><LI>Temporarily suppress credit visibility</LI><LI>Replace credit display with a neutral message</LI><LI>Release final billing once corrections are complete</LI></UL><P>This simple layer kept customers from seeing half‑baked credits and gave the billing team the breathing room they needed to finish the job properly.</P><H2 id="toc-hId-414107385">Textual Representation</H2><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Atul_Joshi85_0-1771016386222.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/372488i4515749CD177C730/image-size/medium/is-moderation-mode/true?v=v2&amp;px=400" role="button" title="Atul_Joshi85_0-1771016386222.png" alt="Atul_Joshi85_0-1771016386222.png" /></span></P><P>&nbsp;</P><H2 id="toc-hId-217593880">How It Works Using BTP Services</H2><H3 id="toc-hId-150163094">1. SAP Event Mesh — Detect Reversals in Real Time</H3><P>Event Mesh listens for:</P><UL><LI>Billing document reversals</LI><LI>FKK document reversals</LI><LI>Mass job triggers</LI></UL><P>When reversal volume exceeds a threshold (e.g., &gt;500 accounts), it triggers the suppression workflow.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Atul_Joshi85_0-1771558513456.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/374550i5A4FB97BCF88B9B2/image-size/medium/is-moderation-mode/true?v=v2&amp;px=400" role="button" title="Atul_Joshi85_0-1771558513456.png" alt="Atul_Joshi85_0-1771558513456.png" /></span></P><H3 id="toc-hId--121581780">2. 
SAP BTP Workflow — Manage the Correction Window</H3><P>Workflow orchestrates:</P><UL><LI>Start of reversal window</LI><LI>Suppression period</LI><LI>Completion of rebilling</LI><LI>Release of final invoice</LI></UL><P>This creates a controlled, auditable process.</P><H3 id="toc-hId--318095285">3. SAP BTP Business Rules — Define Visibility Logic</H3><P>Rules can be configured without coding:</P><UL><LI>“If reversal count &gt; X, suppress visibility for Y hours.”</LI><LI>“If account is in correction, show neutral message.”</LI><LI>“If rebill fails, extend suppression.”</LI></UL><P>This keeps the solution clean‑core and upgrade‑safe.</P><H3 id="toc-hId--514608790">4. SAP HANA Cloud — Temporary Data Store</H3><P>Stores:</P><UL><LI>Accounts under correction</LI><LI>Suppression flags</LI><LI>Workflow status</LI><LI>Rebill completion indicators</LI></UL><P>No changes to IS‑U tables.</P><H3 id="toc-hId--711122295">5. SAP BTP AI — Predict and Prevent Customer Impact</H3><P>AI can:</P><UL><LI>Identify accounts likely to call</LI><LI>Detect social media spikes</LI><LI>Predict rebill duration</LI><LI>Recommend suppression windows</LI><LI>Flag anomalies in reversal patterns</LI></UL><P>This turns a reactive process into a proactive one.</P><H2 id="toc-hId--614232793">What the Customer Sees Instead</H2><P>Instead of a large credit, the portal displays:</P><P><STRONG>“Your billing is being updated.</STRONG> <STRONG>Your final balance will be available shortly.”</STRONG></P><P>This avoids:</P><UL><LI>Panic</LI><LI>Misinterpretation</LI><LI>Social media escalation</LI><LI>Call center overload</LI></UL><H2 id="toc-hId--810746298">Clean‑Core Alignment</H2><P>This approach is fully clean‑core because:</P><UL><LI>No IS‑U modifications</LI><LI>No FICA changes</LI><LI>No portal rewrites</LI><LI>No custom Z‑tables in SAP</LI><LI>All logic runs on BTP</LI><LI>Fully upgrade‑safe</LI><LI>Works with ECC and S/4HANA</LI></UL><P>This is exactly what SAP means by <STRONG>“extensibility without 
modification.”</STRONG></P><H1 id="toc-hId--713856796">Practitioner Example</H1><P>In one of the utilities I worked with, we had to reverse nearly 10 months of bills for over 180,000 customers. Within 45 minutes, screenshots of huge credits were all over social media. The call center was drowning. Regulators were pinging leadership. It was chaos — and all because the portal updated too early.</P><P>After implementing a BTP‑based visibility layer:</P><UL><LI>Mass reversals ran silently</LI><LI>No customer saw temporary credits</LI><LI>No social media chatter</LI><LI>No call center spike</LI><LI>Corrected bills were published cleanly</LI></UL><P>The billing team completed the same process with <STRONG>zero customer disruption</STRONG>.</P><H1 id="toc-hId--910370301">Conclusion</H1><P>Mass invoice reversals will always be part of utility operations. But customer visibility does not need to be immediate or uncontrolled. By introducing a Billing Visibility Orchestration Layer on SAP BTP — powered by Event Mesh, Workflow, Business Rules, HANA Cloud, and AI — utilities can protect customer experience, reduce operational risk, and maintain clean‑core principles.</P><P>This is modernization where it matters: not in rewriting billing logic, but in orchestrating how and when customers see the results.</P><P>&nbsp;</P> 2026-02-27T13:18:41.584000+01:00 https://community.sap.com/t5/technology-blog-posts-by-sap/automating-sap-hana-cloud-other-environment-db-lifecycle-management-with/ba-p/14338555 Automating SAP HANA Cloud (Other Environment) DB Lifecycle Management with SAP Automation Pilot 2026-02-28T10:56:26.280000+01:00 BiserSimeonov https://community.sap.com/t5/user/viewprofilepage/user-id/3334 <P>In our recent updates around <A href="https://community.sap.com/t5/technology-blog-posts-by-sap/meet-your-ops-agents-powered-by-sap-automation-pilot-and-joule/ba-p/14333881" target="_self">AI-enabled Ops Operations</A>, we demonstrated how<A title="SAP Automation Pilot - Discovery 
Center" href="https://discovery-center.cloud.sap/serviceCatalog/automation-pilot" target="_self" rel="nofollow noopener noreferrer"><STRONG> SAP Automation Pilot</STRONG></A> enables structured, scalable, and intelligent automation for Day-2 Ops Agents through Joule.</P><P>Today, I am excited to take this one step further.</P><P><span class="lia-unicode-emoji" title=":rocket:">🚀</span>&nbsp; We are expanding our automation coverage to fully address <STRONG>SAP HANA Cloud lifecycle management in the Other Environment</STRONG>&nbsp;- bringing operational excellence to database management at scale.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AutomationPilot-HANA-Ops-Automations.png" style="width: 892px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/377969i87CF6302CDE07CB6/image-dimensions/892x407/is-moderation-mode/true?v=v2" width="892" height="407" role="button" title="AutomationPilot-HANA-Ops-Automations.png" alt="AutomationPilot-HANA-Ops-Automations.png" /></span></P><H2 id="toc-hId-1790627456">From Standard Catalogs to Broader Lifecycle Automation</H2><P>SAP Automation Pilot already provides a built-in catalog for managing SAP HANA Cloud database instances running in the Cloud Foundry environment - that is the well-known <A title="SAP BTP Database Lifecycle Management (dblm-sapcp) Catalog" href="https://help.sap.com/docs/automation-pilot/automation-pilot/sap-btp-database-lifecycle-management-dblm-sapcp-catalog" target="_self" rel="noopener noreferrer"><STRONG>SAP BTP Database Lifecycle Management (dblm-sapcp) Catalog</STRONG></A>.</P><P>This catalog has successfully helped customers automate essential lifecycle tasks directly within SAP BTP.</P><P>However, many customers also operate SAP HANA Cloud instances in the <STRONG>Other Environment</STRONG>, where lifecycle management is equally critical and operationally sensitive. 
While SAP Automation Pilot already offers all the technical capabilities required to automate those tasks, we recognized the need to package common scenarios into a structured, reusable, and publicly available catalog.</P><P>That is exactly what we have now delivered!&nbsp;</P><H3 id="toc-hId-1723196670">Introducing: SAP HANA Cloud Lifecycle Management Sample Catalog (Other Environment)</H3><P>We have consolidated a wide range of SAP HANA Cloud lifecycle scenarios into a <STRONG>dedicated public sample catalog</STRONG>, now available in our GitHub repository here:&nbsp;<STRONG><A title="SAP HANA Cloud Lifecycle Management Sample Catalog" href="https://github.com/SAP-samples/automation-pilot-examples/tree/main/hana-lifecycle-management#sap-hana-cloud-lifecycle-management" target="_self" rel="nofollow noopener noreferrer">SAP HANA Cloud Lifecycle Management Sample Catalog (Other Environment)</A></STRONG></P><P>This catalog provides a comprehensive and validated set of automation commands for managing SAP HANA Cloud instances at subaccount level using Service Manager.</P><P>Instead of executing repetitive and sensitive database operations manually, you can now automate domains such as: <span class="lia-unicode-emoji" title=":small_blue_diamond:">🔹</span> <STRONG>Instance Lifecycle Management </STRONG><STRONG><span class="lia-unicode-emoji" title=":small_blue_diamond:">🔹</span>Backup &amp; Recovery&nbsp;<span class="lia-unicode-emoji" title=":small_blue_diamond:">🔹</span> Snapshot Management <span class="lia-unicode-emoji" title=":small_blue_diamond:">🔹</span> Instance Configuration<span class="lia-unicode-emoji" title=":small_blue_diamond:">🔹</span> Upgrades &amp; Plugins <span class="lia-unicode-emoji" title=":small_blue_diamond:">🔹</span> Elastic Compute Nodes (ECN), etc.&nbsp;</STRONG></P><P><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>&nbsp;All commands are validated and ready to be explored by our users - to help you minimize manual 
effort, reduce risk, and standardize operational procedures.</P><H2 id="toc-hId-1397600446">Beyond Automation: Toward Intelligent Database Operations</H2><P>While structured lifecycle automation already delivers immediate value, the real transformation begins when automation becomes <STRONG>agentic</STRONG>.</P><P>Because all HANA Cloud-related commands are exposed through SAP Automation Pilot, they can be orchestrated, combined, and consumed in higher-level operational workflows.&nbsp;</P><P>This naturally leads to the next evolution - <STRONG>AI Ops Agents for HANA Cloud enabled with SAP Automation Pilot and Joule!&nbsp;</STRONG></P><P><STRONG><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="hana-cloud-ops-agent.png" style="width: 807px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/377970i1AA3192A6EE4C3D4/image-dimensions/807x357/is-moderation-mode/true?v=v2" width="807" height="357" role="button" title="hana-cloud-ops-agent.png" alt="hana-cloud-ops-agent.png" /></span></STRONG></P><H3 id="toc-hId-1330169660"><span class="lia-unicode-emoji" title=":open_book:">📖</span>Be Part of the Journey - Join Our Customer Engagement Initiative</H3><P>As we continue expanding agentic AI operations, collaboration with customers remains key.</P><P>If you would like to gain hands-on experience with AI-enabled Ops automation and help shape the future of intelligent database operations, we invite you to subscribe to our ongoing <STRONG>Customer Engagement Initiative</STRONG>:&nbsp;<A title="CEI: Agentic AI Operations with SAP Automation Pilot" href="https://influence.sap.com/sap/ino/#campaign/4161" target="_self" rel="noopener noreferrer"><STRONG>Agentic AI Operations with SAP Automation Pilot</STRONG></A></P><P>Through this initiative, you can:</P><UL><LI>Explore real-world AI-driven automation scenarios;</LI><LI>Bring your own use cases and operational challenges;</LI><LI>Provide direct feedback to the product 
team;</LI><LI>Influence upcoming capabilities;</LI></UL><P><SPAN><SPAN class="lia-unicode-emoji"><span class="lia-unicode-emoji" title=":hourglass_not_done:">⏳</span></SPAN>&nbsp;Registration is open until&nbsp;</SPAN><STRONG><SPAN>March 15, 2026</SPAN></STRONG><SPAN>&nbsp;- secure your spot soon!</SPAN></P><P>From structured lifecycle automation to fully agentic database operations - this is the next chapter of operational excellence with SAP Automation Pilot.</P><P>Let’s build it together!&nbsp;</P> 2026-02-28T10:56:26.280000+01:00