https://raw.githubusercontent.com/ajmaradiaga/feeds/main/scmt/topics/SAP-HANA-blog-posts.xml SAP Community - SAP HANA 2025-05-13T20:01:12.257524+00:00 python-feedgen SAP HANA blog posts in SAP Community https://community.sap.com/t5/technology-blog-posts-by-sap/shaping-the-future-with-data-and-ai-sap-hana-cloud-knowledge-graph-engine/ba-p/14066727 Shaping the Future with Data and AI: SAP HANA Cloud Knowledge Graph Engine and Generative AI Toolkit 2025-04-04T19:35:42.714000+02:00 stefan_baeuerle https://community.sap.com/t5/user/viewprofilepage/user-id/512708 <P>We’re pleased to announce the <STRONG>general availability</STRONG> of two groundbreaking technologies: the <STRONG>SAP HANA Cloud Knowledge Graph Engine</STRONG> and the <STRONG>Generative AI Toolkit</STRONG>. These solutions are designed to elevate your business intelligence and AI capabilities, offering enhanced data connectivity, smarter insights, and faster decision-making.</P><P>This blog post highlights the features of these new offerings and showcases how they align with SAP’s long-standing commitment to deliver innovations that drive your business forward. More importantly, it’s our way of saying: We’ve kept our promise.</P><P>&nbsp;</P><H3 id="toc-hId-1836535534"><STRONG>SAP HANA Cloud Knowledge Graph Engine: A New Era of Data Connectivity</STRONG></H3><P>At SAP TechEd 2024, we introduced a major leap forward in how businesses can manage and navigate their data. The <STRONG>SAP HANA Cloud Knowledge Graph Engine</STRONG> empowers organizations to integrate, query, and leverage their data in entirely new ways by making connections and relationships between data points transparent. The new engine in SAP HANA Cloud enables you to fully harness your data's potential, driving smarter analytics, streamlined data management, and enhanced business results.</P><P>The Knowledge Graph Engine unlocks a deeper understanding of your data by capturing and analyzing complex relationships—something traditional databases struggle to do. This feature introduces a semantic layer on top of your data, enabling you to define, connect, and visualize entities across multiple sources. 
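</P><P>Before walking through the features, here is a minimal, hedged sketch of what querying the engine can look like in practice. It assumes the <STRONG>SPARQL_EXECUTE</STRONG> procedure that the SAP HANA Cloud knowledge graph documentation describes as the SQL entry point for SPARQL; the predicate IRI and the data behind it are hypothetical.</P><pre class="lia-code-sample language-sql"><code>-- Send a SPARQL query through the SQL interface.
-- The predicate IRI and graph contents are made-up examples;
-- SPARQL_EXECUTE is the documented SQL entry point for SPARQL.
CALL SPARQL_EXECUTE(
  'SELECT ?product ?supplier
   WHERE { ?product &lt;http://example.org/suppliedBy&gt; ?supplier }
   LIMIT 10',
  'Accept: application/sparql-results+csv',  -- requested response format
  ?,  -- OUT: query result
  ?   -- OUT: response metadata
);</code></pre>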
<P>The engine is built to handle and optimize data in the triple store format, an efficient method for representing complex, interconnected data, which enables better decision-making and more flexible data models.</P><P><STRONG>Key Features:</STRONG></P><UL><LI><STRONG>Informed Decision-Making:</STRONG> Utilizing the Knowledge Graph Engine enables you to extract richer insights from your data, facilitating more accurate and well-informed decisions.</LI><LI><STRONG>SQL and SPARQL Interoperability: </STRONG>The Knowledge Graph Engine will support <A href="https://www.w3.org/2001/sw/wiki/SPARQL" target="_blank" rel="noopener nofollow noreferrer">SPARQL</A>, the specialized query language for knowledge graph data, while tightly integrating it with SQL.</LI><LI><STRONG>Enhanced Data Structure:</STRONG> With RDF and SPARQL's structured framework, your data models gain enhanced logical precision, leading to more dependable and accurate results.</LI><LI><STRONG>Enhanced AI &amp; Analytics:</STRONG> Provides structured context to AI-driven applications, improving decision-making, recommendations, and search accuracy.</LI><LI><STRONG>Automated Reasoning &amp; Inference: </STRONG>Uses built-in logic to infer new relationships from existing data, minimizing the manual data maintenance that relational data models typically require.</LI></UL><P>Whether you’re looking to enhance your data governance, improve the discoverability of insights, or develop smarter applications, the Knowledge Graph Engine will be a game-changer in how you think about and use data.</P><P>&nbsp;</P><H3 id="toc-hId-1640022029"><STRONG>Generative AI Toolkit: Unlocking Innovation &amp; Productivity </STRONG></H3><P>Alongside the Knowledge Graph Engine, we’re also launching the <STRONG>Generative AI Toolkit</STRONG> for SAP HANA Cloud. As the world of AI continues to evolve, our toolkit is designed to make AI more accessible and usable for our customers. The <SPAN><A href="https://github.com/SAP/generative-ai-toolkit-for-sap-hana-cloud" target="_blank" rel="noopener nofollow noreferrer">Generative AI Toolkit</A></SPAN> is a powerful solution designed to accelerate AI-driven development and data analysis. 
Users can, for example, effortlessly query datasets using natural language and receive results through intuitive AI-driven interactions.</P><P><STRONG>Key Features:</STRONG></P><UL><LI><STRONG>Open-Source Python Library &amp; Integration:</STRONG> Leverage advanced generative AI capabilities through <A href="https://pypi.org/project/hana-ai/" target="_blank" rel="noopener nofollow noreferrer">Python</A>, enabling intuitive development and smooth integration for AI-driven tasks.</LI><LI><STRONG>Natural Language-Driven Analysis &amp; Automated Code Generation: </STRONG>Perform dataframe analysis with natural language queries and automatically generate code based on best practices and business-scenario code templates.</LI><LI><STRONG>Retrieval Augmented Generation &amp; LLMs Integration: </STRONG>Support for RAG with multiple vector stores and integration with large language models (LLMs) via the Generative AI Hub SDK for more accurate and context-aware AI solutions.</LI><LI><STRONG>Unlocking SAP HANA Cloud’s Database AI Engine Capabilities: </STRONG>Utilize targeted tools to generate machine learning models for scenarios like time series forecasting on your business data.</LI><LI><STRONG>Streamlined Machine Learning Application Development: </STRONG>Accelerate the infusion of machine learning scenarios, like classification, regression, or time series forecasting, into CAP applications with agentic, natural language assistance. This allows for a much faster getting-started experience, simplifying the overall embedded AI application development process.</LI></UL><P>As organizations seek to empower their workforce with smarter, AI-powered tools, the Generative AI Toolkit offers the scalability and flexibility needed to drive innovation, improve decision-making, and accelerate time-to-value across the organization.</P><P>&nbsp;</P><H3 id="toc-hId-1443508524"><STRONG>Keep the Promise</STRONG></H3><P>Both the Knowledge Graph Engine and the Generative AI Toolkit were born out of feedback from our customers and years of research into the challenges businesses face in a data-driven world. In our previous announcements and at past SAP TechEd events, we made bold commitments to revolutionize how organizations use data and AI. We’ve delivered on those promises with these two cutting-edge solutions.</P><P>In line with our dedication to providing continuous value, we’ve ensured that the Knowledge Graph Engine and Generative AI Toolkit are production-ready, offering you robust, scalable, and secure platforms to build your next-generation applications and analytics.</P><P>&nbsp;</P><H3 id="toc-hId-1246995019"><STRONG>Early Adopter Care Program: Join us on the Journey</STRONG></H3><P>As we mark the general availability of these solutions, we also want to extend an invitation to our customers and partners to join our <STRONG>Early Adopter Care (EAC) Programs</STRONG>. These exclusive programs offer personalized support and guidance as you explore and implement the Knowledge Graph Engine or Elasticity in SAP HANA Cloud.</P><P>Being part of the EAC programs means you’ll get early access to the latest features, tools, and documentation, as well as direct engagement with SAP experts who can assist you in optimizing the use of these technologies for your specific needs. Additionally, you’ll be able to influence the direction of future updates and enhancements, ensuring that SAP HANA Cloud continues to meet the evolving demands of your business.</P><P><STRONG>How to Get Involved? 
</STRONG></P><P><STRONG>Visit the EAC program pages</STRONG> to learn more about how to enroll:</P><UL><LI><SPAN><A href="https://community.sap.com/t5/technology-blogs-by-sap/become-an-early-adopter-for-the-knowledge-graph-engine-in-sap-hana-cloud/ba-p/14021136" target="_blank">Knowledge Graph Engine - EAC Program</A></SPAN></LI><LI><SPAN><A href="https://influence.sap.com/sap/ino/#campaign/3761" target="_blank" rel="noopener noreferrer">Elasticity in SAP HANA Cloud - EAC Program</A></SPAN></LI></UL><P>By joining the EAC programs, you will be at the forefront of the next wave of innovation, helping us shape the future of data management and AI.</P><P>&nbsp;</P><H3 id="toc-hId-1050481514">Looking Ahead</H3><P>I’m incredibly excited about the opportunities these new capabilities bring to the table. From smarter data connectivity to AI-driven productivity, SAP HANA Cloud is setting the stage for a new era of innovation in data and AI.</P><P>We believe these advancements will not only help organizations thrive in an increasingly digital world, but also empower individuals across industries to make better, more informed decisions, faster. These technologies will be pivotal in shaping the future of how businesses interact with data, AI, and the world around them.</P><P>We’re committed to supporting your digital transformation journey every step of the way, and we can’t wait to see how you leverage these new capabilities to achieve greater success.</P><P>Thank you for being part of the SAP community. The journey has only just begun!</P><P>&nbsp;</P><P>&nbsp;</P> 2025-04-04T19:35:42.714000+02:00 https://community.sap.com/t5/technology-blog-posts-by-members/three-reasons-sap-businessobjects-still-matters-in-2025/ba-p/14068562 Three Reasons SAP BusinessObjects Still Matters in 2025 2025-04-07T17:00:00.028000+02:00 dallasmarks https://community.sap.com/t5/user/viewprofilepage/user-id/182221 <P>In a previous article, I answered the question "<A class="" href="https://community.sap.com/t5/technology-blogs-by-members/is-businessobjects-still-relevant-in-2025/ba-p/13979566" target="_blank">Is BusinessObjects Still Relevant in 2025?</A>" with a resounding yes, giving a number of reasons why the platform still matters. In this article, I provide three reasons why the SAP BusinessObjects platform still matters in 2025.</P><H1 id="toc-hId-1578427875">Rich Semantic Layer</H1><P>The first reason SAP BusinessObjects still matters in 2025 is its rich semantic layers. Whether we are talking about a classic BusinessObjects universe or the semantic layers provided by <a href="https://community.sap.com/t5/c-khhcw49343/BW+%252528SAP+Business+Warehouse%252529/pd-p/242586194391178517100436979900901" class="lia-product-mention" data-product="1-1">BW (SAP Business Warehouse)</a>, <a href="https://community.sap.com/t5/c-khhcw49343/SAP+HANA/pd-p/73554900100700000996" class="lia-product-mention" data-product="639-1">SAP HANA</a>, or <a href="https://community.sap.com/t5/c-khhcw49343/SAP+Datasphere/pd-p/73555000100800002141" class="lia-product-mention" data-product="16-1">SAP Datasphere</a>, they enable self-service analytics. These semantic layers need to be redesigned or fudged when building analytics with a non-SAP toolset. Competing semantic layers often lack the flexibility and power already provided by SAP BusinessObjects. The time and effort involved in reinventing the semantic layer should be carefully estimated before reaching for another analytics tool. 
And in the case of the universe, it provides a layer of isolation between the data model and the user. This isolation is why organizations can move their enterprise data warehouse from an on-premise platform to a cloud platform, carefully updating the existing universe with zero disruption to the user community.</P><H1 id="toc-hId-1381914370">Scheduling and Publishing</H1><P>The second&nbsp;reason SAP BusinessObjects still matters in 2025 is its scheduling and publishing capabilities. Information distribution is often an immature feature in "modern" analytics tools, whose vendors are quietly introducing "new" features like (wait for it) tables for users who prefer a "classic" BI experience. Of course, third parties are bringing extensions to fill these gaps. But getting the data into a table is only part of the solution: being able to burst the information in an efficient and personalized way is equally important.</P><H1 id="toc-hId-1185400865">Data Mode</H1><P>The third reason SAP BusinessObjects still matters in 2025 is its new Data Mode. Introduced in BI 4.3 and enhanced in BI 2025, Data Mode brings self-service data blending and data cleansing to <a href="https://community.sap.com/t5/c-khhcw49343/SAP+BusinessObjects+-+Web+Intelligence+%252528WebI%252529/pd-p/907900296036854683333078008146613" class="lia-product-mention" data-product="1055-1">SAP BusinessObjects - Web Intelligence (WebI)</a>&nbsp;users without introducing a separate product or vendor for that purpose. And Data Mode isn't simply a feature. It's a sign of SAP's renewed commitment to the SAP BusinessObjects platform. As witnessed in last month's release of SAP BusinessObjects Business Intelligence 2025, SAP has done far more than just slap a new About Box on an aging product. From Fiori to UI5 charting to Data Mode to cloud database support, the SAP BusinessObjects platform has been thoroughly modernized where it really counts.</P><H1 id="toc-hId-988887360">Conclusion</H1><P>It's not wrong to desire to pick the best tool for the job, or to have multiple tools to choose from. But analytics leaders need to look honestly at some of the poorly designed, poorly performing deliverables that have been created on their watch with the "latest and greatest" tools. And they need to reconsider the total cost of ownership and return on investment&nbsp;of a familiar yet thoroughly modern solution that they are often tempted to overlook.</P> 2025-04-07T17:00:00.028000+02:00 https://community.sap.com/t5/technology-blog-posts-by-sap/hana-cloud-s-vector-embedding-in-cap-and-comparison-with-openai-embedding/ba-p/14068468 HANA Cloud’s VECTOR EMBEDDING in CAP and Comparison with OpenAI Embedding 2025-04-09T03:12:15.016000+02:00 lalitmohan https://community.sap.com/t5/user/viewprofilepage/user-id/1038 <H2 id="toc-hId-1510996134"><STRONG>Introduction</STRONG></H2><P>In the Q1 2024 release of SAP HANA Cloud, the <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/real-vector-data-type" target="_self" rel="noopener noreferrer">REAL_VECTOR</A> datatype was introduced to address the increasing demand for efficient storage and processing of high-dimensional vector embeddings. 
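<P>As a quick orientation before the CAP examples below, a minimal plain-SQL sketch of the datatype (the table and column names here are hypothetical; REAL_VECTOR optionally takes a fixed dimension):</P><pre class="lia-code-sample language-sql"><code>-- Hypothetical table with a fixed-dimension vector column.
CREATE COLUMN TABLE BOOK_EMBEDDINGS (
  ID        INTEGER PRIMARY KEY,
  DESCR     NVARCHAR(5000),
  EMBEDDING REAL_VECTOR(1536)  -- e.g. the dimension of text-embedding-ada-002
);</code></pre>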
These embeddings are key in a variety of AI and machine learning applications, particularly in tasks such as semantic understanding and similarity search.</P><P>For developers using the SAP Cloud Application Programming (<A href="https://cap.cloud.sap/docs/" target="_self" rel="nofollow noopener noreferrer">CAP</A>) model, the <A href="https://www.npmjs.com/package/@cap-js/hana" target="_self" rel="nofollow noopener noreferrer">@cap-js/hana</A> package is the recommended tool for connecting to SAP HANA Cloud and utilizing its vector engine capabilities. In practice, we can define entities in our <STRONG>CDS data model</STRONG> that include elements of type <A href="https://cap.cloud.sap/docs/java/cds-data" target="_self" rel="nofollow noopener noreferrer">cds.Vector</A>. Once deployed, these elements are automatically mapped to the <STRONG>REAL_VECTOR</STRONG> datatype in the underlying SAP HANA table.</P><pre class="lia-code-sample language-javascript"><code>entity Books : managed {
  key ID : Integer;
  title  : String(111) @mandatory;
  descr  : String(5000);
  embedding : Vector
}</code></pre><P><A href="https://help.sap.com/docs/sap-ai-core/sap-ai-core-service-guide/what-is-sap-ai-core" target="_blank" rel="noopener noreferrer">SAP AI Core</A> and the <A href="https://community.sap.com/t5/technology-blogs-by-sap/cap-llm-plugin-empowering-developers-for-rapid-gen-ai-cap-app-development/ba-p/13667606" target="_blank">CAP LLM Plugin</A> collaborate to bring advanced AI capabilities, focusing on large language models (LLMs) and vector embeddings, into CAP applications.</P><P>Custom handlers, written in NodeJS or Java, are used within CAP applications to extend and customize the framework's default behaviour. Specifically, custom handlers are utilized for tasks like embedding generation and other AI-driven operations.</P><pre class="lia-code-sample language-javascript"><code>getEmbedding = async (content) =&gt; {
  const vectorPlugin = await cds.connect.to("cap-llm-plugin");
  const embeddingModelConfig = await cds.env.requires["gen-ai-hub"]["text-embedding-ada-002"];
  const embeddingGenResp = await vectorPlugin.getEmbeddingWithConfig(embeddingModelConfig, content);
  return embeddingGenResp?.data[0]?.embedding;
};

this.after(["CREATE", "UPDATE"], "Books", async (res) =&gt; {
  const embedding = await this.getEmbedding(res.descr);
  await UPDATE(Books, res.ID).with({
    embedding: embedding,
  });
});</code></pre><P>Once the embeddings are stored in the <STRONG>REAL_VECTOR</STRONG> columns, they enable similarity searches, such as <STRONG>COSINE_SIMILARITY</STRONG> and <STRONG>L2DISTANCE</STRONG>. 
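<P>In plain SQL, such a search looks like the following sketch (it reuses the hypothetical BOOK_EMBEDDINGS table from above; the query-vector literal is abbreviated for readability):</P><pre class="lia-code-sample language-sql"><code>-- Rank rows by similarity to a query vector (vector literal abbreviated).
SELECT ID,
       COSINE_SIMILARITY(EMBEDDING, TO_REAL_VECTOR('[0.1, 0.2, 0.3]')) AS SIMILARITY,
       L2DISTANCE(EMBEDDING, TO_REAL_VECTOR('[0.1, 0.2, 0.3]'))        AS DISTANCE
  FROM BOOK_EMBEDDINGS
 ORDER BY SIMILARITY DESC
 LIMIT 5;</code></pre>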
These searches allow for measuring the relationships between vectors, leading to more accurate data analysis and comparisons.</P><pre class="lia-code-sample language-javascript"><code>getSimilaritySearchForAlgorithm = async (searchWord, algoName) =&gt; {
  const vectorPlugin = await cds.connect.to("cap-llm-plugin");
  const embedding = await this.getEmbedding(searchWord);
  // Look up the Books entity so its elements can be passed to the plugin
  const entity = cds.entities["Books"];
  const similaritySearchResults = await vectorPlugin.similaritySearch(
    entity.toString().toUpperCase(),
    entity.elements["embedding"].name,
    entity.elements["descr"].name,
    embedding,
    algoName,
    5
  );
  return similaritySearchResults;
};</code></pre><P>The <STRONG>Q4 2024</STRONG> release of SAP HANA Cloud introduced the <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/vector-embedding-function-vector" target="_self" rel="noopener noreferrer">VECTOR_EMBEDDING</A> functionality, which allows for the generation of vector embeddings directly within the database. The available model is <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/vector-embedding-function-vector?locale=en-US#available-models-and-versions" target="_blank" rel="noopener noreferrer">SAP_NEB.20240715</A>, featuring a vector dimension of <STRONG>768</STRONG> and a token limit of <STRONG>256</STRONG>. It supports the following languages: <STRONG>German (de), English (en), Spanish (es), French (fr),</STRONG> and <STRONG>Portuguese (pt)</STRONG>. This is a significant advancement because, previously, embedding generation often relied on external services like SAP AI Core or CAP LLM Plugin.</P><P>In this blog post, I’ll show you how easy it is to use this new feature in CAP, enabling more efficient and seamless management of vector embeddings within the SAP HANA Cloud ecosystem.</P><H2 id="toc-hId-1314482629"><STRONG>Prerequisite</STRONG></H2><P>During the SAP HANA Cloud instance provisioning and configuration process, the option to enable advanced settings for Natural Language Processing (NLP) is available. This allows for the use of text embedding and text analysis models. 
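<P>Once the NLP option is active, a quick sanity check from the SQL console confirms that the function is available. A minimal sketch (the input text is arbitrary; TO_NVARCHAR is used here only to render the vector as text):</P><pre class="lia-code-sample language-sql"><code>-- Verify the in-database embedding function works
-- (SAP_NEB.20240715 should return a 768-dimension REAL_VECTOR).
SELECT TO_NVARCHAR(
         VECTOR_EMBEDDING('Hello SAP HANA Cloud', 'DOCUMENT', 'SAP_NEB.20240715')
       ) AS EMBEDDING
  FROM DUMMY;</code></pre>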
The NLP capability is also available in the SAP HANA Cloud trial.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="2025-04-03_10-58-24.png" style="width: 562px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247821iE95720682A63B2A3/image-dimensions/562x327?v=v2" width="562" height="327" role="button" title="2025-04-03_10-58-24.png" alt="2025-04-03_10-58-24.png" /></span></P><H4 id="toc-hId-1179621057"><SPAN>Let's get Started.</SPAN></H4><H3 id="toc-hId-854024833"><STRONG>Defining Entities with Vector Data Types</STRONG></H3><P>As mentioned above, we can define entities in our <STRONG>CDS data model</STRONG> that include elements of type <STRONG>cds.Vector</STRONG>; once deployed, these elements are automatically mapped to the <STRONG>REAL_VECTOR</STRONG> datatype.</P><P><A href="https://cap.cloud.sap/docs/cds/cdl#calculated-elements" target="_self" rel="nofollow noopener noreferrer">Calculated elements</A> in CAP enable you to define fields in your data model whose values are automatically derived from other elements or expressions. <A href="https://cap.cloud.sap/docs/cds/cdl#on-write" target="_self" rel="nofollow noopener noreferrer">"On-write" (stored)</A> calculated elements are computed when the entity is created or updated, and their values are then stored in the database.</P><P>So, when defining entities with a vector data type, you can set up the embedding as a calculated field. This ensures that the embedding is computed and stored in the database whenever the corresponding field is created or updated.</P><pre class="lia-code-sample language-javascript"><code>entity Books : managed {
  key ID : Integer;
  title  : String(111) @mandatory;
  descr  : String(5000);
  embedding : Vector = VECTOR_EMBEDDING(descr, 'DOCUMENT', 'SAP_NEB.20240715') stored;
}</code></pre><P>To quickly check the HANA-specific SQL commands that would create your database tables based on your data model, use <STRONG>cds compile</STRONG>. It shows you the generated SQL DDL statements.</P><pre class="lia-code-sample language-bash"><code>cds compile db/schema --to sql --dialect hana</code></pre><P>By examining the SQL generated by&nbsp;<STRONG>cds compile</STRONG>, you can see how text embeddings are automatically generated during data input and updates. 
This process is similar to what is described in the SAP HANA documentation (Reference: <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/creating-text-embeddings-in-sap-hana-cloud?locale=en-US#how-to-automatically-create-vector-embeddings" target="_blank" rel="noopener noreferrer">How to Automatically Create Vector Embeddings</A>). <STRONG>Please note:</STRONG> while the generated columns approach is simpler, it does not support referencing NCLOB columns.</P><pre class="lia-code-sample language-sql"><code>COLUMN TABLE cap_vector_embedding_Books (
  createdAt  TIMESTAMP,
  createdBy  NVARCHAR(255),
  modifiedAt TIMESTAMP,
  modifiedBy NVARCHAR(255),
  ID         INTEGER NOT NULL,
  title      NVARCHAR(111),
  descr      NVARCHAR(5000),
  embedding  REAL_VECTOR GENERATED ALWAYS AS (VECTOR_EMBEDDING(descr, 'DOCUMENT', 'SAP_NEB.20240715')),
  PRIMARY KEY(ID)
)</code></pre><H3 id="toc-hId-657511328"><STRONG>Define Entity to Perform Similarity Search</STRONG></H3><P><A href="https://cap.cloud.sap/docs/cds/cdl#exposed-entities" target="_self" rel="nofollow noopener noreferrer">Exposing entities as views with parameters</A> is a key feature of SAP CDS that enhances data modelling flexibility, reusability, and efficient data retrieval. This allows you to easily create an entity designed to perform similarity searches, such as <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/cosine-similarity-function-vector?locale=en-US" target="_self" rel="noopener noreferrer">COSINE_SIMILARITY</A> and <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/l2distance-function-vector?locale=en-US" target="_self" rel="noopener noreferrer">L2DISTANCE</A>, to assess the relationships between vectors.</P><pre class="lia-code-sample language-javascript"><code>entity Search(query : String) as
  select from db.Books {
    ID,
    title,
    descr,
    :query as searchWord : String,
    cosine_similarity(
      embedding,
      to_real_vector(vector_embedding(:query, 'QUERY', 'SAP_NEB.20240715'))
    ) as cosine_similarity : String,
    l2distance(
      embedding,
      to_real_vector(vector_embedding(:query, 'QUERY', 'SAP_NEB.20240715'))
    ) as l2distance : String,
  }
  order by cosine_similarity desc
  limit 5;</code></pre><P>Similarly, to quickly check the HANA-specific SQL commands that would create your database view based on your data model, use <STRONG>cds compile</STRONG>. 
It shows you the generated SQL DDL statements.</P><pre class="lia-code-sample language-bash"><code>cds compile srv/service --to sql --dialect hana</code></pre><P>By reviewing the generated SQL DDL statements for creating a VIEW, you can see how the view computes the cosine similarity and L2 distance between the embedding column and the parameterized query on the fly. This process is similar to what is described in the SAP HANA documentation (Reference: <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/creating-text-embeddings-in-sap-hana-cloud?locale=en-US#how-to-write-queries" target="_blank" rel="noopener noreferrer">How to Write Queries</A>).</P><pre class="lia-code-sample language-sql"><code>VIEW EmbeddingStorageService_Search(IN query NVARCHAR(5000)) AS
SELECT
  Books_0.ID,
  Books_0.title,
  Books_0.descr,
  :QUERY AS searchWord,
  cosine_similarity(Books_0.embedding, to_real_vector(vector_embedding(:QUERY, 'QUERY', 'SAP_NEB.20240715'))) AS cosine_similarity,
  l2distance(Books_0.embedding, to_real_vector(vector_embedding(:QUERY, 'QUERY', 'SAP_NEB.20240715'))) AS l2distance
FROM cap_vector_embedding_Books AS Books_0
ORDER BY cosine_similarity DESC
LIMIT 5</code></pre><H3 id="toc-hId-460997823"><STRONG>Finally, the service.cds looks like:</STRONG></H3><pre class="lia-code-sample language-javascript"><code>using {cap.vector.embedding as db} from '../db/schema';

service EmbeddingStorageService {
  entity Books as projection on db.Books excluding { embedding };

  entity Search(query : String) as
    select from db.Books {
      ID,
      title,
      descr,
      :query as query : String,
      cosine_similarity(
        embedding,
        to_real_vector(vector_embedding(:query, 'QUERY', 'SAP_NEB.20240715'))
      ) as cosine_similarity : String,
      l2distance(
        embedding,
        to_real_vector(vector_embedding(:query, 'QUERY', 'SAP_NEB.20240715'))
      ) as l2distance : String,
    }
    order by cosine_similarity desc
    limit 5;
}</code></pre><H3 id="toc-hId-264484318"><STRONG>Test &amp; Output:</STRONG></H3><P>You can use the REST Client extension in Visual Studio Code to execute HTTP requests defined in .http files. To test the CAP services, create a new file with the .http extension and copy and paste the following code:</P><pre class="lia-code-sample language-json"><code>@server = http://localhost:4004/odata/v4/embedding-storage

### post books
# post_book
POST {{server}}/Books
Content-Type: application/json

{
  "ID": 252,
  "title": "Wuthering Heights",
  "descr": "Wuthering Heights, Emily Brontë's only novel, was published in 1847 under the pseudonym \"Ellis Bell\". It was written between October 1845 and June 1846. Wuthering Heights and Anne Brontë's Agnes Grey were accepted by publisher Thomas Newby before the success of their sister Charlotte's novel Jane Eyre. After Emily's death, Charlotte edited the manuscript of Wuthering Heights and arranged for the edited version to be published as a posthumous second edition in 1850." 
}

### Similarity Search
# similarity_search
@query = Catweazle British

GET {{server}}/Search(query='{{query}}')/Set</code></pre><P>By using the first POST request, you'll be able to create a new record in the Books entity.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2025-04-02_16-15-52.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247822i7E2A382B68C40352/image-size/large?v=v2&amp;px=999" role="button" title="2025-04-02_16-15-52.png" alt="2025-04-02_16-15-52.png" /></span></P><P>When a new book record is added, any stored calculated column has its value determined by an associated expression. This expression can utilize other book properties, constants, and functions. The outcome of this calculation is immediately persisted in the stored column of the book's database record.</P><P>In this case, it evaluates <STRONG>VECTOR_EMBEDDING</STRONG> on the description using the '<A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/vector-embedding-function-vector?locale=en-US#available-models-and-versions" target="_blank" rel="noopener noreferrer">SAP_NEB.20240715</A>' model and stores the result directly in the Books table.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2025-04-02_16-24-08.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247823iE767E611BA4E4E9A/image-size/large?v=v2&amp;px=999" role="button" title="2025-04-02_16-24-08.png" alt="2025-04-02_16-24-08.png" /></span></P><P>Following the same approach, adding further records will trigger the generation of embeddings for each new entry. For this blog post, we will utilize the<A href="https://github.com/SAP-samples/cloud-cap-samples/blob/main/bookshop/db/data/sap.capire.bookshop-Books.csv" target="_blank" rel="noopener nofollow noreferrer">&nbsp;Books.csv</A>&nbsp;file provided in the&nbsp;<STRONG>SAP-samples/cloud-cap-samples</STRONG>&nbsp;repository.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2025-04-02_17-06-00.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247824i880370978C7B409A/image-size/large?v=v2&amp;px=999" role="button" title="2025-04-02_17-06-00.png" alt="2025-04-02_17-06-00.png" /></span></P><P>To enable similarity searches, the entities are accessible via a view that accepts a parameter. The output of this view includes the <STRONG>COSINE_SIMILARITY</STRONG> and <STRONG>L2DISTANCE</STRONG> scores, which indicate the degree of similarity between the input query and the Book description vectors.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2025-04-03_09-15-24.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247825i4DB9330380BC5427/image-size/large?v=v2&amp;px=999" role="button" title="2025-04-03_09-15-24.png" alt="2025-04-03_09-15-24.png" /></span></P><P>We can observe that creating an entity with an embedding field and performing similarity searches by exposing it as a view is a straightforward approach that avoids the need for custom handlers. 
Nevertheless, the necessity of implementing custom handlers is entirely dictated by the specific demands of your project.</P><P><A href="https://github.com/alphageek7443/cap-vector-embedding-sample/tree/main" target="_blank" rel="noopener nofollow noreferrer">SOURCE CODE</A></P><H2 id="toc-hId--61111906"><STRONG>Comparing HANA and Azure OpenAI Text Embeddings</STRONG></H2><P>As discussed earlier in this blog, before SAP HANA Cloud introduced its native VECTOR_EMBEDDING functionality, developers within the SAP ecosystem had to rely on external providers for text embeddings. A key provider in this space is Azure OpenAI, which offers the text-embedding-ada-002 model, widely used for its strong balance between performance and cost-effectiveness. Let’s compare the embeddings generated by HANA’s native VECTOR_EMBEDDING with those produced by Azure OpenAI’s text-embedding-ada-002 model.</P><H3 id="toc-hId--203774061"><STRONG>Update Entity Definitions</STRONG></H3><P>We'll add a new embedding_openai element of type cds.Vector to the Books entity definition, which will be used to store embeddings from OpenAI’s text-embedding-ada-002 model.</P><pre class="lia-code-sample language-javascript"><code>entity Books : managed {
  key ID : Integer;
  title  : String(111) @mandatory;
  descr  : String(5000);
  embedding : Vector = VECTOR_EMBEDDING(descr, 'DOCUMENT', 'SAP_NEB.20240715') stored;
  embedding_openai : Vector;
}</code></pre><P>We will also update the entity to enable similarity search by adding two new elements to store the cosine similarity and L2 distance calculated using OpenAI.</P><pre class="lia-code-sample language-javascript"><code>entity Search(query : String) as
  select from db.Books {
    ID,
    title,
    CONCAT(SUBSTRING(descr, 0, 50), '...') as description,
    :query as query : String,
    cosine_similarity(
      embedding,
      to_real_vector(vector_embedding(:query, 'QUERY', 'SAP_NEB.20240715'))
    ) as cosine_similarity : String,
    l2distance(
      embedding,
      to_real_vector(vector_embedding(:query, 'QUERY', 'SAP_NEB.20240715'))
    ) as l2distance : String,
    0.0 as cosine_similarity_openai : String,
    0.0 as l2distance_openai : String
  }
  order by cosine_similarity desc
  limit 5;</code></pre><H3 id="toc-hId--400287566"><STRONG>Add Custom Event Handler</STRONG></H3><P>Until now, we haven't created a handler for entities, as it wasn't necessary for the SAP HANA Cloud native <STRONG>VECTOR_EMBEDDING</STRONG> functionality. However, to retrieve the embedding from OpenAI and to retrieve the results of a similarity search based on the <STRONG>COSINE_SIMILARITY</STRONG> and <STRONG>L2DISTANCE</STRONG>&nbsp;algorithms, we now need to create custom handlers.</P><pre class="lia-code-sample language-javascript"><code>module.exports = class EmbeddingStorageService extends cds.ApplicationService {
  init() {
    const { Books, Search } = cds.entities('EmbeddingStorageService');

    this.before(['CREATE', 'UPDATE'], Books, async (req) =&gt; {
      console.log('Before CREATE/UPDATE Books', req.data);
    });

    this.after('READ', Search, async (search, req) =&gt; {
      console.log('After READ Search', search);
    });

    return super.init();
  }
}</code></pre><H3 id="toc-hId--596801071"><STRONG>Generating Embeddings with the CAP LLM Plugin</STRONG></H3><P>As mentioned earlier, the <STRONG>CAP LLM Plugin</STRONG> is commonly used for generating embeddings. 
To generate a vector for the content, we need to connect to the cap-llm-plugin with the appropriate model configuration in order to retrieve the embedding.</P><pre class="lia-code-sample language-javascript"><code>getEmbedding = async (content) =&gt; {
  const vectorPlugin = await cds.connect.to("cap-llm-plugin");
  const embeddingModelConfig = await cds.env.requires["gen-ai-hub"]["text-embedding-ada-002"];
  const embeddingGenResp = await vectorPlugin.getEmbeddingWithConfig(embeddingModelConfig, content);
  return embeddingGenResp?.data[0]?.embedding;
};</code></pre><H3 id="toc-hId--793314576"><STRONG>Performing Similarity Searches</STRONG></H3><P>The CAP LLM Plugin also provides APIs for executing similarity searches based on the entity structure of the CAP model. The plugin supports the&nbsp;<STRONG>COSINE_SIMILARITY</STRONG> and <STRONG>L2DISTANCE</STRONG>&nbsp;algorithms and is intended to support additional algorithms as they become available. The following code snippets highlight the steps involved in retrieving the results of a similarity search based on the <STRONG>COSINE_SIMILARITY</STRONG> and <STRONG>L2DISTANCE</STRONG>&nbsp;algorithms.</P><pre class="lia-code-sample language-javascript"><code>getSimilaritySearch = async (searchWord) =&gt; {
  var [cosineSimilarities, l2Distances] = await Promise.all([
    this.getSimilaritySearchForAlgorithm(searchWord, "COSINE_SIMILARITY"),
    this.getSimilaritySearchForAlgorithm(searchWord, "L2DISTANCE"),
  ]);
  return cosineSimilarities.map((item, i) =&gt;
    Object.assign({}, item, l2Distances[i])
  );
};

getSimilaritySearchForAlgorithm = async (searchWord, algoName) =&gt; {
  const vectorPlugin = await cds.connect.to("cap-llm-plugin");
  const embedding = await this.getEmbedding(searchWord);
  const entity = cds.entities["Books"];
  const similaritySearchResults = await vectorPlugin.similaritySearch(
    entity.toString().toUpperCase(),
    entity.elements["embedding_openai"].name,
    entity.elements["descr"].name,
    embedding,
    algoName,
    5
  );
  return similaritySearchResults.map(result =&gt; {
    return Object.assign({}, {
      [algoName.toLowerCase() + "_openai"]: result.SCORE,
      "ID": result.ID
    });
  });
};</code></pre><H3 id="toc-hId--989828081"><STRONG>Update Custom Event Handler</STRONG></H3><P>Finally, we can update the handlers using the following code snippet. 
Whenever a book is created or updated, the handler generates an embedding and updates the Books entity with the vector data. During a read operation on the Search entity, it performs both similarity searches and updates the returned records accordingly.</P><pre class="lia-code-sample language-javascript"><code>this.after(['CREATE', 'UPDATE'], Books, async (req) =&gt; {
  const embedding = await this.getEmbedding(req.descr);
  await UPDATE(Books, req.ID).with({
    embedding_openai: embedding,
  });
});

this.after('READ', Search, async (searches, req) =&gt; {
  const [{ query: query }] = req.params;
  const scores = await this.getSimilaritySearch(query);
  searches.map((search) =&gt; {
    const score = scores.find((score) =&gt; score.ID === search.ID);
    search.cosine_similarity_openai = score.cosine_similarity_openai;
    search.l2distance_openai = score.l2distance_openai;
  });
  return searches;
});</code></pre><H3 id="toc-hId--1186341586"><STRONG>Test &amp; Output:</STRONG></H3><P>Now test it again: create a new file with the<STRONG> .http extension</STRONG> and copy and paste the following code:</P><pre class="lia-code-sample language-javascript"><code>@server = http://localhost:4004/odata/v4/embedding-storage

### update books
# update_book
PATCH {{server}}/Books(ID = 271)
Content-Type: application/json

{
  "title": "Catweazle",
  "descr": "Catweazle is a British fantasy television series, starring Geoffrey Bayldon in the title role, and created by Richard Carpenter for London Weekend Television. The first series, produced and directed by Quentin Lawrence, was screened in the UK on ITV in 1970. The second series, directed by David Reid and David Lane, was shown in 1971. Each series had thirteen episodes, most but not all written by Carpenter, who also published two books based on the scripts."
}

### Similarity Search
# similarity_search
@query = describe me the British television series Catweazle

GET {{server}}/Search(query='{{query}}')/Set</code></pre><P>In the first call, you update an existing record in the Books entity by sending a&nbsp;<STRONG>PATCH</STRONG> request to the specific record's endpoint. Because we implemented a custom handler for the after-<STRONG>UPDATE</STRONG> event, the custom logic runs after the book record has been updated in the database.&nbsp;In this case, it generates a vector embedding with the OpenAI embedding model and updates the Books entity with the vector data.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2025-04-04_13-58-58.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247826i0C681D8BC58E6139/image-size/large?v=v2&amp;px=999" role="button" title="2025-04-04_13-58-58.png" alt="2025-04-04_13-58-58.png" /></span></P><P>To perform similarity searches leveraging the OpenAI embedding model, we've implemented a custom handler that executes after the Search entity is read in our service. 
This handler, triggered by accessing a parameterized view, calculates the <STRONG>COSINE_SIMILARITY</STRONG> and <STRONG>L2DISTANCE</STRONG> to determine the relationship between the search query and the book description vectors, subsequently modifying the results object.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="2025-04-04_13-56-43.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247827i4678F39D4363D5C7/image-size/large?v=v2&amp;px=999" role="button" title="2025-04-04_13-56-43.png" alt="2025-04-04_13-56-43.png" /></span></P><P><A href="https://github.com/alphageek7443/cap-vector-embedding-sample/tree/openai_embeddings" target="_blank" rel="noopener nofollow noreferrer">SOURCE CODE</A></P><H2 id="toc-hId--1089452084"><STRONG>Analysis of the Model Comparison:</STRONG></H2><P>While <STRONG>cosine similarity</STRONG> focuses on the angle between vectors and <STRONG>L2 distance</STRONG> focuses on the magnitude of the difference, the two are related. For vectors normalized to unit length (as many embedding models produce), the relationship is exact: the squared L2 distance equals 2 × (1 − cosine similarity). More generally, if two vectors have a small angle between them (high cosine similarity), their difference vector will likely have a smaller magnitude (low L2 distance). Conversely, if the angle is large (low cosine similarity), the difference vector will likely have a larger magnitude (high L2 distance).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="lalitmohan_0-1744076292878.png" style="width: 505px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247828iBECB539401B1B8BD/image-dimensions/505x308?v=v2" width="505" height="308" role="button" title="lalitmohan_0-1744076292878.png" alt="lalitmohan_0-1744076292878.png" /></span></P><P>However, it's important to remember that this relationship isn't always perfectly strict. Factors like the dimensionality and scaling of the vector space can introduce some variations. The slight variations in the trends for the HANA embedding model indicate that while the inverse relationship may not be perfectly linear in this specific dataset, it still demonstrates a meaningful and insightful pattern.</P><P>&nbsp;</P><TABLE width="663"><TBODY><TR><TD width="227.625px" height="30px"><STRONG>Metric</STRONG></TD><TD width="206.688px" height="30px"><STRONG>Hana (Text Embedding)</STRONG></TD><TD width="227.688px" height="30px"><STRONG>OpenAI (Text Embedding)</STRONG></TD></TR><TR><TD width="227.625px" height="30px"><STRONG>Mean Cosine Similarity</STRONG></TD><TD width="206.688px" height="30px">0.464</TD><TD width="227.688px" height="30px">0.7285</TD></TR><TR><TD width="227.625px" height="30px"><STRONG>Std Dev Cosine Similarity</STRONG></TD><TD width="206.688px" height="30px">0.1732</TD><TD width="227.688px" height="30px">0.0914</TD></TR><TR><TD width="227.625px" height="30px"><STRONG>Mean L2 Distance</STRONG></TD><TD width="206.688px" height="30px">0.9673</TD><TD width="227.688px" height="30px">0.7254</TD></TR><TR><TD width="227.625px" height="30px"><STRONG>Std Dev L2 Distance</STRONG></TD><TD width="206.688px" height="30px">0.185</TD><TD width="227.688px" height="30px">0.145</TD></TR></TBODY></TABLE><P><SPAN>The graph above shows a snapshot of the comparison of similarity metrics based on the dataset we used. </SPAN><SPAN>The key parameters compared are mean cosine similarity (0.7285 vs. 0.464), mean L2 distance (0.7254 vs. 0.9673), and standard deviations for both cosine similarity (0.0914 vs. 0.1732) and L2 distance (0.145 vs. 0.185). 
These are some of the parameters to analyze when deciding which embedding model to prefer. However, it's important to note that, without understanding the specific task and desired outcome of the models, it's not possible to determine which model is "better" based solely on this comparison of similarity metrics.</SPAN></P><H3 id="toc-hId--1579368596"><STRONG>Conclusion:</STRONG></H3><P>Despite the differences in overall scores, there are potential positive aspects or situations where the <STRONG>HANA embedding model</STRONG> could be considered:</P><OL><LI><STRONG>Potentially Simpler and Faster:</STRONG> If it has a significantly simpler architecture than the OpenAI embedding model, it might be faster to train and generate embeddings. In resource-constrained environments or applications where speed is critical and a slight drop in accuracy is acceptable, the HANA embedding model could be a viable option.</LI><LI><STRONG>Potentially Lower Computational Cost:</STRONG> A simpler model often translates to lower computational cost for both training and inference. If your infrastructure has limitations, the HANA embedding model might be more feasible to deploy and run at scale.</LI><LI><STRONG>Potentially More Interpretable (Depending on the Model):</STRONG> If it is based on a more interpretable architecture, it might be easier to understand <EM>why</EM> it produces certain embeddings, which can be valuable for debugging and analysis.</LI><LI><STRONG>Performance on Specific Data Subsets:</STRONG> It's possible that on a very specific subset of the data it might perform comparably or even slightly better than OpenAI. This would require a more granular analysis of the performance across different categories or types of text within your dataset. In this specific small dataset, this isn't evident, but it's a possibility in larger, more diverse datasets.</LI></OL><H2 id="toc-hId--1482479094">Reference &amp; Further Reading</H2><UL><LI><A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/sap-hana-cloud-sap-hana-database-vector-engine-guide" target="_blank" rel="noopener noreferrer">SAP HANA Cloud, SAP HANA Database Vector Engine Guide</A></LI><LI><A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/creating-text-embeddings-in-sap-hana-cloud" target="_blank" rel="noopener noreferrer">Creating Text Embeddings in SAP HANA Cloud</A></LI><LI><A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-vector-engine-guide/performing-similarity-searches" target="_blank" rel="noopener noreferrer">Performing Similarity Searches</A></LI><LI><A href="https://cap.cloud.sap/docs/cds/cdl#calculated-elements" target="_blank" rel="noopener nofollow noreferrer">SAP CAP - Calculated Elements</A></LI><LI><A href="https://cap.cloud.sap/docs/cds/cdl#exposed-entities" target="_blank" rel="noopener nofollow noreferrer">SAP CAP - Exposed Entities</A></LI></UL><P>&nbsp;</P> 2025-04-09T03:12:15.016000+02:00 https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/selective-data-migration-with-mix-and-match-conversion-strategy-to-sap-s/ba-p/14069316 Selective Data Migration with Mix and Match Conversion Strategy to SAP S/4HANA 2025-04-10T16:52:28.809000+02:00 Nitin19 https://community.sap.com/t5/user/viewprofilepage/user-id/1676686 <P><STRONG>Why Convert to S/4HANA?</STRONG></P><P>&nbsp;</P><P><STRONG>End of ECC Support</STRONG></P><P>SAP will end 
mainstream support for ECC by 2027. This means no more updates, patches, or bug fixes, making it crucial to migrate to stay current and secure. Without regular updates and support, businesses using ECC will face increased security risks and potential operational disruptions.</P><P><STRONG>Digital Transformation</STRONG></P><P>S/4HANA supports modern digital business processes with advanced technologies like AI, machine learning, and IoT. These capabilities enable businesses to automate routine tasks, gain deeper insights from their data, and innovate more rapidly. By adopting S/4HANA, companies can stay ahead in the digital transformation journey.</P><P><STRONG>Performance</STRONG></P><P>The in-memory database of SAP HANA offers significantly faster data processing and real-time analytics. This enhanced performance allows businesses to make quicker, data-driven decisions, improve operational efficiency, and enhance customer experiences. The ability to process large volumes of data in real-time is a game-changer in today’s fast-paced business environment.</P><P><STRONG>Competitive Edge</STRONG></P><P>Migrating to S/4HANA ensures your business remains competitive by leveraging the latest innovations and capabilities. Staying on the cutting edge of technology provides organizations with the tools needed to outperform competitors, adapt to market changes swiftly, and meet evolving customer demands. Embracing S/4HANA is not just a technical upgrade; it’s a strategic move towards sustained growth and competitiveness.</P><P>&nbsp;</P><P><STRONG>What is the Conversion?</STRONG></P><P>The conversion from SAP ECC to S/4HANA involves transitioning from the traditional ECC system to the next-generation S/4HANA system, which is built on the SAP HANA in-memory database. 
This process includes migrating data, adapting custom code, and configuring new functionalities.</P><P>&nbsp;</P><P><STRONG>Some Advantages of Converting to S/4HANA</STRONG></P><P>Unlocking the Full Potential of Modern Business Operations</P><OL><LI><STRONG>Improved Performance:</STRONG><UL><LI>Real-Time Analytics: The in-memory database architecture of SAP HANA enables real-time data processing and analytics.</LI><LI>Faster Transactions: Enhanced processing speeds lead to quicker transaction times and improved overall system performance.</LI></UL></LI><LI><STRONG>Simplified Data Model:</STRONG><UL><LI>Reduced Complexity: S/4HANA's simplified data model eliminates redundant tables, reducing data footprint and maintenance.</LI><LI>Integrated Analytics: Combines transactional and analytical data, providing a unified view of business operations.</LI></UL></LI><LI><STRONG>Enhanced User Experience:</STRONG><UL><LI>SAP Fiori: Offers a modern, intuitive, and customizable user interface, improving user productivity and satisfaction.</LI><LI>Mobile Access: Enables access to business processes from any device, enhancing flexibility and responsiveness.</LI></UL></LI><LI><STRONG>Advanced Capabilities:</STRONG><UL><LI>Embedded Analytics: Integrates advanced analytics and machine learning capabilities directly into business processes.</LI><LI>Future-Proofing: Ensures compatibility with future SAP innovations and updates, supported until at least 2040.</LI></UL></LI></OL><P>&nbsp;</P><P><STRONG>Options to move to SAP S/4HANA</STRONG></P><P>If you want to migrate to SAP S/4HANA on premise, you have the following three options:</P><UL><LI>System Conversion ("Brownfield-Approach")</LI><LI>New Implementation ("Greenfield-Approach")</LI><LI>Selective Data Transition to SAP S/4HANA</LI></UL><P><U>Comparison of Time and Complexity Features</U></P><TABLE><TBODY><TR><TD><P><STRONG>Feature</STRONG></P></TD><TD><P><STRONG>System Conversion (Brownfield)</STRONG></P></TD><TD><P><STRONG>New Implementation (Greenfield)</STRONG></P></TD><TD><P><STRONG>Selective Data Transition</STRONG></P></TD></TR><TR><TD><P>Implementation Time</P></TD><TD><P>Shorter</P></TD><TD><P>Longer</P></TD><TD><P>Moderate</P></TD></TR><TR><TD><P>Business Process Optimization</P></TD><TD><P>Limited</P></TD><TD><P>Extensive</P></TD><TD><P>Customizable</P></TD></TR><TR><TD><P>Data Migration</P></TD><TD><P>Full</P></TD><TD><P>None</P></TD><TD><P>Selective</P></TD></TR><TR><TD><P>Disruption to Operations</P></TD><TD><P>Minimal</P></TD><TD><P>High</P></TD><TD><P>Moderate</P></TD></TR><TR><TD><P>Customization Retention</P></TD><TD><P>Yes</P></TD><TD><P>No</P></TD><TD><P>Selective</P></TD></TR><TR><TD><P>Suitable for Established Processes</P></TD><TD><P>Yes</P></TD><TD><P>No</P></TD><TD><P>Yes</P></TD></TR></TBODY></TABLE><UL><LI><STRONG>System Conversion ("Brownfield-Approach")</STRONG></LI></UL><P>In the Brownfield Approach, you convert your existing system to an SAP S/4HANA system using the Software Update Manager for an in-place conversion. If your current system is not yet on SAP HANA, you can combine this approach with a database migration to SAP HANA. 
The process for the Brownfield approach is outlined below.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nitin19_0-1744102839084.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247983iEB2E2FFC70C50760/image-size/medium?v=v2&amp;px=400" role="button" title="Nitin19_0-1744102839084.png" alt="Nitin19_0-1744102839084.png" /></span></P><P>&nbsp;</P><UL><LI><STRONG>New Implementation ("Greenfield-Approach")</STRONG></LI></UL><P>The Greenfield approach in SAP implementation involves starting from scratch with a new system, rather than upgrading or converting an existing one. This method allows for a fresh start, enabling the design and implementation of new processes without the constraints of legacy systems. The process for the Greenfield approach is outlined below.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nitin19_1-1744102839093.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247984i74D0B5C15A9B81CB/image-size/medium?v=v2&amp;px=400" role="button" title="Nitin19_1-1744102839093.png" alt="Nitin19_1-1744102839093.png" /></span></P><P>&nbsp;</P><UL><LI><STRONG>Selective Data Transition to SAP S/4HANA</STRONG></LI></UL><P>The Selective Data Transition approach to SAP S/4HANA is a hybrid method that combines elements of both the Greenfield and Brownfield approaches. It allows organizations to selectively migrate data and processes from their existing SAP ERP systems to SAP S/4HANA. This approach is particularly useful for companies that want to retain certain historical data and customizations while redesigning other parts of their system. The Selective Data Transition to SAP S/4HANA comprises two sub-scenarios:</P><UL><LI>Shell Creation and Conversion</LI><LI>Mix and Match</LI></UL><P><STRONG><U>High-Level Conversion Process Lifecycle for Selective Data Transition (SDT) along with Shell System Conversion</U></STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nitin19_2-1744102839108.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247985i934F89ABCF004B00/image-size/medium?v=v2&amp;px=400" role="button" title="Nitin19_2-1744102839108.png" alt="Nitin19_2-1744102839108.png" /></span></P><P>&nbsp;</P><OL><LI>Shell System Understanding</LI></OL><P>An SAP Shell system is a streamlined version of an SAP system that contains only the essential metadata, customizations, and repository data, excluding most application data and unnecessary clients. This approach is often used in scenarios where a full system copy is not required, such as for testing, development, or selective data transitions.</P><P><STRONG>Steps for Creating the Shell System</STRONG></P><UL><LI>The Shell Creation process creates a new empty system based on a source system.</LI><LI>All client-independent objects (repository and client-independent customizing) will be identical to the source system. 
The new system will have the same support package level and add-ons.</LI><LI>All transaction and master data are stripped from the shell system as part of the SDT migration.</LI><LI>Only client 000 will exist in the new system.</LI><LI>The typical source system is the production system or a copy of it.</LI><LI>A shell system can be created using a remote client copy.</LI></UL><P>&nbsp;</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nitin19_3-1744102839119.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247986i15C754A60FB08568/image-size/medium?v=v2&amp;px=400" role="button" title="Nitin19_3-1744102839119.png" alt="Nitin19_3-1744102839119.png" /></span></P><P>&nbsp;</P><P>2.&nbsp;Shell Conversion</P><P>The SAP system will be converted to SAP S/4HANA using a standard system conversion with the Software Update Manager (SUM) or SUM with Database Migration Option (DMO).</P><P>3.&nbsp;S/4HANA Golden Backup</P><P>As there is no application data (master data and transactional data) within this system, you can apply any necessary changes to the configuration of your new empty SAP S/4HANA system. For example:</P><UL><LI>You can adapt your financial model or controlling settings to meet future requirements.</LI><LI>Adopt the Fiori post-activation steps.</LI><LI>Implement add-ons as part of the post-conversion process.</LI><LI>Complete all post-conversion steps.</LI><LI>Apply all necessary post-functional and technical steps after the conversion process is completed over the shell system.</LI></UL><P><STRONG>Note</STRONG>: Since we are saving this system as a golden copy, please ensure that all common post-conversion steps are completed before the golden backup. This will help reduce effort in future system implementations.</P><P>4. Target System Build</P><P>The preserved golden backup taken in step 3 will be used to create the new target system for SAP S/4HANA.</P><P>5. Data Migration</P><P>The next step will be data migration (step 5). For all areas where you didn't apply fundamental changes, it will be possible to migrate historical and transactional data and modify them on the fly to fit the new SAP S/4HANA structures. Data for these areas will be migrated at the table level, and the necessary on-the-fly changes for SAP S/4HANA will be executed using special afterburner programs. These programs are like those used during a standard system conversion to SAP S/4HANA.</P><P>SAP has partners that provide features for slicing data based on various factors and tools for migrating data with data slices. Data slicing is entirely dependent on business needs and the data migration approach, utilizing the available tools for migration and slicing.</P><UL><LI>SDT <U>(some data slice criteria):</U> time-slice scenario, selective company code transfer, slices on FI and other related selections, etc.</LI><LI>SDT tools for <U>migration/partner companies</U>: SAP DMLT, SNP, CBS, Natuvion, etc.</LI></UL><P>&nbsp;</P><UL><LI>Mix and Match</LI></UL><P>The basic idea of the mix and match scenario is to start with a greenfield approach while retaining your old solution for specific areas where your current SAP ERP solution meets your future demands.</P><P>To retain parts of your current solution, you will first identify and stabilize these areas. 
(Remember: even in Agile projects, one of the biggest reasons for budget overruns or project failures is unclear or continuously expanding scope.)</P><P>After identifying the solution areas you want to bring over to your new SAP S/4HANA system, you need to decide how to perform this migration.</P><P><STRONG>There are two possibilities:</STRONG></P><OL><LI>You manually add the configuration to the new SAP S/4HANA solution.</LI><LI>You migrate/transport the solution using tooling.</LI></OL><P><STRONG>Option 1: Manual Configuration</STRONG></P><UL><LI>You will need experienced consultants to manually configure the solution in the target system.</LI></UL><P><STRONG>Option 2: Migration/Transport Using Tooling</STRONG></P><UL><LI><STRONG>Process a)</STRONG>: If there are no or minor changes between SAP ERP and SAP S/4HANA, you can transport the customizing directly from SAP ERP to SAP S/4HANA and manually rework the customizing in the target system.</LI><LI><STRONG>Process b)</STRONG>: If there are significant changes between SAP ERP and SAP S/4HANA, you can perform a Shell Creation of your SAP ERP system and convert this to SAP S/4HANA. This provides a basis from which you can transport the solution to your new SAP S/4HANA system. In this case, you will also manually rework and fit the transported elements within your SAP S/4HANA system.</LI></UL><P>&nbsp;</P><P><STRONG>Use case for the Mix and Match approach, Option 2 (Migration/Transport Using Tooling), Process b</STRONG></P><P>&nbsp;</P><P>Landscape conversion approach (Mix and Match): ECC to S/4HANA conversion with SDT</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Nitin19_1-1744103641287.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/247996iF6471AA5545D0B26/image-size/medium?v=v2&amp;px=400" role="button" title="Nitin19_1-1744103641287.png" alt="Nitin19_1-1744103641287.png" /></span></P><P>&nbsp;</P><P><U>Use case highlights</U></P><UL><LI>A representative example of the Mix and Match approach.</LI><LI>Selective data migration conversion approach.</LI><LI>We created the shell system and converted it to the S/4HANA system.</LI><LI>Performed the post-conversion steps and then took the golden backup to prepare the subsequent systems.</LI><LI>For the selective data migration, we used the DMLT tool from SAP.</LI><LI>As per the Mix and Match approach, current ECC changes and processes were aligned, with manual retrofit changes moved to the S/4HANA system.</LI><LI>According to the project plan, along with testing phases like SIT, UAT, and Dress Rehearsal, we converted the ECC to the S/4HANA system using the Selective Data Migration approach.</LI></UL><P><U>DMLT tool information:</U></P><P>The SAP Data Management and Landscape Transformation (DMLT) tool is designed to help organizations manage and transform their
data landscapes efficiently. Here are some key aspects of the SAP DMLT tool:</P><OL><LI><STRONG>Data Migration:</STRONG> DMLT facilitates the smooth transition to SAP S/4HANA by offering selective data transition, new implementation, and system conversion options.</LI><LI><STRONG>System Landscape Optimization:</STRONG> The tool supports mergers, acquisitions, divestitures, and complex organizational restructurings by transforming SAP data and system landscapes.</LI><LI><STRONG>Data Unification and Harmonization:</STRONG> DMLT helps in analyzing and cleaning incorrect data from customers, partners, and products. It unifies and standardizes financial and controlling processes, improving data quality and governance.</LI><LI><STRONG>System Consolidation:</STRONG> By reviewing and consolidating system landscapes, DMLT reduces the cost of operating distributed systems and facilitates better data management and reporting.</LI><LI><STRONG>Holistic Approach:</STRONG> DMLT offers a comprehensive approach to data-related needs, including data quality, data integration, and information lifecycle management.</LI></OL><P>This ensures a successful migration using proven technologies and methodologies. Overall, SAP DMLT is a powerful tool that helps organizations optimize their data management processes and transition smoothly to advanced IT landscapes.</P><P><U>Control system role in the DMLT tool:</U></P><P>A control system is a system that is already on the S/4HANA target version and is used for the controlled logic program, which provides the selective data migration decisions from the source to the target system.</P><P><U>Technical challenges with the DMLT tool during the conversion process:</U></P><UL><LI>During the shell creation, SAP generally does not include third-party components and Y and Z custom tables. Therefore, we must provide the list of third-party add-ons to SAP to include in the shell creation.</LI><LI>Technical tables are loaded only once during the shell creation. After the shell creation, deltas should be managed by other tools or manually.<UL><LI>Examples of technical data: RFC, LSMW, WE21, WE20, printers, certificates, jobs, job variants, user master data. All delta data should be managed outside the DMLT process.</LI></UL></LI><LI>All Y and Z tables and t-codes must be checked in the shell system along with all entries.</LI><LI>We must align with SAP on how the Y and Z table data will be managed during the DMLT process (e.g., including the tables in the DMLT process or handling them through a retrofit process).</LI><LI>SAP must provide the DMLT tool transport, which we need to align on; during the tool setup, we may encounter some errors in the SAP system.</LI><LI>In this approach, you might have to redefine all interface connections, along with AL11 directories and scripts, if any.</LI></UL><P><STRONG>Conclusion</STRONG></P><P>This blog provides an understanding of the SAP S/4HANA conversion process methods available in the SAP ecosystem. It primarily focuses on the Selective Data Migration approach, which is part of the Mix and Match and Shell Conversion approaches.</P> 2025-04-10T16:52:28.809000+02:00 https://community.sap.com/t5/technology-blog-posts-by-members/flip-virtual-tables-in-calculation-views-from-ecc-to-s-4hana/ba-p/14070361 Flip Virtual Tables in Calculation Views from ECC to S/4HANA 2025-04-10T16:54:07.764000+02:00 NitinK https://community.sap.com/t5/user/viewprofilepage/user-id/124663 <P>As SAP prepares to end support for ECC by 2027, many companies are making the crucial transition to S/4HANA.
This shift promises enhanced capabilities and performance, but it also brings its own set of challenges. One of the most significant hurdles involves the calculation views that rely on virtual tables from the ECC system. After migration, these views need to be redirected to the new S/4 system.</P><P>For many organizations, BW hybrid modeling is a common practice. This approach uses virtual tables from the ECC source system within their calculation views. With numerous calculation views depending on these ECC virtual tables, the need to point them to the new S/4 system without disrupting functionality becomes paramount.</P><P>SAP Note <A href="https://userapps.support.sap.com/sap/support/knowledge/en/3103359" target="_self" rel="noopener noreferrer">3103359</A> clearly states that there is no straightforward way to change the adapter and remote source database without dropping and recreating the remote source. This raises a critical question: <STRONG>Do we need to adjust all the calculation views?</STRONG></P><P>Adjusting all calculation views is a daunting task. It requires extensive manual effort and rigorous testing, which inevitably adds to the project's cost. The prospect of manually updating each view can be overwhelming for many organizations.</P><P>However, there is a workaround that can simplify this process. By deleting the virtual table and recreating it within the same schema, organizations can effectively redirect their calculation views to the new S/4 system.&nbsp;</P><P><STRONG>Example: Redirecting Virtual Tables in a HANA System</STRONG></P><P>Let's take a closer look at a practical example. Below, we see the HANA system with the "ECC" schema. There are two virtual tables, "V_EKKN" and "V_EKKO", which refer to the ECC tables "EKKN" and "EKKO".</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="NitinK_1-1744142180591.png" style="width: 182px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/248304i5BEB2E39BF4D1732/image-dimensions/182x319?v=v2" width="182" height="319" role="button" title="NitinK_1-1744142180591.png" alt="NitinK_1-1744142180591.png" /></span></P><P>Now, we want to point these tables to the new S/4 system.
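</P><P>Before running any DDL, it can help to confirm what the virtual tables currently point to. Below is a minimal check; the <EM>VIRTUAL_TABLES</EM> system view and the remote-target columns shown are assumed to be available in your SAP HANA version, so treat this as a sketch rather than a prescribed step.</P><pre class="lia-code-sample language-sql"><code>-- Inspect the virtual tables in the ECC schema and their current remote targets
-- (assumes the standard SYS.VIRTUAL_TABLES system view and its remote-target columns).
SELECT SCHEMA_NAME,
       TABLE_NAME,
       REMOTE_SOURCE_NAME,
       REMOTE_OBJECT_NAME
  FROM SYS.VIRTUAL_TABLES
 WHERE SCHEMA_NAME = 'ECC';</code></pre><P>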
We can achieve this by following these steps:</P><UL><LI>Delete the virtual tables.</LI><LI>Recreate the virtual tables.</LI></UL><P>These steps can be performed by running the following script:</P><pre class="lia-code-sample language-sql"><code>-- generic pattern
DROP TABLE "&lt;schema_name&gt;"."&lt;virtual_table_name&gt;" CASCADE;
CREATE VIRTUAL TABLE "&lt;schema_name&gt;"."&lt;virtual_table_name&gt;" AT "&lt;remote_source&gt;"."&lt;database_name&gt;"."&lt;remote_schema_name&gt;"."&lt;remote_table_name&gt;";

-- repoint V_EKKN to table EKKN in the S/4 system
DROP TABLE "ECC"."V_EKKN" CASCADE;
CREATE VIRTUAL TABLE "ECC"."V_EKKN" AT "S4X"."S4DB"."SAPPROD"."EKKN";

-- repoint V_EKKO to table EKKO in the S/4 system
DROP TABLE "ECC"."V_EKKO" CASCADE;
CREATE VIRTUAL TABLE "ECC"."V_EKKO" AT "S4X"."S4DB"."SAPPROD"."EKKO";</code></pre><P>By executing this script, we can see that the virtual tables are now pointing to the new S/4 system "S4X".</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="NitinK_2-1744143075469.png" style="width: 188px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/248326i0D4B64E7473F4C13/image-dimensions/188x315?v=v2" width="188" height="315" role="button" title="NitinK_2-1744143075469.png" alt="NitinK_2-1744143075469.png" /></span></P><P>By deleting the virtual table and recreating it within the same schema, organizations can seamlessly redirect their calculation views to the new S/4 system without needing to adjust the calculation view itself. This method, while still requiring careful execution, can significantly reduce the complexity and effort involved.</P><P data-unlink="true">Reference links:<BR />SAP Help portal for <A href="https://help.sap.com/docs/HANA_SERVICE_CF/6a504812672d48ba865f4f4b268a881e/2b615f01b9b74b399cad831dc0304abd.html" target="_self" rel="noopener noreferrer">Managing Virtual Tables</A><BR />SAP Note&nbsp;<A href="https://me.sap.com/notes/2542963/E" target="_self" rel="noopener noreferrer">2542963</A> - Is it possible to rename a Remote Source in SAP HANA?</P> 2025-04-10T16:54:07.764000+02:00 https://community.sap.com/t5/application-development-and-automation-blog-posts/sap-developer-news-april-10th-2025/ba-p/14072771 SAP Developer News April 10th, 2025 2025-04-10T21:10:00.041000+02:00 Eberenwaobiora https://community.sap.com/t5/user/viewprofilepage/user-id/1937986 <P><div class="video-embed-center video-embed"><iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FkUeEVoJMsqQ%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DkUeEVoJMsqQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FkUeEVoJMsqQ%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="400" height="225" scrolling="no" title="Future of Data &amp; AI, Apeiro &amp; NeoNephos, CAP March Rel, Free Student Learning | SAP Developer News" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe></div></P><P>&nbsp;</P><H3 id="toc-hId-1837340040"><SPAN>DESCRIPTION</SPAN><SPAN>&nbsp;</SPAN></H3><P><SPAN>Podcast: <A href="https://podcast.opensap.info/sap-developers/2025/04/10/sap-developer-news-april-10th2025/" target="_blank" rel="noopener nofollow noreferrer">https://podcast.opensap.info/sap-developers/2025/04/10/sap-developer-news-april-10th2025/</A></SPAN><SPAN>&nbsp;</SPAN></P><P><STRONG><SPAN>Shaping the Future of Data and AI with SAP HANA Cloud:&nbsp;</SPAN></STRONG><SPAN>&nbsp;</SPAN></P><UL><LI><SPAN>Shaping the Future with Data and AI: SAP HANA Cloud Knowledge Graph Engine and Generative AI
Toolkit:&nbsp;</SPAN><A href="https://community.sap.com/t5/blogs/blogarticleprintpage/blog-id/technology-blog-sap/article-id/180448" target="_blank"><SPAN>https://community.sap.com/t5/blogs/blogarticleprintpage/blog-id/technology-blog-sap/article-id/180448</SPAN></A><SPAN>&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><UL><LI><SPAN>Generative AI Toolkit for SAP HANA Cloud:&nbsp;</SPAN><A href="https://github.com/SAP/generative-ai-toolkit-for-sap-hana-cloud/blob/main/README.md" target="_blank" rel="noopener nofollow noreferrer"><SPAN>https://github.com/SAP/generative-ai-toolkit-for-sap-hana-cloud/blob/main/README.md</SPAN></A><SPAN>&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><UL><LI><SPAN>LangChain integration for SAP HANA Cloud:&nbsp;</SPAN><A href="https://pypi.org/project/langchain-hana/" target="_blank" rel="noopener nofollow noreferrer"><SPAN>https://pypi.org/project/langchain-hana/</SPAN></A><SPAN>&nbsp;&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><UL><LI><SPAN>Invitation to join HANA Tech Con:&nbsp;</SPAN><A href="https://www.linkedin.com/feed/update/urn:li:activity:7316037624064819201/" target="_blank" rel="noopener nofollow noreferrer"><SPAN>https://www.linkedin.com/feed/update/urn:li:activity:7316037624064819201/</SPAN></A><SPAN>&nbsp;&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><P>&nbsp;</P><P><STRONG><SPAN>SAP Open Source Webinar Apeiro Reference Architecture&nbsp;</SPAN></STRONG><SPAN>&nbsp;</SPAN></P><UL><LI><SPAN>Registration page:&nbsp;</SPAN><A href="https://events.sap.com/apeiro-reference-architecture/en/registration.aspx" target="_blank" rel="noopener noreferrer"><SPAN>https://events.sap.com/apeiro-reference-architecture/en/registration.aspx</SPAN></A><SPAN>&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><UL><LI><SPAN>LinkedIn post:&nbsp;</SPAN><A href="https://www.linkedin.com/posts/michael-ameling_neonephos-gardener-gardenlinux-activity-7312819875003985923-D7wZ/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACgkg4UBF6JXngL8eiZlhbT4UQWinLeQkAo" target="_blank" rel="noopener nofollow noreferrer"><SPAN>https://www.linkedin.com/posts/michael-ameling_neonephos-gardener-gardenlinux-activity-7312819875003985923-D7wZ/?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACgkg4UBF6JXngL8eiZlhbT4UQWinLeQkAo</SPAN></A><SPAN>&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><UL><LI><SPAN>ApeiroRA:&nbsp;</SPAN><A href="https://apeirora.eu/" target="_blank" rel="noopener nofollow noreferrer"><SPAN>https://apeirora.eu/</SPAN></A><SPAN>&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><UL><LI><SPAN>NeoNephos Foundation:&nbsp;</SPAN><A href="https://neonephos.org/" target="_blank" rel="noopener nofollow noreferrer"><SPAN>https://neonephos.org/</SPAN></A><SPAN>&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><P>&nbsp;</P><P><STRONG><SPAN>CAP March 8.9 Release&nbsp;</SPAN></STRONG><SPAN>&nbsp;</SPAN></P><UL><LI><SPAN>Release Notes for March 8.9:&nbsp;</SPAN><A href="https://cap.cloud.sap/docs/releases/mar25" target="_blank" rel="noopener nofollow noreferrer"><SPAN>https://cap.cloud.sap/docs/releases/mar25</SPAN></A><SPAN>&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><UL><LI><SPAN>Major Release migration Guide:&nbsp;</SPAN><A href="https://cap.cloud.sap/docs/releases/mar25#prepare-for-major-release" target="_blank" rel="noopener nofollow noreferrer"><SPAN>https://cap.cloud.sap/docs/releases/mar25#prepare-for-major-release</SPAN></A><SPAN>&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><P><STRONG><SPAN>Free SAP Certification and practice systems for students and 
lecturers:</SPAN></STRONG><SPAN>&nbsp;</SPAN></P><UL><LI><SPAN>Blog post for student learning hub:&nbsp;</SPAN><A href="https://community.sap.com/t5/beginner-corner-blog-posts/free-sap-certification-and-practice-systems-for-students-amp-lecturers/ba-p/14052493" target="_blank"><SPAN>https://community.sap.com/t5/beginner-corner-blog-posts/free-sap-certification-and-practice-systems-for-students-amp-lecturers/ba-p/14052493</SPAN></A></LI><LI><SPAN>What is SAP:&nbsp;</SPAN><A href="https://community.sap.com/t5/beginner-corner-blog-posts/how-and-where-do-you-start-a-consulting-career-in-sap/ba-p/221488" target="_blank"><SPAN>https://community.sap.com/t5/beginner-corner-blog-posts/how-and-where-do-you-start-a-consulting-career-in-sap/ba-p/221488</SPAN></A></LI></UL><P>&nbsp;</P><P><STRONG><SPAN>Welcome to SAP BusinessObjects BI 2025&nbsp;</SPAN></STRONG><SPAN>&nbsp;</SPAN></P><UL><LI><SPAN>Welcome to SAP BusinessObjects BI 2025:&nbsp;</SPAN><A href="https://community.sap.com/t5/technology-blogs-by-sap/welcome-to-sap-businessobjects-bi-2025/ba-p/14040695" target="_blank"><SPAN>https://community.sap.com/t5/technology-blogs-by-sap/welcome-to-sap-businessobjects-bi-2025/ba-p/14040695</SPAN></A><SPAN>&nbsp;</SPAN><SPAN>&nbsp;</SPAN></LI></UL><P><SPAN>&nbsp;</SPAN></P><H3 id="toc-hId-1640826535"><STRONG><SPAN>CHAPTER TITLES&nbsp;&nbsp;</SPAN></STRONG><SPAN>&nbsp;</SPAN></H3><P><SPAN>0:00 Intro&nbsp;</SPAN><SPAN>&nbsp;</SPAN></P><P><SPAN>00:10 Shaping the Future of Data and AI with SAP HANA Cloud&nbsp;</SPAN><SPAN>&nbsp;</SPAN></P><P><SPAN>1:55 SAP Open Source Webinar Apeiro Reference Architecture&nbsp;</SPAN><SPAN>&nbsp;</SPAN></P><P><SPAN>3:13 CAP March 8.9 Release&nbsp;</SPAN><SPAN>&nbsp;</SPAN></P><P><SPAN>4:55&nbsp;Free SAP Certification and practice systems for students and lecturers</SPAN><SPAN>&nbsp;</SPAN></P><P><SPAN>7:50</SPAN><STRONG><SPAN>&nbsp;</SPAN></STRONG><SPAN>Welcome to SAP BusinessObjects BI 2025</SPAN><SPAN>&nbsp;</SPAN></P><H3 id="toc-hId-1444313030">&nbsp;</H3><H3 id="toc-hId-1247799525"><SPAN>TRANSCRIPTION</SPAN></H3><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Intro]</STRONG> This is the SAP Developer News for April 10th, 2025.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Witalij]</STRONG> We are delighted to inform you that two cool technologies, the Knowledge Graph Engine and the Generative AI Toolkit, have been launched in SAP HANA Cloud. The Knowledge Graph Engine enables you to model and analyze complex relationships within data stored in SAP HANA Cloud. Native RDF triple store support enables semantic data storage and querying using the Resource Description Framework. It also enhances LLM responses by grounding them in business-specific knowledge stored within the knowledge graph, providing more accurate and contextually relevant outputs. A lot of smart words, I know, so you better try it yourself. I plan to make the new SAP CodeJam exercises available shortly. Let me know in the comments if you are interested. The Generative AI Toolkit simplifies the development of intelligent data applications by providing a natural language interface for creating in-database machine learning models, reducing the need for extensive coding on your side.
Also, following recent updates in the LangChain review process, our engineering team has published a new LangChain integration package for SAP HANA Cloud. You can now easily leverage features like in-database embeddings, full-text search, and vector indexes directly within LangChain. Interested in discussing this and other aspects of SAP HANA? The very first HANA TechCon will take place at SAP headquarters in Germany on July 10th. Please check the published agenda and register if you are interested. I'm looking forward to meeting you there.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Ajay]</STRONG> Hello everyone, I have got some interesting news for the open source enthusiasts. There is an upcoming SAP Open Source webinar on the topic Apeiro Reference Architecture, Strengthening Digital Sovereignty for Europe. It is on May 8th at 3 p.m. CEST. The Apeiro Reference Architecture is part of an initiative that aims to build a high-performance cloud edge infrastructure with open source standards. Peter Giese, head of the SAP Open Source Program Office, will give you an overview on why and how this project has been transferred into the new NeoNephos Foundation, which will serve as a neutral entity dedicated to fostering open collaboration and governance. There is also a LinkedIn post by Michael Ameling explaining the NeoNephos Foundation, where several flagship open source projects such as Gardener, Garden Linux, and more are transferred to NeoNephos through this initiative. Vasu Chandrasekhara, Chief Product Owner of Apeiro, will explain the landscape of the Apeiro reference architecture with an overview of the architecture and the components. Please find the registration links to the webinar in the description below.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Kevin]</STRONG> The March 2025 release of the SAP Cloud Application Programming Model is out. The new version 8.9 brings important changes to you. But before we dive in, I just wanted to remind you that there is a new CDS parser version coming out with the next major version 9.0. So make sure to comply with all the new changes and that your projects keep on running. You can use the new parser now by setting the new parser flag to true to try it out in your project today. So you might as well go ahead and do it. A new change with 8.9 brings you actions to generated child entities when defining a composition of aspects. This is really nice, bringing you action capabilities when reflecting document structures in your domain models. Recursive hierarchies and Fiori Tree Table support is now available in beta, allowing you to serve read requests for the SAP Fiori Tree Table control. Version 8.9 is introducing a new CDS command, which is cds up. cds up will build and deploy your CAP applications to Cloud Foundry. And if you need to deploy to Kyma, simply add the argument dash dash to Kubernetes at the end of the command. And the last thing I want to mention is that you can now use the cds deploy to HANA command for deploying to Kubernetes as well. So just add the same dash dash to Kubernetes argument at the end of the command for the magic to begin. And there are more changes in the current release, which we aren't covering here. So make sure to read the official release notes on the CAP documentation page. And as always, the link is in the description below.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Ebere]</STRONG> Hi, everyone.
I've got some exciting news to share, especially if you're a student just like me, working with SAP, or just curious to know what SAP is all about: the Learning Hub Student Edition is live. And the best part of it? It is free for all actively enrolled students. Here's what you get: full access to content available to customers, two certification attempts, access to live expert sessions, hands-on cloud-based training systems, plus content to help you get certified over time. If you're wondering what SAP is, don't worry, I've got it covered; there is a blog post link below. Definitely give that a read if you are new to the SAP world. And how do you get started? Let me break it down real quick. Create your SAP Universal ID. This is like your learning passport, so make sure you use your university email address to do so. Verify that you are a student using the email address tied to your enrollment. While waiting for the activation link to arrive in your inbox, you can already start checking out the Student Zone, exploring courses, and even start learning. You can then activate your subscription via the activation link in your inbox; if you don't see it, check the spam folder just in case. From there, you can pick up learning from where you left off or choose new topics from your Student Zone account. You can also start practicing on the cloud-based training systems; this is really important, because this is where you turn your theory into real skills. And you can join live sessions, ask the experts questions, and learn what is happening in the field. When you think you've learned enough, you can book the Learning Hub certification exams. Remember, you've got two free attempts. Before I go, just a few quick FAQs. An old student email address won't get you access, so make sure you are actively enrolled. The subscription lasts for 12 months, but if you're still enrolled, you can renew it. And remember, the certification exam allows only two attempts. Still unsure where to start? I've got it covered: you can check out Discovering SAP for students or positions in the SAP Business Suite. And remember, if you ever get stuck, the SAP Community is always there for you. For more information, please check out the blog post. Happy learning and good luck on your SAP journey. Ciao.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Witalij]</STRONG> Because of all the troubles, I almost missed informing you that SAP BusinessObjects BI 2025 is now available. But thanks, Josh, for reminding me. SAP BusinessObjects BI 2025 brings a fresh set of enhancements that make your business analytics more efficient, user-friendly, and future-proof. BI 2025 offers powerful enhancements designed to help you and your business users make faster and more informed decisions. This release isn't just about the new features; it's about ensuring that your BI platform remains secure, flexible, and ready for the future. Please check the Welcome to SAP BusinessObjects BI 2025 blog post and share your thoughts in the comments there.
And by the way, Greg, Eric, yes, I still have it.</P> 2025-04-10T21:10:00.041000+02:00 https://community.sap.com/t5/technology-blog-posts-by-sap/understanding-amp-tips-and-tricks-for-cdc-change-data-capture/ba-p/14076051 Understanding & Tips and Tricks for CDC (Change Data Capture) 2025-04-15T07:08:58.081000+02:00 Yogananda https://community.sap.com/t5/user/viewprofilepage/user-id/75 <P>&nbsp;</P><H2 id="toc-hId-1708369696">What is Change Data Capture (CDC)?</H2><DIV class=""><P class="">Change Data Capture (CDC) is a design pattern that tracks changes (inserts, updates, deletes) in a database and makes those changes available to downstream systems in real-time or near-real-time.</P><H3 id="toc-hId-1640938910">How CDC Works</H3><OL class=""><LI>Detects row-level changes in source databases</LI><LI>Processes these changes into structured events</LI><LI>Delivers them to data warehouses, streams, or services</LI></OL><P><EM>The best way to think of CDC (Change Data Capture): your data flowing from source to target</EM><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="0_e-k2XpNzDoOtVDrk.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/250716i9D25CCE55C382300/image-size/large?v=v2&amp;px=999" role="button" title="0_e-k2XpNzDoOtVDrk.png" alt="0_e-k2XpNzDoOtVDrk.png" /></span></P></DIV><H3 id="toc-hId-1444425405"><FONT face="courier new,courier" color="#3366FF">Live playground to understand Change Data Capture (CDC)</FONT></H3><H1 id="toc-hId-989746462"><FONT face="courier new,courier"><A href="https://www.change-data-capture.com/" target="_self" rel="nofollow noopener noreferrer"><SPAN>https://www.change-data-capture.com</SPAN></A></FONT></H1><H3 id="toc-hId-1051398395">Tips and Tricks for CDC (Change Data Capture)</H3><OL><LI><P><STRONG>Use Time Stamps for Efficient Data Extraction:</STRONG></P><UL><LI>Ensure your source tables have update and create time stamps. This allows you to efficiently track changes and extract only the modified rows.</LI></UL></LI><LI><P><STRONG>Leverage Change Logs:</STRONG></P><UL><LI>Utilize the change logs maintained by your RDBMS to capture detailed audit trails of data modifications. This can help in identifying changes more accurately.</LI></UL></LI><LI><P><STRONG>Optimize Performance with Incremental Extraction:</STRONG></P><UL><LI>Implement source-based CDC to improve performance by extracting only the changed rows, rather than the entire dataset.</LI></UL></LI><LI><P><STRONG>Set Up Proper Indexing:</STRONG></P><UL><LI>Ensure that your tables are properly indexed on the columns used for CDC.
This can significantly speed up the data extraction process.</LI></UL></LI><LI><P><STRONG>Use SAP Operational Data Provisioning (ODP):</STRONG></P><UL><LI>For integration with Azure Data Factory, use the SAP ODP framework to replicate delta changes efficiently.</LI></UL></LI><LI><P><STRONG>Monitor and Tune CDC Processes:</STRONG></P><UL><LI>Regularly monitor the performance of your CDC processes and tune them as necessary to ensure optimal performance.</LI></UL></LI></OL><H3 id="toc-hId-854884890">Benefits of SAP HANA CDC</H3><OL><LI><P><STRONG>Improved Performance:</STRONG></P><UL><LI>By capturing only the changes, CDC reduces the amount of data that needs to be processed, leading to faster data integration and reduced load on the source systems.</LI></UL></LI><LI><P><STRONG>Real-Time Data Integration:</STRONG></P><UL><LI>CDC enables near real-time data integration, ensuring that your data warehouse or analytics systems are always up-to-date with the latest changes.</LI></UL></LI><LI><P><STRONG>Reduced Data Latency:</STRONG></P><UL><LI>With CDC, data latency is minimized as changes are captured and propagated almost immediately, which is crucial for real-time analytics and reporting.</LI></UL></LI><LI><P><STRONG>Cost Efficiency:</STRONG></P><UL><LI>By processing only the changed data, CDC reduces the computational and storage costs associated with full data loads.</LI></UL></LI><LI><P><STRONG>Enhanced Data Accuracy:</STRONG></P><UL><LI>CDC ensures that only the most recent and relevant data is captured, improving the accuracy and reliability of your data.</LI></UL></LI></OL><P>Implementing these tips and leveraging the benefits of CDC can significantly enhance your data management and integration processes in your database like SAP HANA. If you have any specific questions or need further details, feel free to ask!</P><P><A href="https://learn.microsoft.com/en-us/azure/data-factory/sap-change-data-capture-introduction-architecture" target="_blank" rel="noopener nofollow noreferrer">Overview and architecture of the SAP CDC capabilities - Azure Data Factory | Microsoft Learn</A></P><P><A href="https://learning.sap.com/learning-journeys/exploring-sap-data-services/using-source-based-changed-data-capture-cdc-_d706aac1-b4d7-450c-9c6c-424260072486" target="_blank" rel="noopener noreferrer">Learning Journey - Using Source-Based Changed Data Capture (CDC)</A></P><P><A href="https://learning.sap.com/learning-journeys/exploring-sap-data-services/using-target-based-changed-data-capture-cdc-_edf13fe5-0a9c-4c52-aeee-c99daddd30be" target="_blank" rel="noopener noreferrer">Learning Journey - Using Target-Based Changed Data Capture (CDC)</A></P> 2025-04-15T07:08:58.081000+02:00 https://community.sap.com/t5/technology-blog-posts-by-sap/exploring-new-sql-on-files-use-cases-with-sap-hana-cloud-qrc-04-2024-and/ba-p/14076223 Exploring New SQL on Files Use Cases with SAP HANA Cloud QRC 04/2024 and QRC 01/2025 2025-04-15T10:42:07.034000+02:00 SeungjoonLee https://community.sap.com/t5/user/viewprofilepage/user-id/204092 <TABLE border="1" width="100%"><TBODY><TR><TD><STRONG>Related Blogs:<BR /></STRONG><UL><LI><A href="https://community.sap.com/t5/technology-blogs-by-sap/unlocking-the-true-potential-of-data-in-files-with-sap-hana-database-sql-on/ba-p/13861585" target="_self">Unlocking the True Potential of Data in Files with SAP HANA Database SQL on Files in SAP HANA Cloud</A></LI></UL></TD></TR></TBODY></TABLE><P>As I mentioned in my <A 
href="https://community.sap.com/t5/technology-blogs-by-sap/unlocking-the-true-potential-of-data-in-files-with-sap-hana-database-sql-on/ba-p/13861585" target="_self">blog</A> last year, the SQL on Files capability in SAP HANA Cloud has unlocked new potential by providing direct read-only SQL access to files stored in <A href="https://help.sap.com/docs/hana-cloud-data-lake/user-guide-for-data-lake-files/sap-hana-cloud-data-lake-administration-for-data-lake-files" target="_self" rel="noopener noreferrer">SAP HANA Cloud, data lake Files</A> since SAP HANA Cloud QRC 03/2024.</P><P>Starting with SAP HANA Cloud QRC 04/2024, this capability has been further extended to support direct access to the most common external object storages listed below, in addition to <A href="https://help.sap.com/docs/hana-cloud-data-lake/user-guide-for-data-lake-files/sap-hana-cloud-data-lake-administration-for-data-lake-files" target="_self" rel="noopener noreferrer">SAP HANA Cloud, data lake Files</A>.</P><UL><LI>Amazon S3</LI><LI>Azure Blob Storage or Azure Data Lake Storage (ADLS) Gen2</LI><LI>Google Cloud Storage (GCS)</LI></UL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Overview.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/250804i61951C92065E9861/image-size/large?v=v2&amp;px=999" role="button" title="Overview.png" alt="Overview.png" /></span></P><P>Although the method for creating a remote source varies by cloud service provider, the process for creating virtual tables, the supported file formats, and the read-only access remain the same as when using SQL on Files with <A href="https://help.sap.com/docs/hana-cloud-data-lake/user-guide-for-data-lake-files/sap-hana-cloud-data-lake-administration-for-data-lake-files" target="_self" rel="noopener noreferrer">SAP HANA Cloud, data lake Files</A>. 
Since these aspects are already explained in my previous <A href="https://community.sap.com/t5/technology-blogs-by-sap/unlocking-the-true-potential-of-data-in-files-with-sap-hana-database-sql-on/ba-p/13861585" target="_self">blog</A>, let me focus on the differences in this one.</P><P>Additionally, as you may have already noticed from the above diagram, read-only access to external <A href="https://delta.io/sharing/" target="_self" rel="nofollow noopener noreferrer">Delta Sharing</A> is also supported from SAP HANA Cloud QRC 04/2024 with a dedicated Delta Sharing adapter.</P><P>And when it comes to CSV formats, two new options are now included to provide better flexibility and reduce the effort required when reading files in CSV format, starting with SAP HANA Cloud QRC 04/2024.</P><P>Another piece of good news is that the existing limitation on Delta tables, which currently supports only reader version 1 for both direct file access and Delta sharing, has been lifted starting with SAP HANA Cloud QRC 01/2025.</P><P>Alright, with these points in mind, let's dive deeper into the details with some examples.</P><P>&nbsp;</P><H3 id="toc-hId-1837454246">Accessing external object storages</H3><P>As highlighted, starting from SAP HANA Cloud QRC 04/2024, the SAP HANA Database SQL on Files feature has begun to support direct read-only access to files (CSV, Parquet, and Delta tables) stored in external object storage.</P><P>The supported external object storage options are, as mentioned above, Amazon S3, Azure Blob Storage or Azure Data Lake Storage (ADLS) Gen2, and Google Cloud Storage (GCS).</P><P>For Amazon S3 and GCS, both path-style and virtual-host-style endpoints are supported (GCS also supports path-style regional endpoints). For Azure, both standard and DNS endpoints are supported.</P><P>The methods for creating a remote source differ according to the specific standards and integrations established by each cloud service provider. 
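</P><P>Whichever provider you use, it can be useful to check what is already configured before adding anything new; a small sanity check, assuming the standard <EM>REMOTE_SOURCES</EM> system view is available:</P><pre class="lia-code-sample language-sql"><code>-- List configured remote sources and their adapters
-- (assumes the standard SYS.REMOTE_SOURCES system view).
SELECT REMOTE_SOURCE_NAME, ADAPTER_NAME
  FROM SYS.REMOTE_SOURCES
 ORDER BY REMOTE_SOURCE_NAME;</code></pre><P>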
The example below shows how to create a remote source to Amazon S3.</P><pre class="lia-code-sample language-sql"><code>-- create a remote source to Amazon S3
CREATE REMOTE SOURCE &lt;remote_source_name&gt; ADAPTER "file"
  CONFIGURATION 'provider=s3;endpoint=&lt;s3_endpoint&gt;;'
  WITH CREDENTIAL TYPE 'PASSWORD'
  USING 'user=&lt;access_key&gt;;password=&lt;secret_key&gt;;';</code></pre><P>As described, the <EM>file</EM> adapter is still in use but now with the parameter <EM>provider=s3</EM>.</P><P>A remote source can also be created via the SAP HANA Database Explorer UI, and the example below shows how to create a remote source to Google Cloud Storage via the UI.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="GCS Remote Source.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/250828i9FE68C69326F10BF/image-size/large?v=v2&amp;px=999" role="button" title="GCS Remote Source.png" alt="GCS Remote Source.png" /></span></P><P>Please refer to the links below for further details on creating a remote source for each external object storage.</P><UL><LI>SAP HANA Cloud, SAP HANA Database SQL on Files Guide: <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-on-files-guide/create-amazon-s3-remote-source" target="_self" rel="noopener noreferrer">Create an Amazon S3 Remote Source</A></LI><LI>SAP HANA Cloud, SAP HANA Database SQL on Files Guide: <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-on-files-guide/create-microsoft-azure-storage-remote-source" target="_self" rel="noopener noreferrer">Create a Microsoft Azure Storage Remote Source</A></LI><LI>SAP HANA Cloud, SAP HANA Database SQL on Files Guide: <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-on-files-guide/create-google-cloud-storage-remote-source" target="_self" rel="noopener noreferrer">Create a Google Cloud Storage Remote Source</A></LI></UL><P>Although the process for creating a remote source differs, as mentioned above, the method to create a virtual table is the same as with <A href="https://help.sap.com/docs/hana-cloud-data-lake/user-guide-for-data-lake-files/sap-hana-cloud-data-lake-administration-for-data-lake-files" target="_self" rel="noopener noreferrer">SAP HANA Cloud, data lake Files</A> and can be found in my previous <A href="https://community.sap.com/t5/technology-blogs-by-sap/unlocking-the-true-potential-of-data-in-files-with-sap-hana-database-sql-on/ba-p/13861585" target="_self">blog</A>.</P><P>&nbsp;</P><H3 id="toc-hId-1640940741">Accessing external Delta Sharing</H3><P>Also, starting from SAP HANA Cloud QRC 04/2024, SAP HANA Database SQL on Files has started to support external <A href="https://delta.io/sharing/" target="_self" rel="nofollow noopener noreferrer">Delta Sharing</A>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Delta Sharing.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/250830i17284B42D67E7FA8/image-size/large?v=v2&amp;px=999" role="button" title="Delta Sharing.png" alt="Delta Sharing.png" /></span></P><P>This means you can create a remote source to Databricks Delta Sharing and also create read-only virtual tables by pointing to the tables exposed via the Delta Sharing protocol.</P><pre class="lia-code-sample language-sql"><code>-- create a remote source to Delta Sharing
CREATE REMOTE SOURCE &lt;remote_source_name&gt; ADAPTER
"deltasharing" CONFIGURATION ' provider=databricks; endpoint=&lt;endpoint in config.share&gt;;' WITH CREDENTIAL TYPE 'OAUTH' USING 'access_token=&lt;bearerToken in config.share&gt;';</code></pre><P>As described, a dedicated <EM>deltasharing</EM> adapter is used with the parameter <EM>provider=databricks</EM>, instead of the <EM>file</EM> adapter.</P><P>Similar to external object storage access, the remote source to Databricks Delta Sharing can also be created via the SAP HANA Database Explorer UI.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Delta Sharing Remote Source.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/250831iDE73EC7844851A65/image-size/large?v=v2&amp;px=999" role="button" title="Delta Sharing Remote Source.png" alt="Delta Sharing Remote Source.png" /></span></P><P>For this read-only access, Delta tables must be added to the Delta Sharing of Databricks. By creating a new recipient with a token and token lifetime, the credential (<EM>config.share</EM>) should be downloaded. Please also note that, unlike direct file access in SQL on Files, the method of creating virtual tables follows the same approach as with other SAP HANA smart data access (a.k.a., SDA) adapters.</P><pre class="lia-code-sample language-sql"><code>-- create a virtual table CREATE VIRTUAL TABLE VT_TAB AT &lt;remote_source_name&gt;.&lt;share_name&gt;.&lt;schema_name&gt;.&lt;table_name&gt;;</code></pre><P>Although Delta tables in files can be directly accessed with the <EM>file</EM> adapter, support for the Delta Sharing protocol via a dedicated <EM>deltasharing</EM> adapter can offer more flexibility if Delta Sharing is already in use and there is a requirement to use it as a unified access protocol within your organization.</P><P>Please also refer to the link below for further details.</P><UL><LI>SAP HANA Cloud, SAP HANA Database SQL on Files Guide: <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-on-files-guide/create-databricks-delta-sharing-remote-source" target="_self" rel="noopener noreferrer">Create a Databricks Delta Sharing Remote Source</A></LI></UL><P>&nbsp;</P><H3 id="toc-hId-1444427236">Two new options for CSV format</H3><P>For better usability and reduced effort, two new options were introduced with SAP HANA Cloud QRC 04/2024.<BR />One is SKIP FIRST N ROWS, which literally skips the first N number of rows when creating a virtual table by pointing to CSV files. 
Obviously, if the number of rows is less than N, an error will be thrown.</P><pre class="lia-code-sample language-sql"><code>-- skip the first 5 rows CREATE VIRTUAL TABLE TEST_SCHEMA.TAB1 ( A INT, B INT ) AT TEST_REMOTE."/path/to/csv/files1" AS CSV SKIP FIRST 5 ROW;</code></pre><P>This is useful if there are multiple header rows, or if you intentionally ignore certain rows for any business reasons.</P><P>Another option is COLUMN LIST IN FIRST ROW, which is similar to the above option but can be used to more precisely recognize the first row as the header and ensure the match between virtual table field names and the field names in the file header.</P><pre class="lia-code-sample language-sql"><code>-- recognize the first row as the header CREATE VIRTUAL TABLE TEST_SCHEMA.TAB2 ( A INT, B INT ) AT TEST_REMOTE."/path/to/csv/files2" AS CSV COLUMN LIST IN FIRST ROW;</code></pre><P>In this case, since the case sensitivity of field names is also compared, if the field name of the virtual table does not match the field name in the file header, an error is thrown.</P><P>In addition, the <EM>VIRTUAL_TABLE_FILES</EM> system view has been extended to include these two new configuration values. Please also refer to the link below for further details.</P><UL><LI>SAP HANA Cloud, SAP HANA Database SQL on Files Guide: <A href="https://help.sap.com/docs/hana-cloud-database/sap-hana-cloud-sap-hana-database-sql-on-files-guide/create-virtual-table" target="_self" rel="noopener noreferrer">Create a Virtual Table</A></LI></UL><P>&nbsp;</P><H3 id="toc-hId-1247913731">Latest reader version support for Delta table</H3><P>As briefly mentioned above, before QRC 01/2025, if the reader version of the Delta table was higher than 1, a virtual table could not be created. In most use cases, supporting reader version 1 was sufficient because, when converting Parquet files into Delta tables, the lowest possible reader version is generally chosen.</P><P>Nevertheless, if a specific data type like <EM>TimestampNtz</EM> (<EM>Timestamp</EM> without timezone) is included in your data or specific Delta table features like the deletion vector are enabled, a higher reader version like 3 is required, and SQL on Files could not read these before.</P><P>However, starting with SAP HANA Cloud QRC 01/2025, SQL on Files has begun supporting reader versions 2 and 3, meaning the existing limitation has now been fully lifted.</P><P>&nbsp;</P><H3 id="toc-hId-1051400226">Conclusion</H3><P>In conclusion, the enhanced capabilities of SAP HANA Database SQL on Files in SAP HANA Cloud provide more flexible and efficient ways to manage your data. With the ability to access vast volumes of data from well-managed object storages, you can fully leverage the robust features of the SAP HANA Cloud as a platform. This includes in-memory processing, elasticity, the Predictive Analysis Library (PAL), machine learning capabilities, and more.</P><P>Furthermore, these advancements facilitate seamless data integration with SAP applications like S/4HANA, empowering you to derive even greater value by combining your object storage data with data from SAP applications. 
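</P><P>To make that concrete: since SQL on Files virtual tables behave like regular read-only tables, they can be joined directly with local column tables in a single statement. A hypothetical sketch (all object names invented for illustration):</P><pre class="lia-code-sample language-sql"><code>-- Join file-backed data with a local SAP HANA table in one query
-- (all schema, table, and column names are hypothetical).
SELECT f.PRODUCT_ID,
       MAX(m.PRODUCT_NAME) AS PRODUCT_NAME,
       SUM(f.AMOUNT)       AS FILE_REVENUE
  FROM FILES_SCHEMA.SALES_FROM_FILES AS f   -- SQL on Files virtual table
  JOIN LOCAL_SCHEMA.PRODUCT_MASTER   AS m   -- regular column table
    ON m.PRODUCT_ID = f.PRODUCT_ID
 GROUP BY f.PRODUCT_ID;</code></pre><P>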
If you're interested in reading data from SAP applications like S/4HANA in SAP HANA Cloud, please refer to my blog, <A href="https://community.sap.com/t5/technology-blogs-by-sap/taking-data-federation-to-the-next-level-accessing-remote-abap-cds-view/ba-p/13635034" target="_self">Taking Data Federation to the Next Level: Accessing Remote ABAP CDS View Entities in SAP HANA Cloud</A>.</P><P>As we remain committed to innovation, stay tuned for upcoming updates that will continue to expand and enrich your SAP HANA Cloud experience.</P> 2025-04-15T10:42:07.034000+02:00 https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-members/terminate-long-logged-in-users-via-abap-program/ba-p/14073375 Terminate Long-Logged-In Users via ABAP Program 2025-04-16T10:37:47.230000+02:00 snekha https://community.sap.com/t5/user/viewprofilepage/user-id/874350 <P class="lia-align-center" style="text-align: center;"><STRONG>How to Automatically Terminate Long-Logged-In Users via ABAP Program</STRONG></P><P><STRONG>Introduction:</STRONG></P><P>In this blog post, we'll explore how you can implement an ABAP solution to scan user sessions and terminate those that have been active for more than one hour. Although transaction SM04 provides the option to log off users manually, doing this automatically through ABAP programming adds efficiency.</P><P><STRONG>Step 1</STRONG>: Fetch the data from transaction <STRONG>SM04</STRONG>, which shows the details of users and their sessions, by submitting the standard program <STRONG>RSM04000_ALV_NEW</STRONG>. <STRONG>CL_SALV_BS_RUNTIME_INFO=&gt;SET</STRONG> is used to grab the ALV output.</P><P><STRONG>Note:</STRONG> If this class method is not called, you can't fetch the data through the submitted program.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="snekha_0-1744363258321.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/249681i0A729C4802C8D9F3/image-size/medium?v=v2&amp;px=400" role="button" title="snekha_0-1744363258321.png" alt="snekha_0-1744363258321.png" /></span></P><P><STRONG>Step 2:</STRONG> Retrieve the data using <STRONG>CL_SALV_BS_RUNTIME_INFO=&gt;GET_DATA_REF</STRONG> and move it to an internal table as shown in the image below.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="snekha_1-1744363258323.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/249682i1B1922D421A1E8CD/image-size/medium?v=v2&amp;px=400" role="button" title="snekha_1-1744363258323.png" alt="snekha_1-1744363258323.png" /></span></P><P>Declare an internal table with only the fields used in the subroutine (PERFORM).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="snekha_2-1744363258324.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/249684iED37EC4A84ACDDD2/image-size/medium?v=v2&amp;px=400" role="button" title="snekha_2-1744363258324.png" alt="snekha_2-1744363258324.png" /></span></P><P><STRONG>Step 3:</STRONG> Get the user logon time from table <STRONG>USR02</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="snekha_3-1744363258327.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/249685i973A6C309ABA1038/image-size/medium?v=v2&amp;px=400" role="button" title="snekha_3-1744363258327.png" alt="snekha_3-1744363258327.png" /></span></P><P><STRONG>Step 4:</STRONG> Loop over the data fetched from SM04 and read the corresponding user logon record retrieved in Step 3. Compare the system time with the logon time; if a user has been logged in for more than an hour, perform the delete_user routine, which terminates that user's session.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="snekha_4-1744363258329.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/249686i4C20F20F3FD3338A/image-size/medium?v=v2&amp;px=400" role="button" title="snekha_4-1744363258329.png" alt="snekha_4-1744363258329.png" /></span></P><P><STRONG>Log off the users through SM04:</STRONG></P><P><STRONG>Step 1: </STRONG>Go to transaction SM04.</P><P>Data of users:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="snekha_5-1744363258345.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/249687iDA746D5B466AB369/image-size/medium?v=v2&amp;px=400" role="button" title="snekha_5-1744363258345.png" alt="snekha_5-1744363258345.png" /></span></P><P><STRONG>Step 2:</STRONG> Select the user that needs to be logged off and proceed as shown in the image.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="snekha_6-1744363258358.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/249689iA4A4DBD330471C02/image-size/medium?v=v2&amp;px=400" role="button" title="snekha_6-1744363258358.png" alt="snekha_6-1744363258358.png" /></span></P><P><STRONG>Step 3:</STRONG> If you want to log off the user, click YES.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="snekha_7-1744363258368.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/249688i471B8715C75BE173/image-size/medium?v=v2&amp;px=400" role="button" title="snekha_7-1744363258368.png" alt="snekha_7-1744363258368.png" /></span></P><P>This is the manual process to log off users.</P><P><STRONG>Conclusion</STRONG></P><P>By developing a custom ABAP report, we can identify user sessions that have been active for <STRONG>more than one hour</STRONG> and terminate them automatically. To make this solution fully automated, the ABAP report can be <STRONG>scheduled as a background job</STRONG> to run <STRONG>at regular intervals</STRONG>, for example, once every hour.
This scheduled job will routinely scan for long-running sessions and take appropriate action without any manual oversight, making it a proactive approach to session management in SAP.</P> 2025-04-16T10:37:47.230000+02:00 https://community.sap.com/t5/supply-chain-management-blog-posts-by-members/evolution-of-pricing-in-sap/ba-p/14076687 Evolution of Pricing in SAP 2025-04-16T11:29:09.793000+02:00 pranavgarud https://community.sap.com/t5/user/viewprofilepage/user-id/887754 <P>In this blog, we will discuss the pricing concept in SAP and its evolution from ECC to S/4HANA, along with key benefits.</P><P><STRONG>The 16 ECC pricing procedure fields in detail are as follows:</STRONG></P><UL><LI>Step: Places the condition type in a sequence.</LI><LI>Counter: Allows adding one more condition type when there is no free step number between two steps.</LI><LI>Condition type: Specifies the type of the pricing element.</LI><LI>Description: Describes the condition type in sales document pricing.</LI><LI>From: Helps to determine the base value for calculating the condition type value in sales document pricing.</LI><LI>To: Used to cumulate the value of multiple steps that are in a sequence.</LI><LI>Manual: If checked, the system will not determine the condition type automatically in the sales document.</LI><LI>Required: If checked, the system will not allow saving the sales document if the condition type is missing.</LI><LI>Statistical: If checked, it has two effects: the value does not affect the net value, and it is not posted to accounting.</LI><LI>Print: Controls whether the condition type is printed on the output.</LI><LI>Subtotal: Stores the value of the condition type in temporary tables and fields for further calculations.</LI><LI>Requirement (routine): The system checks this routine while determining the condition type; only if the requirement is fulfilled is the condition type determined in the sales document.</LI><LI>Alternate calculation type (formula): Used when the calculation logic of the condition type is not standard.</LI><LI>Alternate base type (formula): Used when the base value of the condition type is not standard.</LI><LI>Account key: A parameter that determines the revenue G/L account when posting invoice values to accounting.</LI><LI>Accrual key: Sets an amount aside from each transaction into a provisional account to meet later requirements such as rebate settlement.</LI></UL><P><STRONG>The SAP S/4HANA pricing procedure has expanded from 16 to 18 fields, as follows:</STRONG></P><P><STRONG>1. Statistical and Relevant for Account Determination:</STRONG></P><P><STRONG>Statistical:</STRONG> It does not impact the net price but is recorded for reporting purposes.</P><P><STRONG>Relevant for Account Determination: </STRONG>It still flows into financial accounting, even though it's statistical.
We can use this field indicator to define that the statistical price condition is posted to account-based Profitability Analysis (CO-PA) as a journal entry to an extension ledger of Financial Accounting.</P><P>Example: Imagine you offer free delivery for orders above 200 rupees. Previously, this delivery cost may have been ignored in financial postings because it was marked as statistical.</P><P>Now, you can post it to an extension ledger in CO-PA, giving you a clear picture of how much free delivery impacts your bottom line.</P><P><STRONG>Scenario:</STRONG> In this scenario, the tax is displayed and posted, but has no impact on the net price:</P><TABLE width="607"><TBODY><TR><TD width="90"><P><STRONG>Condition</STRONG></P></TD><TD width="102"><P><STRONG>Statistical</STRONG></P></TD><TD width="160"><P><STRONG>Account Relevant</STRONG></P></TD><TD width="78"><P><STRONG>Impact on Net Price</STRONG></P></TD><TD width="78"><P><STRONG>G/L Posting</STRONG></P></TD><TD width="99"><P><STRONG>Use Case</STRONG></P></TD></TR><TR><TD width="90"><P>ZXYZ - Tax</P></TD><TD width="102"><P><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Yes</P></TD><TD width="160"><P><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Yes</P></TD><TD width="78"><P><span class="lia-unicode-emoji" title=":cross_mark:">❌</span> No</P></TD><TD width="78"><P><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Yes</P></TD><TD width="99"><P>Legal tax display + accounting</P></TD></TR></TBODY></TABLE><P>This shows the tax ZXYZ on the invoice; it will be posted to a special liability G/L account and will not impact the product net price (i.e., the customer doesn't pay it directly).</P><P>In the system:</P><P><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Visible on sales order and invoice as a separate line</P><P><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span> Posted to G/L account (e.g., Environmental Provisions) using Account Key ZEV</P><P><span class="lia-unicode-emoji" title=":cross_mark:">❌</span> Does not affect the net selling price (because it's statistical)</P><P><STRONG>2.&nbsp;</STRONG><STRONG>Access level (New in 2023): </STRONG>Access levels are used to control your business users' access to price elements in business documents.</P><P>Access levels consist of a four-digit number and a description; you can use all numbers from 0001 to 9999.</P><P>The default value 0000 grants full access to all users.</P><P>For example, the profit margin in a sales order can be displayed only by a sales manager.</P><P><STRONG>Business Need:</STRONG> Can we hide certain price details (like profit margin or internal cost) from some users in sales documents or invoices?</P><P>Solution: SAP S/4HANA introduces role-based access levels for price elements. This controls who can view or edit specific price conditions (e.g., profit margin) in sales orders, returns, or invoices, either at header or item level.
You can set the access as no access, display only, or editable, based on the user's role.</P><P>SAP's recommendation: the more sensitive the price element, the higher the number of its access level should be.<BR /><BR /></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pranavgarud_0-1744722365794.png" style="width: 474px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251005i7E27AFE5EFB22042/image-dimensions/474x294?v=v2" width="474" height="294" role="button" title="pranavgarud_0-1744722365794.png" alt="pranavgarud_0-1744722365794.png" /></span></P><P><STRONG>Benefits of S/4HANA Pricing: </STRONG></P><UL><LI>Improved performance: pricing is stored in the single table PRCD_ELEMENTS, which speeds up sales order and billing document creation.</LI><LI>Fewer errors, as multiple condition tables are replaced by one table.</LI><LI>Enhanced user experience with Fiori apps.</LI><LI>Support for real-time profitability analysis.</LI><LI>Enhanced customization: an increased range of digits allows more access sequences, condition tables, formulas/routines, and condition type counters.</LI></UL><P><STRONG>Key Components of the Pricing Procedure:</STRONG></P><UL><LI>Condition Records: Store the pricing information (e.g., prices, surcharges, taxes, discounts).</LI><LI>Condition Types: Define the types of pricing components, such as PR00 (base price) or K004 (material discount). Each condition type represents a particular pricing element.</LI><LI>Pricing Procedures: A pricing procedure is a collection of condition types arranged in a specific sequence; it defines the rules and logic for pricing, i.e., how conditions are calculated and whether they are mandatory, optional, or statistical.</LI><LI>Access Sequences: Define the search strategy for finding valid condition records.</LI><LI>Condition Tables: Specify the combination of fields used to maintain condition records (e.g., customer and material, sales organization).</LI><LI>Master Data Integration: Pricing leverages data from the customer master and the material master.</LI></UL><P><STRONG>Difference between ECC pricing &amp; S/4HANA pricing:</STRONG></P><TABLE width="751"><TBODY><TR><TD width="200"><P><STRONG>Feature</STRONG></P></TD><TD width="252"><P><STRONG>ECC Pricing</STRONG></P></TD><TD width="299"><P><STRONG>S/4HANA Pricing</STRONG></P></TD></TR><TR><TD width="200"><P>Tables Used</P></TD><TD width="252"><P>Uses multiple tables: KONH, KONP, KONV</P></TD><TD width="299"><P>Merged into single table: PRCD_ELEMENTS</P></TD></TR><TR><TD width="200"><P>Pricing Execution Speed</P></TD><TD width="252"><P>Slower due to multiple table joins</P></TD><TD width="299"><P>Faster due to simplified data model (single table)</P></TD></TR><TR><TD width="200"><P>Pricing Procedure Fields</P></TD><TD width="252"><P>16 fields</P></TD><TD width="299"><P>18 fields</P></TD></TR><TR><TD width="200"><P>Memory Consumption</P></TD><TD width="252"><P>Higher (due to table redundancy)</P></TD><TD width="299"><P>Lower (HANA in-memory capabilities)</P></TD></TR><TR><TD width="200"><P>Customer Pricing Procedure Length</P></TD><TD width="252"><P>1 digit</P></TD><TD width="299"><P>1 to 2 digits</P></TD></TR><TR><TD width="200"><P>Document Pricing Procedure Length</P></TD><TD width="252"><P>1 digit</P></TD><TD width="299"><P>1 to 2 digits</P></TD></TR><TR><TD width="200"><P>Access Sequence Capacity</P></TD><TD width="252"><P>Max 99 condition tables</P></TD><TD width="299"><P>Max 999 condition tables</P></TD></TR><TR>
width="200"><P>Condition Table Range</P></TD><TD width="252"><P>Table numbers: 1–999 only</P></TD><TD width="299"><P>Table numbers: 1–999 + 483 additional tables</P></TD></TR><TR><TD width="200"><P>Condition Type Counter</P></TD><TD width="252"><P>2 digits</P></TD><TD width="299"><P>2 to 3 digits</P></TD></TR><TR><TD width="200"><P>Formulas/Routines</P></TD><TD width="252"><P>2 to 3 digits (e.g., 123)</P></TD><TD width="299"><P>3 to 7 digits (e.g., 1234567)</P></TD></TR></TBODY></TABLE><P><STRONG>Conclusion: </STRONG>The evolution from ECC to S/4HANA in pricing procedure is not just as technical upgrade but it is transformation that brings greater flexibility, transparency, improved performance, time saving and error free. with the expansion from 16 to 18 pricing fields, the system now supports more accurate financial reporting and role-based access control through features like statistical posting to CO-PA and Access Levels.</P><P>&nbsp;</P> 2025-04-16T11:29:09.793000+02:00 https://community.sap.com/t5/application-development-and-automation-blog-posts/sap-developer-news-april-17th-2025/ba-p/14079338 SAP Developer News April 17th, 2025 2025-04-17T21:10:00.033000+02:00 Eberenwaobiora https://community.sap.com/t5/user/viewprofilepage/user-id/1937986 <P><div class="video-embed-center video-embed"><iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F04Xcj789Gaw%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D04Xcj789Gaw&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F04Xcj789Gaw%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="400" height="225" scrolling="no" title="Removal of Themes, SQL on Files, BTP Root Certificate, Integration Migration | SAP Developer News" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe></div></P><H3 id="toc-hId-1837544616"><STRONG>DESCRIPTION</STRONG></H3><P><STRONG>Podcast</STRONG>: <A href="https://podcast.opensap.info/sap-developers/2025/04/17/sap-developer-news-april-17th-2025/" target="_blank" rel="noopener nofollow noreferrer">https://podcast.opensap.info/sap-developers/2025/04/17/sap-developer-news-april-17th-2025/</A></P><P><STRONG>Upcoming Removal of SAP Fiori Themes and UI5 Renovate Preset Config Tool</STRONG></P><UL><LI>SAP Fiori Belize Theme Removal Announcement: <SPAN><A href="https://community.sap.com/t5/technology-blogs-by-sap/announcement-removal-of-belize-theme-of-sap-fiori/ba-p/14061924" target="_blank">https://community.sap.com/t5/technology-blogs-by-sap/announcement-removal-of-belize-theme-of-sap-fiori/ba-p/14061924</A></SPAN></LI><LI>Upcoming Removal of SAP Fiori Themes Belize and Blue Crystal: <SPAN><A href="https://community.sap.com/t5/technology-blogs-by-sap/upcoming-removal-of-sap-fiori-themes-belize-and-blue-crystal/ba-p/14063111" target="_blank">https://community.sap.com/t5/technology-blogs-by-sap/upcoming-removal-of-sap-fiori-themes-belize-and-blue-crystal/ba-p/14063111</A></SPAN></LI><LI>UI5 Renovate Preset Config Tool: <SPAN><A href="https://community.sap.com/t5/technology-blogs-by-sap/stay-up-to-date-with-the-ui5-renovate-preset-config/ba-p/14070649" target="_blank">https://community.sap.com/t5/technology-blogs-by-sap/stay-up-to-date-with-the-ui5-renovate-preset-config/ba-p/14070649</A></SPAN></LI></UL><P><STRONG>New SQL on Files Use Cases with SAP HANA Cloud QRC 01</STRONG></P><UL><LI>Exploring New SQL on Files Use Cases with SAP HANA Cloud QRC 
04/2024 and QRC 01/2025: <SPAN><A href="https://community.sap.com/t5/technology-blogs-by-sap/exploring-new-sql-on-files-use-cases-with-sap-hana-cloud-qrc-04-2024-and/ba-p/14076223" target="_blank">https://community.sap.com/t5/technology-blogs-by-sap/exploring-new-sql-on-files-use-cases-with-sap-hana-cloud-qrc-04-2024-and/ba-p/14076223</A></SPAN></LI><LI>Tutorials to Explore SAP HANA Cloud, SAP HANA Database SQL on Files: <SPAN><A href="https://developers.sap.com/tutorials/hana-dbx-sof.html" target="_blank" rel="noopener noreferrer">https://developers.sap.com/tutorials/hana-dbx-sof.html</A></SPAN></LI></UL><P><STRONG>SAP BTP Cloud Foundry: switching to higher security level Root Certificate Authority</STRONG></P><UL><LI>Blog post <SPAN><A href="https://community.sap.com/t5/technology-blogs-by-sap/sap-btp-cloud-foundry-switching-to-higher-security-level-root-certificate/ba-p/14061965" target="_blank">https://community.sap.com/t5/technology-blogs-by-sap/sap-btp-cloud-foundry-switching-to-higher-security-level-root-certificate/ba-p/14061965</A></SPAN></LI></UL><P><STRONG>Migration Tooling now supporting migration via the pipeline approach </STRONG></P><UL><LI><SPAN>Blog post </SPAN><SPAN><A href="https://community.sap.com/t5/application-development-and-automation-blog-posts/introducing-the-new-project-creation-wizard-within-sap-build-lobby/ba-p/14079038" target="_blank">https://community.sap.com/t5/application-development-and-automation-blog-posts/introducing-the-new-project-creation-wizard-within-sap-build-lobby/ba-p/14079038</A></SPAN></LI></UL><P>&nbsp;</P><H3 id="toc-hId-1641031111"><STRONG>CHAPTER TITLES</STRONG></H3><P>0:00 Intro</P><P>0:10 Upcoming Removal of SAP Fiori Themes Belize and Blue Crystal</P><P>0:53 UI5 Renovate Preset Config Tool</P><P>1:41 New SQL on Files Use Cases with SAP HANA Cloud QRC 01</P><P>2:59 SAP BTP Cloud Foundry: switching to higher security level Root Certificate Authority</P><P>3:46 <SPAN>Migration Tooling now supporting migration via the pipeline approach</SPAN></P><P>&nbsp;</P><H3 id="toc-hId-1444517606"><STRONG>TRANSCRIPTION</STRONG></H3><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Intro]</STRONG> This is the SAP Developer News for April 17th, 2025.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Michelle]</STRONG> SAP has announced the removal of the Belize and Blue Crystal themes from its Fiori user interface, starting in the latter half of 2025. Belize, along with Blue Crystal, will be phased out to make way for more modern themes like Quartz and Horizon. These new themes promise enhanced design and usability, giving users a more streamlined interface. SAP urges users to transition to these new themes to benefit from current and future improvements. By embracing Quartz and Horizon, organizations can ensure they remain aligned with SAP's innovation focus. If you want to learn more about the removal of these themes and how to prepare for it, check out the links below. In other news, SAP has introduced the UI5 Renovate Preset Config, a tool designed to keep SAP UI5 projects effortlessly up to date. It automates the dependency update process, allowing developers and organizations to remain current with the latest framework improvements and security patches. The UI5 Renovate Preset Config minimizes manual intervention, saving time while improving application stability. This tool is an efficient and straightforward solution for developers eager to enhance project reliability with regular updates!
For more information, check the link below to the blog post about this new tool. As always, thanks for watching and happy coding!</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Witalij]</STRONG> The SQL on Files capability in SAP HANA Cloud has unlocked new potential by providing direct read-only SQL access to files stored in SAP HANA Cloud, data lake files since QRC 03/2024. Starting with QRC 04/2024 of SAP HANA Cloud, this capability has been further extended to support direct access to the most common external object storages in addition to SAP HANA Cloud-owned data lake files, namely Amazon S3, Azure Blob Storage or ADLS Gen2, and Google Cloud Storage. With the ability to access vast volumes of data from managed object storage, you can fully leverage the robust features of SAP HANA Cloud as a platform. This includes in-memory processing, machine learning capabilities, the Predictive Analysis Library, and so forth. Another piece of good news is that the existing limitations on delta tables, which previously supported only Reader version 1 for both direct file access and delta sharing, have been lifted starting with SAP HANA Cloud QRC 01 this year. For hands-on exercises, check the updated tutorials at Explore SAP HANA Cloud, SAP HANA Database SQL on Files.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>[Mamikee]</STRONG> The migration tooling from SAP Process Orchestration to SAP Integration Suite now supports automatic creation of integration artifacts supporting the pipeline concept for Cloud Integration. This update simplifies the switch from SAP Process Orchestration by using a pattern-based method that automatically generates integration artifacts and makes your integration flows cleaner, smarter, and way easier to manage. The pipeline model offers better error handling, fewer queues to juggle, and reusable components that save you time. No more struggling with rigid templates. In fact, it identifies supported patterns, making more scenarios eligible for migration. To learn more about this, check the link in the description below.</P><P class="lia-align-justify" style="text-align : justify;"><STRONG>[DJ]</STRONG> If you're a user of the SAP Business Technology Platform, whether using applications or services offered by SAP or by others, this is an important item, especially if you manage trust certificates yourself. There's a change to the root certificate authority, CA, used to issue certificates on SAP domains for Cloud Foundry, basically the TLS handshake between your browser and the server. This is for environments on BTP. This change is due to a recommendation from the Bundesamt für Sicherheit in der Informationstechnik, that's the German Federal Office for Information Security. In essence, there's an old G2 certificate, the DigiCert Global Root CA, and that's being replaced by a newer DigiCert TLS RSA 4096 Root G5. You can manage Root Certificate Authority certificates yourself, or, and this is what we recommend because it makes it a lot easier, you can use the BTP Trust Store. In any case, you must make sure that your trust store does contain this new G5 certificate, especially if you do manage the trust store yourself.
Anyway, for all the details, check out the blog post that's linked in the description.</P> 2025-04-17T21:10:00.033000+02:00 https://community.sap.com/t5/madrid-blog-posts/getting-started-with-generative-ai-hub-on-sap-ai-core-codejam/ba-p/14082066 🇪🇸 Getting started with Generative AI Hub on SAP AI Core CodeJam 2025-04-21T09:33:21.102000+02:00 ajmaradiaga https://community.sap.com/t5/user/viewprofilepage/user-id/107 <P>On the 30th of May, there will be an&nbsp;<A href="https://groups.community.sap.com/t5/sap-codejam/gh-p/code-jam" target="_self" rel="noopener noreferrer">SAP CodeJam</A><SPAN>&nbsp;</SPAN>event on the topic of<STRONG><SPAN>&nbsp;</SPAN>Getting started with Generative AI Hub on SAP AI Core</STRONG><SPAN>&nbsp;and Python</SPAN>. You do not need to have any background in the topic, but a lot of curiosity!</P><H3 id="toc-hId-1838256808"><EM><STRONG>RSVP Here</STRONG> -&gt; <A href="https://community.sap.com/t5/sap-codejam/getting-started-with-generative-ai-hub-on-sap-ai-core/ec-p/14082043#M820" target="_blank">https://community.sap.com/t5/sap-codejam/getting-started-with-generative-ai-hub-on-sap-ai-core/ec-p/14082043#M820</A></EM></H3><P><SPAN>In this CodeJam you will learn how to use Generative AI Hub on SAP AI Core to implement a retrieval augmented generation (RAG) use case to improve the responses of large language models (LLMs) and reduce hallucinations. You will learn how to deploy an LLM and the orchestration service on SAP AI Core and query it via SAP AI Launchpad and the Python SDK. Furthermore, you will learn about the most important genAI concepts and create and use embeddings to improve your RAG response and build your own chatbot with memory.</SPAN></P><P>This is an<SPAN>&nbsp;</SPAN><STRONG>in-person event</STRONG>&nbsp;<STRONG>only</STRONG><SPAN>&nbsp;</SPAN>(not virtual) and is planned for <STRONG>Friday, May 30th, 2025 at the<SPAN>&nbsp;</SPAN><A href="https://maps.app.goo.gl/PVV5hE2sD4gM6xxS7" target="_self" rel="nofollow noopener noreferrer">SAP office in Madrid</A><SPAN><SPAN class="lia-unicode-emoji">&nbsp;<span class="lia-unicode-emoji" title=":spain:">🇪🇸</span></SPAN></SPAN></STRONG><SPAN>&nbsp;</SPAN>from 09:30 - 14:30 local time. The language of the event and the content will be English.</P> 2025-04-21T09:33:21.102000+02:00 https://community.sap.com/t5/artificial-intelligence-and-machine-learning-blogs/linux-ai-agent-automating-sap-hana-installation/ba-p/14079103 Linux AI Agent: Automating SAP HANA Installation 2025-04-22T10:53:02.548000+02:00 L_Skorwider https://community.sap.com/t5/user/viewprofilepage/user-id/172246 <H1 id="toc-hId-1579377158">The New SAP AI Agent</H1><P>I created an SAP AI agent. Again.&nbsp;This time, I challenged myself with an operating system-level task and specifically came up with the idea of installing the SAP HANA database using an AI Agent.&nbsp;</P><P>If you're already familiar with installing SAP HANA, you know it involves a lot of hands-on work in the terminal. Sure, you could use a graphical interface or automation tools, but for now, think of this as a proof of concept to show what this agent can do. 
We're aiming for a versatile tool here, not something that's only good at SAP HANA installation, but something that can handle a wide range of applications and tasks right from the Linux terminal.</P><H1 id="toc-hId-1382863653"><div class="video-embed-center video-embed"><iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FSYZWSoxvKAw%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DSYZWSoxvKAw&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FSYZWSoxvKAw%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="200" height="112" scrolling="no" title="Linux AI Agent: SAP HANA Installation" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe></div></H1><H1 id="toc-hId-1186350148"><BR />The Linux Agent</H1><P>For various tasks, we already know that being able to work in a terminal like a human is more effective than just executing commands, even though the latter might be easier to implement.&nbsp;Don't get me wrong - I've always said that while using human-like interfaces might not be the most efficient way for artificial intelligence to function, it's still essential because not everything can be done differently.</P><P>When working in a Linux terminal, the main tools you rely on are your eyes to see what's happening and your fingers to quickly press the keys.&nbsp;And actually, agents can be equipped with the same tools and expected to work in a manner similar to humans.&nbsp;I've equipped the agent with three such tools. Why three and not two? To keep things simple:</P><UL><LI>We have a tool that lets the agent input plain text. It's pretty straightforward and obvious, so there's not much to delve into here.</LI><LI>The second tool may be slightly less obvious and is used for handling special keys and their combinations, for example Backspace, Delete, Print Screen, Escape, Ctrl-C, Ctrl-V, and others. These are clearly not plain text, and the Esc key, for instance, is invaluable when using vi, and not only there.</LI><LI>And finally, we have a tool to preview the current contents of the terminal window, the obvious choice during long-running operations, when the agent is not expected to press any keys.</LI></UL>
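<P>To make this concrete, here is a minimal sketch of what such a trio of tools could look like in Python. It assumes the pexpect library for driving a shell and pyte for emulating the screen; the function names, key map, and screen size are illustrative, not the actual implementation behind the video.</P><pre class="lia-code-sample language-python"><code>import pexpect
import pyte

# A bash session plus an in-memory terminal emulator that tracks
# what a human would currently see on the screen.
child = pexpect.spawn("/bin/bash", dimensions=(24, 80))
screen = pyte.Screen(80, 24)
stream = pyte.Stream(screen)

SPECIAL_KEYS = {"enter": "\r", "escape": "\x1b", "tab": "\t",
                "backspace": "\x7f", "ctrl-c": "\x03"}

def drain():
    # Feed any pending terminal output into the screen emulator
    while True:
        try:
            data = child.read_nonblocking(size=4096, timeout=0.2)
        except (pexpect.TIMEOUT, pexpect.EOF):
            break
        stream.feed(data.decode("utf-8", errors="replace"))

def type_text(text):
    # Tool 1: type plain text into the terminal
    child.send(text)
    drain()

def press_key(key):
    # Tool 2: press a special key or combination, e.g. "escape" or "ctrl-c"
    child.send(SPECIAL_KEYS[key])
    drain()

def read_screen():
    # Tool 3: return the current contents of the terminal window
    drain()
    return "\n".join(screen.display)</code></pre><P>Registered with an agent framework or exposed via an MCP server, these three functions are essentially all the agent needs to work in the terminal the way a human would.</P>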
<P>&nbsp;</P><H1 id="toc-hId-989836643">OpenAI Agents SDK</H1><P>In contrast to my <A href="https://community.sap.com/t5/artificial-intelligence-and-machine-learning-blogs/sap-gui-ai-agent-architecture-amp-technical-details/ba-p/14032043" target="_self">SAP GUI AI Agent</A>, where LangChain was used, this time I opted for the OpenAI Agents SDK framework.&nbsp;Why this one and not LangChain? The answer is simple. In such a dynamically changing industry, you have to try different solutions. And since the OpenAI framework is universal and doesn't tie me to models from just this company, I decided to try it out. I conducted my initial tests specifically on the DeepSeek V3 model.&nbsp;Of course, when choosing other models, we lose some functionality, but the most important part is available.</P><P>&nbsp;</P><H1 id="toc-hId-793323138">Model Context Protocol</H1><P>Lately, there's been quite a buzz about MCP, and for good reason - it's truly revolutionary. The power of agents comes from their tools, but what makes MCP stand out is its standardization and reusability. By designing an MCP server to work with a Linux terminal, I know I'll be able to use it later with another agent or even a regular chat client.&nbsp;Even the OpenAI desktop application will soon support MCP servers.</P><P>I'm confident that MCP will have a world-changing impact. We can use MCP servers for a range of tasks without always depending on external agents, just by adding tools to our favorite client. Keep in mind, MCP isn't just about tools; it also includes prompts and other resources. It's clear that someone designed it with the future in mind, potentially to replace whole external agents.&nbsp;I get the feeling that developing well-designed MCP servers will become more critical than creating agents. That's why it's essential to get to know this technology now, and that's why I opted for MCP.</P><P>&nbsp;</P><H1 id="toc-hId-596809633">The LLM</H1><P>Although the OpenAI Agents SDK allows me to use virtually any model, I decided to test using GPT-4.1. This model was unveiled to the world just a few days ago. It is, in a sense, the successor to GPT-4o, but it is better at understanding commands and using tools, which is critical for agent applications.</P><P>Since my agent is an experimental solution, the choice of a new model was quite obvious. However, we should be aware that it is not a reasoning model and does not belong to the top tier of the best LLMs. It is, however, economical and very fast, which is evident in the agent's operation.</P><P>&nbsp;</P><H1 id="toc-hId-400296128">Hardware Environment</H1><P>If you're familiar with my approach to miniPCs, it won't be a shocker that I picked this platform for testing. Sure, the cloud is super convenient and definitely the future, but running a system at home still has its perks.&nbsp;I used the Firebat AK2 with 16GB of RAM and installed Red Hat Enterprise Linux version 8.8 on it. It's free for developers and supported for SAP HANA.</P><P>&nbsp;</P><H1 id="toc-hId-203782623">Reflections on Safety</H1><P>Safety in artificial intelligence is a crucial aspect that's often overlooked, but it's definitely something we should focus on.&nbsp;Notice how easily we are relinquishing control of our network-connected computers to artificial intelligence.&nbsp;This approach carries some risk because we still don't completely understand how large language models operate. If you're curious about this topic, I strongly recommend checking out my earlier post: <A href="https://community.sap.com/t5/artificial-intelligence-and-machine-learning-blogs/beyond-the-black-box-the-illusion-of-control/ba-p/14070694" target="_self">Beyond the Black Box: The Illusion of Control</A>.</P><P>Here's the ironic part: the sheer number of MCP servers available, and how easy they are to use, can actually create problems. This happens when artificial intelligence is set up by people who don't fully understand the risks involved.</P><P>&nbsp;</P><H1 id="toc-hId-7269118">Summary</H1><P>The video I recorded as part of the agent tests is short, but be aware that these are just initial attempts. I'm surprised at how well it went the first time. While I haven't tested DeepSeek on an SAP HANA installation, only on simpler tasks, I see that GPT-4.1 handles agent tasks much better.</P><P>We're just starting out on this journey, but it's already clear how much potential there is. I see my agents as a proof of concept that shows how current models can handle a wide range of tasks they weren't specifically trained for.
This flexibility is exactly what sets the agent-based approach apart from traditional automation. Sure, automating SAP HANA installations the classic way is straightforward and efficient. In this case, however, the agent executes the task well even without prior training, which suggests it could tackle many other tasks just as well.</P><P>Where is this going to take us? We'll find out. But one thing's for sure - we're living in interesting times.</P><P>I invite you to join the discussion and feel free to ask any questions. If there's enough interest, I'd be more than happy to create a follow-up video.</P> 2025-04-22T10:53:02.548000+02:00 https://community.sap.com/t5/technology-blog-posts-by-sap/sap-datasphere-external-access-overview-apis-cli-and-sql/ba-p/14078591 SAP Datasphere External Access Overview: APIs, CLI and SQL 2025-04-22T10:55:27.647000+02:00 henri_hosang https://community.sap.com/t5/user/viewprofilepage/user-id/1395426 <H1 id="toc-hId-1579351488"><FONT color="#808080"><STRONG>Querying and Managing SAP Datasphere with Python, Postman, Open SQL and the Command Line Interface</STRONG></FONT></H1><P>&nbsp;</P><P>&nbsp;</P><H1 id="toc-hId-1382837983">Introduction</H1><P>This blog post aims to provide an overview of the different external tooling options for SAP Datasphere, as these resources are scattered across different Help pages, the Business Accelerator Hub and other community blogs. It does not aim to be exhaustive or cover every detail, but lists the different ways to perform actions in Datasphere and to create, read, update or delete objects and data using external tools like Postman, Python, the CLI or Open SQL.</P><P>Here is an overview of the topics covered in this blog post:</P><OL><LI>REST API: TLS server certificates API; Connections API; Data Sharing Cockpit API and SCIM 2.0 API for user management</LI><LI>Command Line Interface: Manage user access, spaces, modeling objects, the Data Marketplace, tasks and task chains as well as connectivity</LI><LI>OData API: Get assets from the SAP Datasphere Catalog; consume the datasets and metadata of consumable data assets.</LI><LI>Open SQL schema and ODBC/JDBC: Query the SAP HANA Cloud database with database users using SQL statements like SELECT, CREATE, UPDATE, INSERT and ALTER on tables and views.</LI></OL><P>Every agenda item is split into two sections: What? and How? The first section explains which use cases are supported by the technology shown and provides links to the relevant documentation. The second shows a simple example for each technology using the appropriate tools like Postman, Python, SQL or the CLI. This example can then easily be adapted and extended to the other options explained in the What? section by following the documentation. Often the same action can be achieved via multiple options; e.g., it is possible to create and list connections via the CLI or via the REST API.</P><P>Of course, there is also the option to integrate SAP Datasphere directly with third-party applications via, e.g., OData or ODBC/JDBC connections, or to push data to target systems like AWS S3 or GCP Cloud Storage using replication flows. However, these options are not part of this blog post, because application-specific scenarios must be considered.</P><P>&nbsp;</P><P>&nbsp;</P><H1 id="toc-hId-1186324478">REST API</H1><P>REST APIs are based on a standard architecture that uses HTTP methods like GET, POST, PUT, and DELETE.
They allow you to perform actions in Datasphere regarding user &amp; role management, connection &amp; certificate management and the usage of the Data Sharing Cockpit. You can call them via an API platform such as Postman or via programming languages like Python or TypeScript.</P><H2 id="toc-hId-1118893692"><STRONG>What?</STRONG></H2><UL><LI>Use the Certificates API to create, read and delete TLS server certificates in Datasphere - <A href="https://api.sap.com/api/CertificateManagement/overview" target="_blank" rel="noopener noreferrer">https://api.sap.com/api/CertificateManagement/overview</A></LI><LI>Use the Connections API to list, validate, delete, update or create connections in a space - <A href="https://api.sap.com/api/ConnectionManagement/overview" target="_blank" rel="noopener noreferrer">https://api.sap.com/api/ConnectionManagement/overview</A></LI><LI>Use the Data Sharing Cockpit API to maintain your data provider profile; create and edit data products; manage licenses; create and publish releases and manage contexts - <A href="https://api.sap.com/api/DataSharingCockpit/overview" target="_blank" rel="noopener noreferrer">https://api.sap.com/api/DataSharingCockpit/overview</A></LI><LI>Use the SCIM 2.0 API to create, read, modify and delete users; add roles and get information on the identity provider - <A href="https://help.sap.com/docs/SAP_DATASPHERE/9f804b8efa8043539289f42f372c4862/1ca8c4a9467f43df9ae6d4ed3734f05a.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/9f804b8efa8043539289f42f372c4862/1ca8c4a9467f43df9ae6d4ed3734f05a.html</A></LI></UL><H2 id="toc-hId-922380187"><STRONG>How?</STRONG></H2><P>Using the REST API generally involves two steps:</P><OL><LI>Creating an OAuth 2.0 Client to Authenticate Against SAP Datasphere</LI><LI>Using Postman or another technology to obtain an access token and then calling the REST API with the access token</LI></OL><H3 id="toc-hId-854949401">1 Creating an OAuth 2.0 Client to Authenticate Against SAP Datasphere</H3><P>To create an OAuth 2.0 client, users need the DW Administrator role. Under System -&gt; Administration -&gt; App Integration a new OAuth client can be added.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_0-1744874906487.png" style="width: 593px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251773iA167760436FF3ABA/image-dimensions/593x304?v=v2" width="593" height="304" role="button" title="henri_hosang_0-1744874906487.png" alt="henri_hosang_0-1744874906487.png" /></span></P><P>In the OAuth client configuration, enter a name and choose the following settings:</P><OL><LI>Purpose: API Access</LI><LI>Access: Select the appropriate access (e.g.
User Provisioning if you want to use the SCIM 2.0 API)</LI><LI>Security: Client Credentials</LI><LI>Token Lifetime</LI></OL><P>Click Add and copy the client ID and client secret from the next screen (the client secret can only be copied now; you need to create a new client if you lose it!).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_1-1744874906492.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251772iC1BB70D5DED411B8/image-size/medium?v=v2&amp;px=400" role="button" title="henri_hosang_1-1744874906492.png" alt="henri_hosang_1-1744874906492.png" /></span></P><P>Additionally, please copy the Authorization URL and Token URL from the App Integration overview, as they are needed later to authenticate.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_2-1744874906497.png" style="width: 737px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251774iAC8248F25411BEFD/image-dimensions/737x232?v=v2" width="737" height="232" role="button" title="henri_hosang_2-1744874906497.png" alt="henri_hosang_2-1744874906497.png" /></span></P><H3 id="toc-hId-658435896">2 Using Postman or another technology to obtain an access token and calling the REST API</H3><P>Now we can use the Authorization URL, Token URL, client ID and client secret to first obtain an access token and then call the REST API that we need.</P><P>The next steps will be shown in (1) <STRONG>Postman</STRONG> and (2) <STRONG>Python</STRONG> as two alternative approaches.</P><P><STRONG>(1) Postman</STRONG>: Create a collection and a new GET request within that collection. Provide the copied token URL and add grant_type=client_credentials in the Parameters.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_3-1744874906500.png" style="width: 781px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251776i61D60BEBFFD8C848/image-dimensions/781x186?v=v2" width="781" height="186" role="button" title="henri_hosang_3-1744874906500.png" alt="henri_hosang_3-1744874906500.png" /></span></P><P>Then switch to the Authorization tab, use the authorization type Basic Auth, and enter the client ID as username and the client secret as password. You can now send the request and get the access_token with its lifetime as the result. Copy the token for the next step.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_4-1744874906512.png" style="width: 786px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251777iDE72C8AE7699B052/image-dimensions/786x408?v=v2" width="786" height="408" role="button" title="henri_hosang_4-1744874906512.png" alt="henri_hosang_4-1744874906512.png" /></span></P>
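<P>If you prefer the command line over Postman, the same token request can be made with curl; this is just a sketch, with the placeholders in angle brackets standing for the values copied from your OAuth client:</P><pre class="lia-code-sample language-bash"><code>$ curl -u "&lt;client ID&gt;:&lt;client secret&gt;" "&lt;token URL&gt;?grant_type=client_credentials"</code></pre>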
<P>Now we can call the REST API endpoint as documented in the resources linked in the <STRONG>What?</STRONG> section (mainly <A href="https://api.sap.com/package/sapdatasphere/rest" target="_blank" rel="noopener noreferrer">https://api.sap.com/package/sapdatasphere/rest</A>). For this simple example we will just get a list of connections from one of the Datasphere spaces by calling this endpoint:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_5-1744874906513.png" style="width: 767px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251775iF6CEF9816AAA8682/image-dimensions/767x46?v=v2" width="767" height="46" role="button" title="henri_hosang_5-1744874906513.png" alt="henri_hosang_5-1744874906513.png" /></span></P><P>Follow these steps:</P><OL><LI>Create a new GET request in Postman</LI><LI>Enter the URL <SPAN><A target="_blank" rel="noopener">https://&lt;host&gt;/api/v1/datasphere/spaces/&lt;spaceId&gt;/connections</A></SPAN><UL><LI>Host refers to the URL of your Datasphere host; it can be copied from the browser. Copy everything up to the first “/”.</LI><LI><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_6-1744874906515.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251778iF0DDCD5226D21DBA/image-size/medium?v=v2&amp;px=400" role="button" title="henri_hosang_6-1744874906515.png" alt="henri_hosang_6-1744874906515.png" /></span></LI><LI>The space ID is the ID of the space from which you want to get the connections. You can see the available spaces if you navigate to Space Management in Datasphere.</LI></UL></LI><LI>Add x-sap-sac-custom-auth=true in the Headers section in Postman</LI><LI>Specify the Auth Type as Bearer Token, using the token from the previous step.</LI></OL><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_7-1744874906519.png" style="width: 759px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251779iE43A99626A53A4F9/image-dimensions/759x184?v=v2" width="759" height="184" role="button" title="henri_hosang_7-1744874906519.png" alt="henri_hosang_7-1744874906519.png" /></span></P><P>Once you send the request, you get a list of all connections from that space returned. You can now adapt this example to use any other REST API mentioned above by simply changing the URL and HTTP method to the ones specified in the documentation and adding the relevant parameters.</P><P>Note: If you want to use the SCIM 2.0 API to create, modify or delete users, you also need a so-called CSRF token, obtained by calling <SPAN><A target="_blank" rel="noopener">https://&lt;host&gt;/api/v1/csrf</A></SPAN>&nbsp;with the obtained access token and x-sap-sac-custom-auth=true and x-csrf-token=fetch as headers.</P><P>&nbsp;</P><P><STRONG>(2) Python</STRONG>: The steps in Python are like the ones in Postman, using the same credentials and parameters. First an access_token needs to be obtained, and then the connections API is called.</P><P>To simplify the scenario, the Authorization URL, Token URL, client ID and client secret are stored in global variables. However, in a productive scenario a secret store or environment variables should be used instead.</P><P>For this demonstration Python 3.9 is used. The only import that is needed is the requests library to make the API calls. Additionally, the credentials are stored as variables as mentioned. That is all for the setup.</P><pre class="lia-code-sample language-python"><code>import requests

token_url = xxx
username = xxx
password = xxx</code></pre><P>To get the list of connections, again two calls are made: (1) to get the access_token and (2) to the actual connections API endpoint.
Each call is a function.</P><P>The get_token function specifies the authentication context using the client ID as username and the client secret as password. Via the requests library the token_url is called with the defined authentication context. If the call was successful (code 200), the access_token is read from the API response.</P><pre class="lia-code-sample language-python"><code>def get_token():
    # Use basic authentication with username and password
    auth = requests.auth.HTTPBasicAuth(username, password)
    # API call
    response = requests.get(token_url, auth=auth)
    # Check result and return
    if response.status_code == 200:
        return response.json()['access_token']
    else:
        print("HTTP Error occurred", response.status_code)</code></pre><P>In the second function, the first function is called to get the access_token. It is then passed on with the header x-sap-sac-custom-auth=true to the URL <SPAN><A target="_blank" rel="noopener">https://&lt;host&gt;/api/v1/datasphere/spaces/&lt;spaceId&gt;/connections</A></SPAN>&nbsp;as shown in the Postman section.</P><pre class="lia-code-sample language-python"><code>def get_connections():
    connection_url = xxx
    # Get token
    bearer_token = get_token()
    # Define headers and use the bearer token as authentication method
    headers = {
        "Authorization": f"Bearer {bearer_token}",
        "x-sap-sac-custom-auth": "true",
        "Content-Type": "application/json"
    }
    # API call
    response = requests.get(connection_url, headers=headers)
    # Check result and return
    if response.status_code == 200:
        return response.json()
    else:
        print("HTTP Error occurred", response.reason)</code></pre><P>As with the Postman option, you can now adapt this example to use any other REST API mentioned above by simply changing the URL and HTTP method to the ones specified in the documentation and adding the relevant parameters.</P><P>To use the SCIM API, first a CSRF token needs to be generated and then sent, in combination with the bearer token, as a header to the endpoint for deleting, modifying or creating users in the system. Instead of calling the API endpoints directly from requests, a session needs to be created via <EM>requests.Session()</EM> so that the CSRF token is obtained and the create-user endpoint is called from one session, as sketched below.&nbsp;</P>
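<P>For illustration, here is a minimal sketch of that session-based flow. The CSRF URL follows the note in the Postman section; the SCIM users endpoint, the response header name and the payload are assumptions to be checked against the SCIM 2.0 documentation linked above.</P><pre class="lia-code-sample language-python"><code>csrf_url = xxx        # https://&lt;host&gt;/api/v1/csrf
scim_users_url = xxx  # the SCIM 2.0 users endpoint of your tenant (see documentation)

def create_user(user_payload):
    # One session, so the CSRF fetch and the write call share their cookies
    session = requests.Session()
    headers = {
        "Authorization": f"Bearer {get_token()}",
        "x-sap-sac-custom-auth": "true",
        "x-csrf-token": "fetch"
    }
    # (1) Fetch the CSRF token (response header name assumed to mirror the request header)
    csrf_response = session.get(csrf_url, headers=headers)
    headers["x-csrf-token"] = csrf_response.headers["x-csrf-token"]
    # (2) Call the SCIM endpoint with the bearer token and CSRF token from the same session
    return session.post(scim_users_url, headers=headers, json=user_payload)</code></pre>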
<P>&nbsp;</P><P>&nbsp;</P><H1 id="toc-hId-203756953">Command Line Interface (CLI)</H1><P>The Command Line Interface is a business-user-friendly toolset for achieving a variety of tasks in Datasphere without needing to write any code or use API platforms. Datasphere end users can run simple one-line statements in the command line after authenticating, to perform admin tasks as well as to work with modeling objects in Datasphere.</P><H2 id="toc-hId-136326167"><STRONG>What?</STRONG></H2><UL><LI>Work with Global &amp; Scoped Roles; List, Add, Remove Users from Global and Scoped Roles; Manage Users&nbsp;<A href="https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/3a3d0ef3d4954797acac12afbcf9ab5d.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/3a3d0ef3d4954797acac12afbcf9ab5d.html</A></LI><LI>List, Read, Create, Update and Delete Spaces; Manage Space Users and Database Users; Set Space Priorities and Statement Limits - <A href="https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/5eac5b71e2d34c32b63f3d8d47a0b1d0.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/5eac5b71e2d34c32b63f3d8d47a0b1d0.html</A></LI><LI>List, Read, Create, Update and Delete Modeling Objects via the Command Line - <A href="https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/6f5c65f209004751aa48f9682ee2ec45.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/6f5c65f209004751aa48f9682ee2ec45.html</A></LI><LI>Manage the Data Providers, Products, Licenses, Releases and Contexts of the Data Marketplace via the Command Line - <A href="https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/5a815f6c21e9468eb96d0be95b9d2def.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/5a815f6c21e9468eb96d0be95b9d2def.html</A></LI><LI>Manage Tasks and Task Chains via the Command Line - <A href="https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/2b26a31f197444dea314495bc0008eae.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/2b26a31f197444dea314495bc0008eae.html</A></LI><LI>List, Upload and Delete TLS Certificates; List, Read, Create, Validate, Edit and Delete Connections - <A href="https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/8eb811898d1049fbb426339e44a2eb70.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/8eb811898d1049fbb426339e44a2eb70.html</A>&nbsp;</LI></UL><H2 id="toc-hId--60187338"><STRONG>How?</STRONG></H2><P>To use the command line with SAP Datasphere, it is recommended to use an OAuth 2.0 client with interactive usage. The setup is quite similar to the one shown above for the REST API, but some parameters have to be set differently.</P><P>Again, to create an OAuth 2.0 client, users need the DW Administrator role.
Under System -&gt; Administration -&gt; App Integration a new OAuth client can be added.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_8-1744874906534.png" style="width: 749px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251780i5249284A563C7866/image-dimensions/749x384?v=v2" width="749" height="384" role="button" title="henri_hosang_8-1744874906534.png" alt="henri_hosang_8-1744874906534.png" /></span></P><P>In the OAuth client configuration, different settings are used so that the client works with the CLI instead of the REST API:</P><OL><LI>Purpose: Interactive Usage</LI><LI>Authorization Grant: Authorization Code</LI><LI>Redirect URI: This is the URI the user will be redirected to after authorization. For the Command Line Interface, you can simply start a localhost server on your machine using <A href="http://localhost:8080" target="_blank" rel="noopener nofollow noreferrer"><EM>http://localhost:8080</EM></A></LI><LI>Token Lifetime</LI><LI>Refresh Token Lifetime</LI></OL><P>Click Add and copy the client ID and client secret from the next screen (the client secret can only be copied now; you need to create a new client if you lose it!).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_9-1744874906538.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251782i44B6025DDA336E0E/image-size/medium?v=v2&amp;px=400" role="button" title="henri_hosang_9-1744874906538.png" alt="henri_hosang_9-1744874906538.png" /></span></P><P>Additionally, please copy the Authorization URL and Token URL from the App Integration overview, as they are needed later to authenticate.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_10-1744874906544.png" style="width: 569px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251783i8FA0E5D8D1E68C76/image-dimensions/569x179?v=v2" width="569" height="179" role="button" title="henri_hosang_10-1744874906544.png" alt="henri_hosang_10-1744874906544.png" /></span></P><P>&nbsp;</P><H3 id="toc-hId--202849493">Using the Command Line Interface directly</H3><P>To use the CLI for Datasphere, Node.js &gt;= 18 and &lt;= 22 as well as npm &gt;= 8 and &lt;= 10 need to be installed (npm is automatically installed with Node.js). Node.js can be downloaded from <A href="https://nodejs.org/" target="_blank" rel="noopener nofollow noreferrer">https://nodejs.org/</A></P><P>You can test the installation by running the following commands in your command line:</P><pre class="lia-code-sample language-bash"><code>$ node -v
$ npm -v</code></pre><P>Then run the following command to install the Datasphere-related commands:</P><pre class="lia-code-sample language-bash"><code>$ npm install -g @sap/datasphere-cli</code></pre><P>Some packages will be installed, and you can check the successful installation by running</P><pre class="lia-code-sample language-bash"><code>$ datasphere --version</code></pre><P>Here is the summary of the installation commands:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="henri_hosang_11-1744874906546.png" style="width: 470px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251781i94A932B6BEC496A9/image-dimensions/470x167?v=v2" width="470" height="167" role="button" title="henri_hosang_11-1744874906546.png" alt="henri_hosang_11-1744874906546.png" /></span></P><P>The next step is to log in to Datasphere.
As a best practice, make sure to always clean the host, cache and secrets before logging in again. (If this is the first time you are using the CLI, no credentials will be available yet.) Run these commands:</P><pre class="lia-code-sample language-bash"><code>$ datasphere config host clean
$ datasphere config cache clean
$ datasphere config secrets reset</code></pre><P>Then log in to Datasphere using the <EM>datasphere login</EM> command. You will be prompted for the necessary credentials. For this step you need the host URL you see when opening Datasphere in your browser (see above). Additionally, the client ID and client secret copied from the OAuth client are needed.</P><pre class="lia-code-sample language-bash"><code>$ datasphere login
✔ URL of the system to connect to: … &lt;host&gt;
✔ Please enter your client ID: … &lt;client ID&gt;
✔ Please enter your client secret: … &lt;client secret&gt;</code></pre><P>After entering the client secret, a browser window to the redirect URI provided in the OAuth client will open and the login will be handled automatically. You can continue in the CLI, where you are now logged in. Just start running the commands to Datasphere documented in the What? section above. Here is a simple example to get all the spaces in your Datasphere tenant:</P><pre class="lia-code-sample language-bash"><code>$ datasphere spaces list -H &lt;host&gt;</code></pre><P>Here are all commands:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="henri_hosang_12-1744874906548.png" style="width: 505px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251784i0ADF8C3DEEE52F4F/image-dimensions/505x192?v=v2" width="505" height="192" role="button" title="henri_hosang_12-1744874906548.png" alt="henri_hosang_12-1744874906548.png" /></span></P><P>Note: For now you need to provide the host URL of the Datasphere tenant via the -H option in every command. But the host can also be set as a default, so it does not have to be passed in every statement:</P><pre class="lia-code-sample language-bash"><code>$ datasphere config host set &lt;host&gt;</code></pre><P>And now you can run</P><pre class="lia-code-sample language-bash"><code>$ datasphere spaces list</code></pre><P>Once you are done, log out from Datasphere again:</P><pre class="lia-code-sample language-bash"><code>$ datasphere logout</code></pre><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="henri_hosang_13-1744874906549.png" style="width: 563px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251787iCFEDD262AD7F4680/image-dimensions/563x152?v=v2" width="563" height="152" role="button" title="henri_hosang_13-1744874906549.png" alt="henri_hosang_13-1744874906549.png" /></span></P><P>There are many more options when using the CLI to work with Datasphere. E.g., the credentials can be stored in a secrets file, so you don't have to paste them every time you log in. Additionally, a refresh token can be extracted once you are logged in and passed on when running a command, so you do not have to log in at the beginning of every session. Please refer to the documentation for more details.</P><P>&nbsp;</P><H3 id="toc-hId--399362998">Using the Command Line Interface within a scripting language</H3><P>While this introduction uses the CLI directly, it should be mentioned that users can also use scripting languages like Python to automate the CLI usage by calling the commands directly from their code.
In Python, the <EM>subprocess</EM> library is used to call CLI commands from code. This is a simple example to log in and list all available spaces in Datasphere. The credentials are stored in a JSON file:</P><pre class="lia-code-sample language-python"><code>import subprocess

subprocess.run("datasphere config host clean", shell=True)
subprocess.run("datasphere config cache clean", shell=True)
subprocess.run("datasphere config secrets reset", shell=True)
subprocess.run("datasphere login --host &lt;host&gt; --options-file ./dsp_cli_secrets.json", shell=True)
subprocess.run("datasphere spaces list -H &lt;host&gt;", shell=True)</code></pre><P>Using this option is handy if you want to automate running CLI commands and chain multiple commands together. An example would be to first get all spaces, then use a for loop to get all connections per space, and finally use another command to validate all the connections; a sketch of this pattern follows below.</P><P>Here is a great blog post that shows how you can combine the power of the CLI and Python to generate views in Datasphere: <A href="https://community.sap.com/t5/technology-blogs-by-sap/sap-datasphere-view-generation-with-python-and-the-command-line-interface/ba-p/13558181" target="_blank">https://community.sap.com/t5/technology-blogs-by-sap/sap-datasphere-view-generation-with-python-and-the-command-line-interface/ba-p/13558181</A></P>
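<P>As a rough illustration of such chaining, the sketch below captures the output of one command and feeds it into the next. It assumes you are already logged in and that the spaces list is printed as JSON keyed by space ID; the exact flags and output shape should be checked against your CLI version.</P><pre class="lia-code-sample language-python"><code>import json
import subprocess

# Capture the list of spaces (assumed to be printed as JSON keyed by space ID)
result = subprocess.run("datasphere spaces list -H &lt;host&gt;",
                        shell=True, capture_output=True, text=True)
spaces = json.loads(result.stdout)

# Chain a second command per space, e.g. listing its connections
for space_id in spaces:
    subprocess.run(f"datasphere connections list --space {space_id} -H &lt;host&gt;",
                   shell=True)</code></pre>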
<P>&nbsp;</P><P>&nbsp;</P><H1 id="toc-hId--9070489">OData API</H1><P>So far, the REST APIs and the CLI have been shown to perform certain actions in Datasphere. But what if you want to consume and report on data within the Datasphere tenant? In that case, the next two options can be used: the OData API and the Open SQL schema.</P><H2 id="toc-hId--498987001"><STRONG>What?</STRONG></H2><UL><LI>Catalog: List Spaces and Assets exposed for consumption<UL><LI><A href="https://help.sap.com/docs/SAP_DATASPHERE/43509d67b8b84e66a30851e832f66911/7a453609c8694b029493e7d87e0de60a.html#loio7a453609c8694b029493e7d87e0de60a__section_catalog_service" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/43509d67b8b84e66a30851e832f66911/7a453609c8694b029493e7d87e0de60a.html#loio7a453609c8694b029493e7d87e0de60a__section_catalog_service</A></LI><LI><A href="https://api.sap.com/api/DatasphereCatalog/resource/SAP_Datasphere_Consumption_Catalog" target="_blank" rel="noopener noreferrer">https://api.sap.com/api/DatasphereCatalog/resource/SAP_Datasphere_Consumption_Catalog</A></LI></UL></LI><LI>Consumption: Retrieve Analytic Models, retrieve views exposed for consumption, retrieve metadata of Assets<UL><LI><A href="https://help.sap.com/docs/SAP_DATASPHERE/43509d67b8b84e66a30851e832f66911/7a453609c8694b029493e7d87e0de60a.html#loio7a453609c8694b029493e7d87e0de60a__section_analytical_data" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/43509d67b8b84e66a30851e832f66911/7a453609c8694b029493e7d87e0de60a.html#loio7a453609c8694b029493e7d87e0de60a__section_analytical_data</A></LI><LI><A href="https://api.sap.com/api/DatasphereConsumption/overview" target="_blank" rel="noopener noreferrer">https://api.sap.com/api/DatasphereConsumption/overview</A></LI></UL></LI></UL><H2 id="toc-hId--695500506"><STRONG>How?</STRONG></H2><P>As mentioned, the OData API is mainly used to consume objects from Datasphere. It can be accessed directly from the browser, via an API platform like Postman or a scripting language like Python, or it can be used by 3rd-party applications like SAP Analytics Cloud and Power BI to consume data from Datasphere in a reporting scenario.</P><P><STRONG>&nbsp;</STRONG></P><H3 id="toc-hId--1185417018">Using the Browser to consume OData Requests</H3><P>If you are privileged to see objects in Datasphere and are already logged in, you can access the OData API directly from the browser: the login context is simply reused for the OData API without any additional setup. If there is, e.g., an analytic model that you built and want to consume via OData, you can do so by just opening a new browser window and pasting the OData request URL. You can craft the OData request URL yourself by referring to the documentation, or you can use the Generate OData Request function available in the Datasphere UI for all objects exposed for consumption.</P><P>To generate the OData request, open the asset from the data builder. If it is exposed for consumption (analytic models are exposed by default; views have a switch in the details pane to expose them), an icon appears in the header section under “Tools” to generate the OData request.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_14-1744874906550.png" style="width: 488px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251786i2DD050F41A39EB8A/image-dimensions/488x76?v=v2" width="488" height="76" role="button" title="henri_hosang_14-1744874906550.png" alt="henri_hosang_14-1744874906550.png" /></span></P><P>A pop-up opens to customize the OData URL. At the top, there is a selection of whether the actual data of the object should be retrieved or its metadata. Additionally, variables and query parameters can be defined, as shown below. If the Generate OData Request dialog is opened to retrieve data and the default settings are used, the OData request URLs look like this:</P><UL><LI>Exposed views: &lt;host&gt;/api/v1/dwc/consumption/<STRONG>relational</STRONG>/&lt;space Id&gt;/&lt;object technical name&gt;/&lt;object technical name&gt;</LI><LI>Exposed analytic models: &lt;host&gt;/api/v1/dwc/consumption/<STRONG>analytical</STRONG>/&lt;space Id&gt;/&lt;object technical name&gt;/&lt;object technical name&gt;</LI></UL><P>As one can see, the relational URL is used for views while the analytical one is used for analytic models. This is because analytic models allow for more features, like restricted measures and exception aggregations, that are processed as a multidimensional statement and depend on the aggregation state defined via the variables and parameters. The relational URL for views simply returns the result in a row-by-row fashion. Here is a blog post exploring the differences in more detail: <A href="https://community.sap.com/t5/technology-blogs-by-sap/sap-datasphere-analytical-and-relational-odata-apis/ba-p/13573797" target="_blank">https://community.sap.com/t5/technology-blogs-by-sap/sap-datasphere-analytical-and-relational-odata-apis/ba-p/13573797</A></P><P>Changes you make to the variables and query parameters are reflected in the OData request URL. Variables are defined during the object's modeling process. If a default value is set for a variable, it is used by default. If no default value is set, you must set a value for the variable to call the OData request.
On the other hand, query parameters are not defined in the modeling process but are used only in that specific OData request. Query parameters are standard URL parameters used to filter, sort and limit the result set. Here is an overview of the usable query parameters:</P><UL><LI>$select – return only the specified columns</LI><LI>$filter – restrict the result according to the provided criteria</LI><LI>$orderby – sort the result by the specified column</LI><LI>$count – return the count of the number of records</LI><LI>$top – limit the number of returned records to &lt;n&gt;</LI><LI>$skip – exclude the first &lt;n&gt; items</LI><LI>sap-language – return the data in the specified language (if available)</LI></UL><P>In the following example, an analytic model to monitor task chain runs is shown. It has one variable, INCLUDE_FAILURES_ONLY, with the default value YES, and query parameters are set to show only task chain steps with replication flows, ordered by the end date of the task chain run and limited to the last 100 results.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_15-1744874906554.png" style="width: 562px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251788i1838F24C782DFE88/image-dimensions/562x538?v=v2" width="562" height="538" role="button" title="henri_hosang_15-1744874906554.png" alt="henri_hosang_15-1744874906554.png" /></span></P><P>To see the result of the OData request you can click Preview or copy the URL to a new browser window. If everything is configured correctly, a value array of objects is shown, where each object is one result row. As mentioned, this works without any additional setup since the login context from your browser is reused.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_16-1744874906562.png" style="width: 754px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251789iD60DCF01B58DA842/image-dimensions/754x328?v=v2" width="754" height="328" role="button" title="henri_hosang_16-1744874906562.png" alt="henri_hosang_16-1744874906562.png" /></span></P><P>Note: There is a limitation that a user can send a maximum of 300 OData requests per minute. Pagination is used by default with 50,000 records per page. Client-side pagination can be implemented via the $skip and $top parameters.</P>
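<P>To make the query parameters tangible, here is an illustrative request URL for a scenario like the one above. The column names are hypothetical and the variable portion is omitted, since the Generate OData Request dialog produces the exact syntax for you:</P><pre class="lia-code-sample language-bash"><code>&lt;host&gt;/api/v1/dwc/consumption/analytical/&lt;space Id&gt;/&lt;model name&gt;/&lt;model name&gt;?$filter=STEP_TYPE eq 'Replication Flow'&amp;$orderby=END_DATE desc&amp;$top=100</code></pre>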
You will see that the query parameters are automatically shown in the Parameters section of Postman.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_17-1744874906570.png" style="width: 743px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251790i8BF42724E27B3125/image-dimensions/743x221?v=v2" width="743" height="221" role="button" title="henri_hosang_17-1744874906570.png" alt="henri_hosang_17-1744874906570.png" /></span></P><P>The main step to use OData from Postman is to set up the authorization correctly. Use Auth Type = OAuth 2.0 and Header Prefix = Bearer (default).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_18-1744874906580.png" style="width: 706px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251793i869B37D4BC8D349D/image-dimensions/706x293?v=v2" width="706" height="293" role="button" title="henri_hosang_18-1744874906580.png" alt="henri_hosang_18-1744874906580.png" /></span></P><P>Then configure a new token in the section below.</P><UL><LI>Grant type = Authorization Code</LI><LI>Callback URL = &lt;Your Callback URL set in the OAuth client (<A href="http://localhost:8080" target="_blank" rel="noopener nofollow noreferrer">http://localhost:8080</A>)&gt;</LI><LI>Auth URL = &lt;Authorization URL copied from System -&gt; Administration -&gt; App Integration&gt;</LI><LI>Access Token URL = &lt;Token URL copied from System -&gt; Administration -&gt; App Integration&gt;</LI><LI>Client ID = &lt;Client ID copied from the OAuth client&gt;</LI><LI>Client Secret = &lt;Client secret copied from the OAuth client&gt;</LI><LI>Client Authentication = Send as Basic Auth Header.</LI></UL><P>Then you have to click “Get New Access Token”.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_19-1744874906594.png" style="width: 685px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251792i9A1DF676FF44E47B/image-dimensions/685x428?v=v2" width="685" height="428" role="button" title="henri_hosang_19-1744874906594.png" alt="henri_hosang_19-1744874906594.png" /></span></P><P>Postman automatically handles the redirect and opens an embedded browser where you need to log in with your business user, using your normal Datasphere credentials. If the login is successful, an access token is generated, and you can click Use Token.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_20-1744874906601.png" style="width: 497px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251791iA88BD31C62D3BF13/image-dimensions/497x266?v=v2" width="497" height="266" role="button" title="henri_hosang_20-1744874906601.png" alt="henri_hosang_20-1744874906601.png" /></span></P><P>The token has the lifetime defined in the OAuth client. After it expires, you just click Get New Access Token again and use the new token. With the token in place, you can now send the API request to consume an analytic model or exposed view.
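</P><P>If you prefer scripting over Postman, the same request can also be sent from Python once you have a valid access token, for example one obtained via Postman’s OAuth flow as described above. This is a minimal sketch using the requests library; the host, space, and asset names are placeholders:</P><pre class="lia-code-sample language-python"><code>import requests

# Placeholder values - replace with your tenant host, space, and asset
URL = "https://&lt;host&gt;/api/v1/dwc/consumption/analytical/&lt;space Id&gt;/&lt;asset&gt;/&lt;asset&gt;"

# Reuse the access token obtained via the OAuth 2.0 authorization code flow
headers = {"Authorization": "Bearer &lt;access token&gt;"}

# Optional query parameters, analogous to the ones shown earlier
params = {"$top": 100}

response = requests.get(URL, headers=headers, params=params)
response.raise_for_status()

# The response is a JSON document with one entry per result row in "value"
for row in response.json().get("value", []):
    print(row)</code></pre><P>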
Here we consume the same analytic model as we did when we generated the OData request and opened the URL in the browser.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_21-1744874906619.png" style="width: 639px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251797i908B7AE4237050D8/image-dimensions/639x372?v=v2" width="639" height="372" role="button" title="henri_hosang_21-1744874906619.png" alt="henri_hosang_21-1744874906619.png" /></span></P><P>That’s it for consuming data via OData in the browser and via Postman. Of course, you can also use Python or another language to replicate Postman’s behavior by calling the authorization URL against Datasphere and handling the callback via the redirect URI. Since this approach involves a bit more coding than the other consumption options, I will publish the scenario in a separate blog post. Please see my upcoming blog post [TBD].</P><P>&nbsp;</P><P>&nbsp;</P><H1 id="toc-hId--991638014">HANA Database Explorer &amp; Open SQL Schema</H1><P>The last option to consume data from Datasphere via external tools is using SQL statements to access the HANA Cloud database underneath Datasphere directly, instead of querying the objects in the modeling UI of Datasphere. Because SQL provides two-dimensional results in a row-by-row fashion, only views and tables can be consumed. Analytic models are multidimensional statements, so the OData API should be used to consume them instead. Third party tools like Tableau and Power BI use a JDBC/ODBC connection to the HANA Cloud to report on data exposed in views. This section shows how you can use this connection to consume and create objects in the HANA Cloud via the HANA Database Explorer and via Python.</P><H2 id="toc-hId--1481554526"><STRONG>What?</STRONG></H2><UL><LI>Access the space schema and read data from the space using SQL SELECT statements<UL><LI><A href="https://help.sap.com/docs/SAP_DATASPHERE/be5967d099974c69b77f4549425ca4c0/3de55a78a4614deda589633baea28645.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/be5967d099974c69b77f4549425ca4c0/3de55a78a4614deda589633baea28645.html</A></LI><LI><A href="https://userapps.support.sap.com/sap/support/knowledge/en/3428316" target="_blank" rel="noopener noreferrer">https://userapps.support.sap.com/sap/support/knowledge/en/3428316</A></LI></UL></LI><LI>Create tables and views to write data to a space<UL><LI><A href="https://help.sap.com/docs/SAP_DATASPHERE/be5967d099974c69b77f4549425ca4c0/3de55a78a4614deda589633baea28645.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/be5967d099974c69b77f4549425ca4c0/3de55a78a4614deda589633baea28645.html</A></LI><LI><A href="https://userapps.support.sap.com/sap/support/knowledge/en/3428316" target="_blank" rel="noopener noreferrer">https://userapps.support.sap.com/sap/support/knowledge/en/3428316</A></LI></UL></LI><LI>Write SQL script procedures and add them to a task chain - <A href="https://help.sap.com/docs/SAP_DATASPHERE/c8a54ee704e94e15926551293243fd1d/59b9c773035a48c5beb54ce9bb29f1d8.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/c8a54ee704e94e15926551293243fd1d/59b9c773035a48c5beb54ce9bb29f1d8.html</A></LI><LI>Run machine learning algorithms from HANA APL &amp; PAL via Open SQL<UL><LI>
href="https://help.sap.com/docs/SAP_DATASPHERE/be5967d099974c69b77f4549425ca4c0/b78ad208f8c4494489aabf97284679b6.html" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/be5967d099974c69b77f4549425ca4c0/b78ad208f8c4494489aabf97284679b6.html</A></LI><LI><A href="https://help.sap.com/docs/SAP_DATASPHERE/9f804b8efa8043539289f42f372c4862/287194276a7d4d778ec98fdde5f61335.html?q=PAL+APL" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/9f804b8efa8043539289f42f372c4862/287194276a7d4d778ec98fdde5f61335.html?q=PAL+APL</A></LI><LI><A href="https://community.sap.com/t5/artificial-intelligence-and-machine-learning-blogs/hands-on-tutorial-machine-learning-with-sap-datasphere/ba-p/13796417" target="_blank">https://community.sap.com/t5/artificial-intelligence-and-machine-learning-blogs/hands-on-tutorial-machine-learning-with-sap-datasphere/ba-p/13796417</A></LI></UL></LI></UL><H2 id="toc-hId--1678068031"><STRONG>How?</STRONG></H2><P>To consume data via ODBC/JDBC you need either a database user or a database analysis user. The database user is limited to read from and/or write to an Open SQL schema with restricted access to the space schema whereas the Database analysis users have read only access to all space schemas (if configured).</P><P>To simplify the scenario, we will use a database analysis user for this introduction, because this user has default access to all the objects in Datasphere. To create a database analysis user, you need to be an administrator. Go to System -&gt; Configuration -&gt; Database Access -&gt; Database Analysis User. Then click create to create a new user.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_22-1744874906627.png" style="width: 691px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251796i2E238D0353BFEA2F/image-dimensions/691x533?v=v2" width="691" height="533" role="button" title="henri_hosang_22-1744874906627.png" alt="henri_hosang_22-1744874906627.png" /></span></P><P>The Database Analysis User starts with “DWCDBUSER#” and you have to provide a custom suffix for the user. Additionally enable Space Schema Access to also consume data that is available in the spaces of your datasphere tenant. Click Create</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_23-1744874906629.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251795i674753DA0E6C5D3E/image-size/medium?v=v2&amp;px=400" role="button" title="henri_hosang_23-1744874906629.png" alt="henri_hosang_23-1744874906629.png" /></span></P><P>After the user is created the database Host Name, Port, Password and Username are displayed. Copy all four credentials, as they are needed to log into the HANA cloud. In difference to OAuth Client used for the CLI and OData API you can simply request a new password for this user if you lose the old one.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_24-1744874906631.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251798i437E808F6798C8E6/image-size/medium?v=v2&amp;px=400" role="button" title="henri_hosang_24-1744874906631.png" alt="henri_hosang_24-1744874906631.png" /></span></P><P>That is the setup for now if you want to access the HANA Cloud via the HANA Cockpit or HANA Database Explorer. To consume data in Datasphere, the HANA Database Explorer can be used. 
Open your HANA Database Explorer (e.g. via Space Management -&gt; Edit -&gt; Database Access -&gt; Select a Database User -&gt; Open Database Explorer). To use the newly created database analysis user, the HANA database instance has to be added again with that user. Click the “+” sign in the upper left corner, select SAP HANA Database as the instance type, and paste the credentials from the user creation step.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_25-1744874906637.png" style="width: 781px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251799iC5DDD10759A9206F/image-dimensions/781x603?v=v2" width="781" height="603" role="button" title="henri_hosang_25-1744874906637.png" alt="henri_hosang_25-1744874906637.png" /></span></P><P>You see the added connection in the upper left section of the HANA Database Explorer. The explorer helps you by automatically creating SELECT statements. To find an element to consume, open the instance and the Catalog option. Choose Tables to consume tables or Views to consume views. By default, all tables/views from all schemas are shown. You can search for a specific table/view or filter by schema. The schema option shows all schemas available on the HANA Cloud, but you can simply search for the name of one of your spaces that holds the object you want to consume.</P><P>In this example, I select the table SalesOrders_TestUpload by selecting Tables from the catalog, filtering for my space schema COE_EMEA_DW_DM, and searching for “test”. Right-clicking the table gives you multiple options. To simply consume the data via Open SQL, click “Generate SELECT Statement” and execute the statement. The data will be shown.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_26-1744874906648.png" style="width: 725px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251800iC069FFC73A363BED/image-dimensions/725x560?v=v2" width="725" height="560" role="button" title="henri_hosang_26-1744874906648.png" alt="henri_hosang_26-1744874906648.png" /></span></P><P>Instead of using the HANA Database Explorer, you can also consume data in the HANA Cloud from any other database client (e.g. DBeaver) or from a scripting language like Python by defining a connection to that HANA database. To consume assets from Python, you first need to allowlist your environment’s external IP address. Go to System -&gt; Configuration -&gt; IP Allowlist -&gt; Trusted IPs and add your external IP address. You can get your external IP address by running the command <EM>curl ifconfig.me</EM> on Linux/macOS or by opening a website like <A href="https://ifconfig.me/" target="_blank" rel="noopener nofollow noreferrer">https://ifconfig.me/</A>. If you are using a VPN client, investigate the settings of your client and check if the external IP address is provided there. For testing, the IP range 0.0.0.0/0 can be used (this allows all IPs and should not be kept in productive scenarios).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="henri_hosang_27-1744874906661.png" style="width: 728px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251801i557AF390F7D20854/image-dimensions/728x562?v=v2" width="728" height="562" role="button" title="henri_hosang_27-1744874906661.png" alt="henri_hosang_27-1744874906661.png" /></span></P><P>Then install the hdbcli module for Python. It provides the API needed to send SQL queries directly to the HANA Cloud.
You can find the documentation here: <A href="https://pypi.org/project/hdbcli/" target="_blank" rel="noopener nofollow noreferrer">https://pypi.org/project/hdbcli/</A> and install it from the command line via:</P><pre class="lia-code-sample language-bash"><code>$ pip install hdbcli</code></pre><P>In a new Python file, import the dbapi from hdbcli and define the connection with the four credentials from the database analysis user, just as in the HANA Database Explorer. Write a SQL command, or copy the one generated by the HANA Database Explorer, and execute it. Finally, run <EM>cursor.fetchall()</EM> to get a row-by-row result. Here is the full code:</P><pre class="lia-code-sample language-python"><code>from hdbcli import dbapi

# Connect with the credentials displayed when the database analysis user was created
conn = dbapi.connect(
    address="&lt;HANA host&gt;",
    port=443,
    user="DWCDBUSER#&lt;suffix&gt;",
    password="&lt;DB Analysis User Password&gt;",
)

# SQL statement, e.g. copied from the HANA Database Explorer
sql = 'SELECT * FROM "COE_EMEA_DW_DM"."SalesOrders_TestUpload"'

cursor = conn.cursor()
cursor.execute(sql)
rows = cursor.fetchall()  # row-by-row result</code></pre><P>From here, you can create more complex SQL statements or work with the result set, e.g. by passing the <EM>fetchall()</EM> result into a pandas DataFrame. As shown in the What? section, there are many more possibilities when working with the HANA Open SQL schema, like running SQL script procedures in task chains or executing HANA machine learning algorithms, which are beyond this introductory blog post. Take a look at the articles linked in the What? section to get started with additional scenarios.</P><P>&nbsp;</P><P>&nbsp;</P><H1 id="toc-hId--1412994838">Conclusion</H1><P>That’s it for this introduction to external access to Datasphere. By now you should have a good understanding of all the major ways to interact with Datasphere from tools like the CLI, Postman, and Python. As mentioned in the beginning, Replication Flows, OData, and the ODBC/JDBC connection can also be used directly by 3rd party applications to retrieve data from Datasphere. This was not part of this blog post, as the configuration differs between all the possible targets. There is extensive documentation and there are many community blogs available explaining application-specific setups.</P><P>If you have questions or noticed a scenario I didn’t cover, feel free to leave a comment below the blog post.</P><P>Cheers, Henri</P> 2025-04-22T10:55:27.647000+02:00 https://community.sap.com/t5/technology-blog-posts-by-members/troubleshooting-sap-os-collector-saposcol-issues-and-resolving-st06-data/ba-p/14040389 Troubleshooting SAP OS Collector (saposcol) Issues and Resolving ST06 Data Access Problems 2025-04-22T18:52:18.629000+02:00 Vimal_Soosairaj https://community.sap.com/t5/user/viewprofilepage/user-id/844755 <P class=""><FONT size="4"><STRONG>Introduction:</STRONG></FONT></P><P class="">&nbsp; &nbsp; &nbsp; &nbsp;In this article, I will share my experience troubleshooting issues related to the SAP OS Collector (saposcol) not running, which led to challenges in accessing ST06 data and analyzing system performance. We will explore the common symptoms, such as shared memory errors and missing dump status data, discuss the underlying causes, and outline the steps taken to resolve the issue. This guide aims to help SAP administrators quickly identify and address similar issues in their environments.</P><H3 id="toc-hId-1834506090"><FONT size="4">Key Notes:</FONT></H3><OL><LI><P class=""><STRONG>Always Use the Latest saposcol</STRONG><BR />SAP recommends using the latest version of saposcol.
It is part of the SAPHOSTAGENT.SAR package.</P></LI><LI><P class=""><STRONG>Root Privileges Required for saposcol</STRONG><BR />To start saposcol, ensure you have root privileges rather than the SIDadm user, because saposcol collects all OS statistics.</P></LI><LI><P class=""><STRONG>Location of saposcol Executables</STRONG><BR />To start or check the status of saposcol, navigate to the following directory:&nbsp;/usr/sap/hostctrl/exe</P></LI><LI><P class=""><STRONG>Useful commands for saposcol:</STRONG></P><P>./saposcol -s (to check the status)</P><P>./saposcol -l (to start)</P><P>./saposcol -k (to stop)</P><P>./saposcol -d (to access saposcol in dialog mode; once ./saposcol -d is executed, type <STRONG>help</STRONG> to list the available dialog mode commands)</P></LI></OL><H3 id="toc-hId-1637992585"><FONT size="4">Solution:</FONT></H3><P class="">In my case, I used saposcol’s dialog mode and executed the launch command to start it. After starting, I encountered a <STRONG>NipConnect error</STRONG> that kept displaying. Initially, I thought it was a persistent error, so I waited without closing the command prompt. After approximately 5 minutes, saposcol successfully started.</P><P class="">If you're still facing issues when checking the history of collected saposcol data in ST06, follow the solutions provided in the SAP Notes listed below.</P><H3 id="toc-hId-1441479080"><FONT size="4">Recommended SAP Notes:</FONT></H3><P class="">&nbsp;</P><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> <A class="" href="https://me.sap.com/notes/1915341" target="_new" rel="noopener noreferrer">SAP Note 1915341 – ST06: No historical data available</A></P><P class=""><span class="lia-unicode-emoji" title=":backhand_index_pointing_right:">👉</span> <A class="" href="https://me.sap.com/notes/2338673" target="_new" rel="noopener noreferrer">SAP Note 2338673 – SM12: Lock entry remains in system</A></P><P>&nbsp;</P><H3 id="toc-hId-1244965575"><FONT size="4">Additional Steps:</FONT></H3><P class="">In my case, I resolved the issue by restarting the sapstartsrv service. The command for restarting the service is as follows:</P><P class="">sapcontrol -nr &lt;instance nr&gt; -function RestartService</P><P class="">After restarting the service, I was able to see the ST06 history for the previous hours.</P><H3 id="toc-hId-1048452070"><FONT size="4">Additional Useful SAP Notes:</FONT></H3><UL><LI><P class=""><A class="" href="https://me.sap.com/notes/1102124" target="_new" rel="noopener noreferrer">SAP Note 1102124</A></P></LI><LI><P class=""><A class="" href="https://me.sap.com/notes/19227" target="_new" rel="noopener noreferrer">SAP Note 19227</A></P></LI><LI><P class=""><A class="" href="https://me.sap.com/notes/1031096" target="_new" rel="noopener noreferrer">SAP Note 1031096</A></P></LI><LI><P class=""><A class="" href="https://me.sap.com/notes/548699" target="_new" rel="noopener noreferrer">SAP Note 548699</A></P></LI></UL><P class=""><STRONG>Conclusion:</STRONG></P><P class="">By following the steps outlined above and referring to the relevant SAP Notes, I was able to resolve the issue and restore access to ST06 data.
If you encounter similar problems, I hope this guide will provide you with the necessary steps to troubleshoot and fix the issue in your SAP environment.</P><P>&nbsp;</P> 2025-04-22T18:52:18.629000+02:00 https://community.sap.com/t5/technology-blog-posts-by-members/how-to-restore-hana-backup-with-encryption-root-keys-in-target-system/ba-p/14078389 How to restore HANA Backup with ENCRYPTION ROOT KEYS in Target System. 2025-04-24T13:35:08.558000+02:00 shalabhkumar https://community.sap.com/t5/user/viewprofilepage/user-id/2036322 <P>Source system: DB SID HXP. Target system: DB SID HXD.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_0-1744866979706.png" style="width: 652px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251680i2FB34CD098BD77A3/image-dimensions/652x168?v=v2" width="652" height="168" role="button" title="Kumarshalabh_0-1744866979706.png" alt="Kumarshalabh_0-1744866979706.png" /></span></P><P>&nbsp;</P><P><STRONG>Prerequisites:</STRONG> 1. The root key backup password is required. 2. HANA Studio is required.</P><P>We are restoring an HXP backup into the HXD system, but we don’t have the root keys there. So how can the backup be restored in the target (HXD) HANA DB?</P><P>The error message below appears in HANA Studio while restoring the backup into HXD.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_2-1744867448376.png" style="width: 728px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251683iC6041ABC0E89817C/image-dimensions/728x131?v=v2" width="728" height="131" role="button" title="Kumarshalabh_2-1744867448376.png" alt="Kumarshalabh_2-1744867448376.png" /></span></P><P><STRONG>Backup rootkey 3ef7cae23ce897c7748807d98689abc1bfd3d265654140b244b4ae8918c6e096 is required for decryption of backup files, but not found in key store.
Please make sure you have provided the necessary keys before recovery.</STRONG></P><P><STRONG>No root key matching hash=3ef7cae23ce897c7748807d98689abc1bfd3d265654140b244b4ae8919c6e096</STRONG></P><P><STRONG>Reason:</STRONG> during the HANA DB installation, the option below was set to Yes, so the HXP backup was taken with an encryption key.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_3-1744867510651.png" style="width: 836px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251684i384397CCF58487E8/image-dimensions/836x46?v=v2" width="836" height="46" role="button" title="Kumarshalabh_3-1744867510651.png" alt="Kumarshalabh_3-1744867510651.png" /></span></P><P>When restoring the HANA DB backup in the target system, you have to provide the root key backup password; otherwise the backup will not restore and the error above is thrown.</P><P><STRONG>Source System Steps (HXP)</STRONG></P><P><STRONG>Solution</STRONG></P><P><STRONG>Step 1.</STRONG></P><P>Log in to the source system (HXP) at OS level and go to the path below. Take a backup of the SSFS keys.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_4-1744867626128.png" style="width: 726px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251685iA95479604BA972A5/image-dimensions/726x138?v=v2" width="726" height="138" role="button" title="Kumarshalabh_4-1744867626128.png" alt="Kumarshalabh_4-1744867626128.png" /></span></P><P>We have taken the backup with the suffix ssfs_backup.</P><P><STRONG>Step 2.</STRONG></P><P>Log in to the source DB HXP and run the query below from the SYSTEM DB in HANA Studio. It sets the encryption root keys backup password.</P><P>ALTER SYSTEM SET ENCRYPTION ROOT KEYS BACKUP PASSWORD (password)</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_5-1744867678399.png" style="width: 747px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251686iCA22DAC129928143/image-dimensions/747x198?v=v2" width="747" height="198" role="button" title="Kumarshalabh_5-1744867678399.png" alt="Kumarshalabh_5-1744867678399.png" /></span></P><P><STRONG>Step 3.</STRONG></P><P>Now validate the password with the query below in HANA Studio.</P><P>ALTER SYSTEM VALIDATE ENCRYPTION ROOT KEYS BACKUP PASSWORD (password)</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_6-1744867731555.png" style="width: 754px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251688iBD9F8B8D1F205B40/image-dimensions/754x198?v=v2" width="754" height="198" role="button" title="Kumarshalabh_6-1744867731555.png" alt="Kumarshalabh_6-1744867731555.png" /></span></P><P><STRONG>Step 4.</STRONG></P><P>Now run this query to check the DB ID of the tenant DB.</P><pre class="lia-code-sample language-sql"><code>SELECT DATABASE_NAME,
       CASE
         WHEN (DBID = '' AND DATABASE_NAME = 'SYSTEMDB') THEN 1
         WHEN (DBID = '' AND DATABASE_NAME &lt;&gt; 'SYSTEMDB') THEN 3
         ELSE TO_INT(DBID)
       END DATABASE_ID
FROM (SELECT DISTINCT DATABASE_NAME, SUBSTR_AFTER(SUBPATH, '.') AS DBID
      FROM SYS_DATABASES.M_VOLUMES);</code></pre><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_7-1744867807239.png" style="width: 743px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251689i10A2C3D8C7B7BD8D/image-dimensions/743x379?v=v2" width="743" height="379" role="button" title="Kumarshalabh_7-1744867807239.png"
alt="Kumarshalabh_7-1744867807239.png" /></span></P><P>HXP DB ID is 3.</P><P><STRONG>Step 5.</STRONG></P><P>Login in source system HXP at os level and take the Backup of root keys.</P><P>hdbnsutil -backupRootKeys /backup/backup/rootkey/rootkey_HXP.rkb --dbid=3 --type='BACKUP'</P><P>We used here dbid 3 which we checked in Tenant DB earlier screen shot.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_8-1744867861074.png" style="width: 925px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251690i497B5CB81710686D/image-dimensions/925x133?v=v2" width="925" height="133" role="button" title="Kumarshalabh_8-1744867861074.png" alt="Kumarshalabh_8-1744867861074.png" /></span></P><P><STRONG>Step 6.</STRONG></P><P>Login to source system&nbsp; os level validate root keys.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_9-1744867887128.png" style="width: 756px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251691i357681324A4EAF12/image-dimensions/756x121?v=v2" width="756" height="121" role="button" title="Kumarshalabh_9-1744867887128.png" alt="Kumarshalabh_9-1744867887128.png" /></span></P><P>This is rootkey location and root key copy from HXP to HXD.</P><P>/backup/backup/rootkey</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_10-1744867915013.png" style="width: 750px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251692iFFDD33A9F4F65978/image-dimensions/750x197?v=v2" width="750" height="197" role="button" title="Kumarshalabh_10-1744867915013.png" alt="Kumarshalabh_10-1744867915013.png" /></span></P><P>Note: Copy the rootkey_HXP.rkb file from HXP (Source ) to HXD (Target)</P><P><STRONG>Target System Step (HXD)</STRONG></P><P>Step 7.&nbsp; (HXD Target DB)</P><P>Check the encryption status</P><P>Login in Target system in SYSTEM DB and run below query in HANA Studio.</P><P>SELECT * FROM SYS_DATABASES.M_ENCRYPTION_OVERVIEW</P><P>Encryption Status TRUE Coming. 
which means encryption is set up and we can restore the backup.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_11-1744867976990.png" style="width: 747px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251693iE861CBA0A9EB4722/image-dimensions/747x241?v=v2" width="747" height="241" role="button" title="Kumarshalabh_11-1744867976990.png" alt="Kumarshalabh_11-1744867976990.png" /></span></P><P><STRONG>Step 8.</STRONG></P><P>Log in to the target system (HXD) SYSTEMDB and check the DBID with the query below in HANA Studio.</P><pre class="lia-code-sample language-sql"><code>SELECT DATABASE_NAME,
       CASE
         WHEN (DBID = '' AND DATABASE_NAME = 'SYSTEMDB') THEN 1
         WHEN (DBID = '' AND DATABASE_NAME &lt;&gt; 'SYSTEMDB') THEN 3
         ELSE TO_INT(DBID)
       END DATABASE_ID
FROM (SELECT DISTINCT DATABASE_NAME, SUBSTR_AFTER(SUBPATH, '.') AS DBID
      FROM SYS_DATABASES.M_VOLUMES);</code></pre><P>We can see the same DB ID 3 that we saw in HXP (the source DB).</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_12-1744868135090.png" style="width: 739px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251694iB45AD65EC39225C4/image-dimensions/739x220?v=v2" width="739" height="220" role="button" title="Kumarshalabh_12-1744868135090.png" alt="Kumarshalabh_12-1744868135090.png" /></span></P><P>The DB ID is 3.</P><P><STRONG>Step 9.</STRONG></P><P>Log in to the target system HXD at OS level and take a backup of the SSFS keys.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_13-1744868162319.png" style="width: 738px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251695iB56801885B35F916/image-dimensions/738x107?v=v2" width="738" height="107" role="button" title="Kumarshalabh_13-1744868162319.png" alt="Kumarshalabh_13-1744868162319.png" /></span></P><P>Again, the backup was taken with the suffix ssfs_backup.</P><P><STRONG>Step 10.</STRONG></P><P>Log in to the target system (HXD) SYSTEMDB and stop the tenant DB.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_14-1744868184104.png" style="width: 734px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251696i1D02F05E463629D9/image-dimensions/734x323?v=v2" width="734" height="323" role="button" title="Kumarshalabh_14-1744868184104.png" alt="Kumarshalabh_14-1744868184104.png" /></span></P><P><STRONG>Step 11.</STRONG></P><P>Log in to the target system (HXD) at OS level and import the root keys that we exported from the HXP system. Go to the same location /backup/backup/rootkey and run the command below for the import.</P><P>hdbnsutil -recoverRootKeys rootkey_HXP.rkb --dbid=3 --password="password" --type='BACKUP'</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_15-1744868214366.png" style="width: 749px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251697i525B74AEEAC0DF48/image-dimensions/749x131?v=v2" width="749" height="131" role="button" title="Kumarshalabh_15-1744868214366.png" alt="Kumarshalabh_15-1744868214366.png" /></span></P><P><STRONG>Step 12.</STRONG></P><P>Now you can start the restore again, and it will work.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_18-1744868961393.png" style="width: 742px;"><img
src="https://community.sap.com/t5/image/serverpage/image-id/251701i29C10F88CEE182A2/image-dimensions/742x160?v=v2" width="742" height="160" role="button" title="Kumarshalabh_18-1744868961393.png" alt="Kumarshalabh_18-1744868961393.png" /></span></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Kumarshalabh_19-1744868986605.png" style="width: 748px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/251702i910CB75FAD009194/image-dimensions/748x202?v=v2" width="748" height="202" role="button" title="Kumarshalabh_19-1744868986605.png" alt="Kumarshalabh_19-1744868986605.png" /></span></P><P>Thanks for watching it. <span class="lia-unicode-emoji" title=":smiling_face_with_smiling_eyes:">😊</span></P><P>&nbsp;</P> 2025-04-24T13:35:08.558000+02:00 https://community.sap.com/t5/technology-blog-posts-by-members/sap-datasphere-cli-amp-python-exporting-modeling-objects-to-csv-files-for/ba-p/14087080 SAP Datasphere CLI & Python: Exporting Modeling Objects to CSV Files for Each Artifact 2025-04-29T18:57:17.498000+02:00 vikasparmar88 https://community.sap.com/t5/user/viewprofilepage/user-id/1528256 <H1 id="toc-hId-1382837983" id="toc-hId-1580240381">Introduction</H1><P>In this blog post, we'll explore how to use Python alongside SAP Datasphere CLI to extract modeling objects and export them to CSV files. The script allows users to handle artifacts such as remote tables, views, replication flows, and more, for each space in SAP Datasphere.<BR />This solution is particularly useful for automating repetitive tasks and ensuring structured data handling across different modeling objects</P><P><FONT size="5" color="#000000"><STRONG>Prerequisites</STRONG></FONT></P><P>Steps to install SAP Datasphere CLI:&nbsp;</P><P><A href="https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/f7d5eddf20a34a1aa48d8e2c68a44e28.html/" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/f7d5eddf20a34a1aa48d8e2c68a44e28.html/</A>&nbsp;</P><P><A href="https://community.sap.com/t5/technology-blogs-by-sap/sap-datasphere-external-access-overview-apis-cli-and-sql/bc-p/14086942#M180986/" target="_blank">https://community.sap.com/t5/technology-blogs-by-sap/sap-datasphere-external-access-overview-apis-cli-and-sql/bc-p/14086942#M180986/</A>&nbsp;</P><P><FONT size="5" color="#000000"><STRONG>Step-by-Step Process</STRONG></FONT></P><P><STRONG>Step 1: Prepare Login.Json file</STRONG></P><P>Create OAuth Client with Purpose as Interactive Usage and Redirect URL as <A href="http://localhost:8080/" target="_blank" rel="noopener nofollow noreferrer"><EM>http://localhost:8080</EM></A></P><P>Get the value of all below fields from the OAuth Client and prepare the Login.json file.</P><P>&nbsp;</P><pre class="lia-code-sample language-json"><code>{ "client_id": "", "client_secret": "", "authorization_url": "", "token_url": "", "access_token": "", "refresh_token": "" }</code></pre><P>&nbsp;</P><P>&nbsp;<STRONG>Step 2: Create Model_Object.py file with below code</STRONG></P><P><STRONG>dsp host</STRONG> : give URL of Datasphere Tenant.</P><P><STRONG>secrets_file</STRONG> : Give Path of Login.json file.</P><P>&nbsp;</P><pre class="lia-code-sample language-abap"><code>import subprocess import pandas as pd import sys def manage_Modeling_Object(Modeling_Object): # Step 1: Login to Datasphere using host and secrets file dsp_host = '&lt;URL of Datasphere&gt;' secrets_file = '&lt;path&gt;/Login.json' command = f'datasphere login 
    subprocess.run(command, shell=True)  # Execute the login command

    # Step 2: Retrieve a list of all spaces in JSON format
    command = ['datasphere', 'spaces', 'list', '--json']
    result_spaces = subprocess.run(command, capture_output=True, shell=True, text=True)  # Run the command and capture output

    # Step 3: Parse the list of spaces from the command's output
    spaces = result_spaces.stdout.splitlines()  # Split output into individual lines
    ModelingObject_data = []  # Initialize a list to store Modeling Object data

    # Step 4: Check if the Modeling Object is 'spaces'
    if Modeling_Object == 'spaces':
        for space in spaces:
            if space == "[" or space == "]":
                continue  # Skip brackets in the JSON output
            space_id = space.strip()  # Extract space ID
            # Add space details to the data list
            ModelingObject_data.append({
                'Space ID': space_id.replace('"', '').replace(',', ''),
                'Technical Name': space_id.replace('"', '').replace(',', ''),
                'TYPE': Modeling_Object[:-1].upper()  # Set the TYPE as uppercase version of the input Modeling Object name
            })
    # Step 5: Process Modeling Objects for each space
    else:
        for space in spaces:
            if space == "[" or space == "]":
                continue  # Skip brackets in the JSON output
            space_id = space.strip()  # Extract space ID

            # Step 6: Retrieve Modeling Objects for the current space
            command = ['datasphere', 'objects', Modeling_Object, 'list', '--space', space_id.replace('"', '').replace(',', '')]
            result_ModelingObject = subprocess.run(command, capture_output=True, shell=True, text=True)  # Run the command

            # Step 7: Parse the Modeling Object data from the output
            ModelingObject_info = result_ModelingObject.stdout.splitlines()  # Split output into individual lines
            print("Checking " + Modeling_Object.upper() + " for space : " + space_id.replace('"', '').replace(',', ''))  # Log the space being checked

            # Step 8: Process each Modeling Object
            if len(ModelingObject_info) &gt; 1:
                for flow in ModelingObject_info:
                    if '{' in flow or '}' in flow or '[' in flow or ']' in flow:
                        continue  # Skip brackets or braces in the output
                    cleaned_flow = flow.replace('"technicalName":', '').replace('"', '').strip()  # Clean up the output
                    # Step 9: Add Modeling Object details to the data list
                    ModelingObject_data.append({
                        'Space ID': space_id.replace('"', '').replace(',', ''),
                        'Technical Name': cleaned_flow,
                        'TYPE': Modeling_Object[:-1].upper()  # Set the TYPE as uppercase version of the input Modeling Object name
                    })

    # Step 10: Write the collected data into a CSV file
    if ModelingObject_data:
        df = pd.DataFrame(ModelingObject_data)  # Create a DataFrame from the data list
        df.to_csv(Modeling_Object.upper() + '.csv', index=False)  # Save the DataFrame to a CSV file without the index
        print("Space wise all " + Modeling_Object.upper() + " have been written to " + Modeling_Object.upper() + ".csv.")  # Log success message
    else:
        print("No Modeling Objects found.")  # Log message if no data was collected
    print('------------------------------------------------------------------------------------------------------------------------------------')  # Separator for readability

if __name__ == "__main__":
    # Check if an argument is provided via the command line
    if len(sys.argv) &gt; 1:
        # Pass the first argument to the method
        manage_Modeling_Object(sys.argv[1])
    else:
        print("Please provide a Modeling Object name as an argument.")  # Log message if no argument is provided
        # Execute for predefined Modeling Objects
        manage_Modeling_Object('remote-tables')
        manage_Modeling_Object('local-tables')
        manage_Modeling_Object('views')
        manage_Modeling_Object('intelligent-lookups')
        manage_Modeling_Object('data-flows')
        manage_Modeling_Object('replication-flows')
        manage_Modeling_Object('transformation-flows')
        manage_Modeling_Object('task-chains')
        manage_Modeling_Object('analytic-models')
        manage_Modeling_Object('data-access-controls')</code></pre><P>&nbsp;</P><P><STRONG>Step 3: Open the command prompt and execute the Model_Object.py file</STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="vikasparmar88_0-1745646785952.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/255123i23E96AEF92C443CA/image-size/medium?v=v2&amp;px=400" role="button" title="vikasparmar88_0-1745646785952.png" alt="vikasparmar88_0-1745646785952.png" /></span></P><P>Once the program execution is done, it will generate CSV files for all the Datasphere artifacts mentioned in the Python code.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="vikasparmar88_1-1745646877909.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/255124i2D0343903B2ECFC7/image-size/medium?v=v2&amp;px=400" role="button" title="vikasparmar88_1-1745646877909.png" alt="vikasparmar88_1-1745646877909.png" /></span></P><P>Each CSV file will have 3 columns:&nbsp;</P><P>1) Space ID: name of the space</P><P>2) Technical Name: exact technical name of the object&nbsp;</P><P>3) Type: type of the object&nbsp;(i.e. view, local-table, remote-table, replication-flow, etc.)</P><P>&nbsp;</P><H1 id="toc-hId-1383726876">Conclusion</H1><P>This script demonstrates how Python and SAP Datasphere CLI can collaborate to streamline artifact management and export data systematically. By following the steps provided, users can extend or adapt the code to suit their requirements.</P> 2025-04-29T18:57:17.498000+02:00 https://community.sap.com/t5/technology-blog-posts-by-members/sap-datasphere-cli-amp-python-automation-extract-all-artifacts-of/ba-p/14087857 SAP Datasphere CLI & Python Automation : Extract All Artifacts of Datasphere in CSV files. 2025-05-05T10:47:20.047000+02:00 VikasParmar055 https://community.sap.com/t5/user/viewprofilepage/user-id/1716232 <H1 id="toc-hId-1580247983">Introduction</H1><P><BR />In this post, we'll look at how to use Python with SAP Datasphere CLI to extract data objects and save them as CSV files. The script helps you manage items like remote tables, views, replication flows, and more for every space in SAP Datasphere. It's a great tool for automating repeated tasks and keeping data organized across different objects.<BR /><BR /></P><H1 id="toc-hId-1383734478">Use Cases:</H1><P><SPAN><BR /></SPAN><SPAN>1. <STRONG>Validate Naming Convention</STRONG>: generated files can be used as a source in Datasphere to validate the naming convention for all artifacts</SPAN><SPAN><BR /></SPAN><SPAN>2.
Identify and delete unnecessary objects from the Datasphere tenant.</SPAN></P><H1 id="toc-hId-1187220973"><STRONG>Prerequisites</STRONG></H1><P>Steps to install SAP Datasphere CLI:&nbsp;</P><P><A href="https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/f7d5eddf20a34a1aa48d8e2c68a44e28.html/" target="_blank" rel="noopener noreferrer">https://help.sap.com/docs/SAP_DATASPHERE/d0ecd6f297ac40249072a44df0549c1a/f7d5eddf20a34a1aa48d8e2c68...</A>&nbsp;</P><P><A href="https://community.sap.com/t5/technology-blogs-by-sap/sap-datasphere-external-access-overview-apis-cli-and-sql/bc-p/14086942#M180986/" target="_blank">https://community.sap.com/t5/technology-blogs-by-sap/sap-datasphere-external-access-overview-apis-cl...</A>&nbsp;</P><P><FONT size="5" color="#000000"><STRONG>Step-by-Step Process</STRONG></FONT></P><P><STRONG>Step 1: Prepare the Login.json file</STRONG></P><P>Create an OAuth client with the purpose Interactive Usage and the redirect URL<SPAN>&nbsp;</SPAN><A href="http://localhost:8080/" target="_blank" rel="noopener nofollow noreferrer"><EM>http://localhost:8080</EM></A></P><P>Get the values of all the fields below from the OAuth client and prepare the Login.json file.</P><PRE>{
  "client_id": "",
  "client_secret": "",
  "authorization_url": "",
  "token_url": "",
  "access_token": "",
  "refresh_token": ""
}</PRE><P>&nbsp;<STRONG>Step 2: Create the Model_Object.py file with the code below</STRONG></P><P><STRONG>dsp_host</STRONG><SPAN>&nbsp;</SPAN>: the URL of your Datasphere tenant.</P><P><STRONG>secrets_file</STRONG><SPAN>&nbsp;</SPAN>: the path to the Login.json file.</P><PRE>import subprocess
import pandas as pd
import sys

def manage_Modeling_Object(Modeling_Object):
    # Step 1: Login to Datasphere using host and secrets file
    dsp_host = '&lt;URL of Datasphere&gt;'
    secrets_file = '&lt;path&gt;/Login.json'
    command = f'datasphere login --host {dsp_host} --secrets-file {secrets_file}'
    subprocess.run(command, shell=True)  # Execute the login command

    # Step 2: Retrieve a list of all spaces in JSON format
    command = ['datasphere', 'spaces', 'list', '--json']
    result_spaces = subprocess.run(command, capture_output=True, shell=True, text=True)  # Run the command and capture output

    # Step 3: Parse the list of spaces from the command's output
    spaces = result_spaces.stdout.splitlines()  # Split output into individual lines
    ModelingObject_data = []  # Initialize a list to store Modeling Object data

    # Step 4: Check if the Modeling Object is 'spaces'
    if Modeling_Object == 'spaces':
        for space in spaces:
            if space == "[" or space == "]":
                continue  # Skip brackets in the JSON output
            space_id = space.strip()  # Extract space ID
            # Add space details to the data list
            ModelingObject_data.append({
                'Space ID': space_id.replace('"', '').replace(',', ''),
                'Technical Name': space_id.replace('"', '').replace(',', ''),
                'TYPE': Modeling_Object[:-1].upper()  # Set the TYPE as uppercase version of the input Modeling Object name
            })
    # Step 5: Process Modeling Objects for each space
    else:
        for space in spaces:
            if space == "[" or space == "]":
                continue  # Skip brackets in the JSON output
            space_id = space.strip()  # Extract space ID

            # Step 6: Retrieve Modeling Objects for the current space
            command = ['datasphere', 'objects', Modeling_Object, 'list', '--space', space_id.replace('"', '').replace(',', '')]
            result_ModelingObject = subprocess.run(command, capture_output=True, shell=True, text=True)  # Run the command

            # Step 7: Parse the Modeling Object data from the output
            ModelingObject_info = result_ModelingObject.stdout.splitlines()  # Split output into individual lines
            print("Checking " + Modeling_Object.upper() + " for space : " + space_id.replace('"', '').replace(',', ''))  # Log the space being checked

            # Step 8: Process each Modeling Object
            if len(ModelingObject_info) &gt; 1:
                for flow in ModelingObject_info:
                    if '{' in flow or '}' in flow or '[' in flow or ']' in flow:
                        continue  # Skip brackets or braces in the output
                    cleaned_flow = flow.replace('"technicalName":', '').replace('"', '').strip()  # Clean up the output
                    # Step 9: Add Modeling Object details to the data list
                    ModelingObject_data.append({
                        'Space ID': space_id.replace('"', '').replace(',', ''),
                        'Technical Name': cleaned_flow,
                        'TYPE': Modeling_Object[:-1].upper()  # Set the TYPE as uppercase version of the input Modeling Object name
                    })

    # Step 10: Write the collected data into a CSV file
    if ModelingObject_data:
        df = pd.DataFrame(ModelingObject_data)  # Create a DataFrame from the data list
        df.to_csv(Modeling_Object.upper() + '.csv', index=False)  # Save the DataFrame to a CSV file without the index
        print("Space wise all " + Modeling_Object.upper() + " have been written to " + Modeling_Object.upper() + ".csv.")  # Log success message
    else:
        print("No Modeling Objects found.")  # Log message if no data was collected
    print('------------------------------------------------------------------------------------------------------------------------------------')  # Separator for readability

if __name__ == "__main__":
    # Check if an argument is provided via the command line
    if len(sys.argv) &gt; 1:
        # Pass the first argument to the method
        manage_Modeling_Object(sys.argv[1])
    else:
        print("Please provide a Modeling Object name as an argument.")  # Log message if no argument is provided
        # Execute for predefined Modeling Objects
        manage_Modeling_Object('remote-tables')
        manage_Modeling_Object('local-tables')
        manage_Modeling_Object('views')
        manage_Modeling_Object('intelligent-lookups')
        manage_Modeling_Object('data-flows')
        manage_Modeling_Object('replication-flows')
        manage_Modeling_Object('transformation-flows')
        manage_Modeling_Object('task-chains')
        manage_Modeling_Object('analytic-models')
        manage_Modeling_Object('data-access-controls')</PRE><P><STRONG>Step 3: Open the command prompt and execute the Model_Object.py file</STRONG></P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="VikasParmar055_0-1745819896727.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/255331iD13C18E5F83CA56D/image-size/medium?v=v2&amp;px=400" role="button" title="VikasParmar055_0-1745819896727.png" alt="VikasParmar055_0-1745819896727.png" /></span></P><P>&nbsp;</P><P>Once the program execution is done, it will generate CSV files for all the Datasphere artifacts mentioned in the Python code.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="VikasParmar055_1-1745819896735.png" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/255332iD2C8108036EAC9AD/image-size/medium?v=v2&amp;px=400" role="button" title="VikasParmar055_1-1745819896735.png" alt="VikasParmar055_1-1745819896735.png" /></span></P><P>Each CSV file will have 3 columns:&nbsp;</P><P>1) Space ID: name of the space</P><P>2) Technical Name: exact technical name of the object&nbsp;</P><P>3) Type: type of the object&nbsp;(i.e. view, local-table, remote-table, replication-flow, etc.)</P><H1 id="toc-hId-990707468">Conclusion</H1><P>This script demonstrates how Python and SAP Datasphere CLI can collaborate to streamline
artifact management and export data systematically. By following the steps provided, users can extend or adapt the code to suit their requirements.</P> 2025-05-05T10:47:20.047000+02:00 https://community.sap.com/t5/supply-chain-management-blog-posts-by-sap/sap-green-token-goodbye-blockchain-hello-faster-and-more-efficient/ba-p/14075099 🚀 SAP Green Token: Goodbye Blockchain, Hello Faster and More Efficient Auditability 2025-05-06T09:00:00.027000+02:00 GJFigaroa https://community.sap.com/t5/user/viewprofilepage/user-id/1471776 <P class="">Blockchain has been a long-standing and energy-efficient component of <a href="https://community.sap.com/t5/c-khhcw49343/SAP+Green+Token/pd-p/73555000100800004161" class="lia-product-mention" data-product="1225-1">SAP Green Token</a>. As we continuously strive to increase the value to our customers and partners, Green Token also continues to evolve. As part of Green Token’s continuous evolution, the strategic decision has been made to&nbsp;<STRONG>sunset the blockchain component</STRONG>.</P><HR /><H2 id="toc-hId-1708340037"><span class="lia-unicode-emoji" title=":magnifying_glass_tilted_left:">🔍</span>What were the initial drivers to adopt blockchain?</H2><P class="">When Green Token was first developed in 2019, the vision was to create a blockchain network where companies in the value chain could collaborate on <STRONG>end-to-end traceability and transparency</STRONG> across the supply chain.</P><HR /><H2 id="toc-hId-1511826532"><span class="lia-unicode-emoji" title=":question_mark:">❓</span>Why are we now removing it?</H2><P class="">The reality is that the market and supply chains have responded differently to the level of transparency required and the level of transparency that is willingly disclosed beyond tier-1 supply chain partners.</P><P class="">Even though the transition to a sustainable economy is accelerating, the <STRONG>adoption of extensive transparency milestones across the end-to-end supply chain still lags</STRONG>. <A href="https://www.mckinsey.com/capabilities/operations/our-insights/supply-chain-risk-survey?utm_source=chatgpt.com" target="_self" rel="nofollow noopener noreferrer">A recent study by McKinsey (2024)</A> showed that even though tier-1 transparency continues to increase, deeper tier-n visibility is decreasing.</P><P class="">In fact, <STRONG>blockchain fatigue</STRONG> has become a well-known reality across the industry. According to Gartner, 90% of blockchain-based supply chain initiatives were expected to suffer from blockchain fatigue by 2023, placing the technology firmly in the <EM>Trough of Disillusionment</EM> on the hype cycle. 
This mirrors direct feedback from our customers—<STRONG>interest in blockchain networks simply didn’t materialize</STRONG>.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Transparency Evolution.jpg" style="width: 400px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/250283i0FBABFF717FC13AA/image-size/medium?v=v2&amp;px=400" role="button" title="Transparency Evolution.jpg" alt="Transparency Evolution.jpg" /></span></P><HR /><H2 id="toc-hId-1315313027"><span class="lia-unicode-emoji" title=":white_heavy_check_mark:">✅</span>What do customers actually want?</H2><P class="">The main need and ask for Green Token is and always has been: <STRONG>chain-of-custody accounting of sustainable materials across a single organization</STRONG> (including intra-company transfers) and its tier-1 material movements.</P><P class="">Which is also the main value proposition Green Token continues to offer today.</P><P class="">At the end of the day, organizations and value chains really value two things:</P><UL><LI><P class=""><STRONG>Auditability and traceability</STRONG> of transactions</P></LI><LI><P class=""><STRONG>Fast and efficient performance</STRONG> without unnecessary complexity</P></LI></UL><HR /><H2 id="toc-hId-1118799522"><span class="lia-unicode-emoji" title=":counterclockwise_arrows_button:">🔄</span>What’s changing under the hood?</H2><P class="">As part of ongoing improvements, Green Token is being migrated to <STRONG>SAP HANA</STRONG>, enhancing speed, efficiency, and security while maintaining full auditability.</P><P class="">From the start, Green Token has been built on the principle of <STRONG>digital twins</STRONG>, which act as a real-world representation of materials, transactions, and ownership. The blockchain component supplemented auditability capabilities, but the foundation was—and remains—in Green Token’s <STRONG>persistence layer</STRONG>, including its digital twin and audit logging system.</P><P class="">By removing the blockchain, we’re increasing Green Token’s performance:</P><UL><LI><P class="">Transactions are <STRONG>faster</STRONG> due to optimized data storage and retrieval</P></LI><LI><P class=""><STRONG>Security and traceability remain intact</STRONG>, ensuring compliance with reporting requirements</P></LI></UL><HR /><H2 id="toc-hId-922286017"><span class="lia-unicode-emoji" title=":locked_with_key:">🔐</span>Trust without blockchain?</H2><P class="">Blockchain is often used to represent trust. But <STRONG>trust isn’t built by technology alone</STRONG>—it’s built through <STRONG>verifiable data, traceability, and robust audit trails</STRONG>.</P><P class="">SAP Green Token continues to provide the same level of data integrity and transaction traceability, ensuring continuous compliance with changing regulations and standards—<STRONG>without the overhead of blockchain</STRONG>.</P><HR /><H2 id="toc-hId-725772512"><span class="lia-unicode-emoji" title=":light_bulb:">💡</span>So what <EM>is</EM> a token?</H2><P class="">With that, SAP Green Token remains SAP Green Token. 
After all, <STRONG>tokenization ≠ blockchain</STRONG>.</P><P class="">A token is a <STRONG>representation of something in the real world</STRONG>, and that is exactly what we continue to provide.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-center" image-alt="Green Token - Material Balance.jpg" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/250282i6278D16B33771055/image-size/large?v=v2&amp;px=999" role="button" title="Green Token - Material Balance.jpg" alt="Green Token - Material Balance.jpg" /></span></P><BLOCKQUOTE><P class="">A token is like a marble in a bucket—you can still see how many there are, where they came from, and where they go, without needing a blockchain.</P></BLOCKQUOTE><P class="">With the migration to SAP HANA, the digital twin tokens will reside in SAP HANA along with audit logs supported by SAP BTP, providing transparency over all transactions.</P><HR /><H2 id="toc-hId-529259007"><span class="lia-unicode-emoji" title=":articulated_lorry:">🚛</span>For certified material movements, we’re just getting faster</H2><P class="">When it comes to <STRONG>certified sustainable material movements and chain-of-custody accounting</STRONG>, SAP Green Token remains a <STRONG>secure and auditable solution for supply chain compliance</STRONG>—just faster and more efficient going forward.</P><P class="">&nbsp;</P><BLOCKQUOTE><P class=""><span class="lia-unicode-emoji" title=":small_blue_diamond:">🔹</span> <EM>If you're already live or currently implementing Green Token, no action is required—this change will not impact your setup or ongoing implementation.<BR /><BR /></EM></P></BLOCKQUOTE><HR /><P>Hi!<SPAN>&nbsp;<span class="lia-unicode-emoji" title=":waving_hand:">👋</span>&nbsp;</SPAN>I’m Gloria Figaroa. Since 2015, I have lived and breathed sustainability and supply chain management.&nbsp;I’ve had the pleasure of working on Green Token for the past 4+ years as a Product Manager. Now in my latest role as Product Marketing Manager, I take care of everything Go-to-Market.&nbsp;<span class="lia-unicode-emoji" title=":globe_showing_europe_africa:">🌍</span></P><P>Follow me to stay up to date on the SAP Green Token solution, its newest features and functions, and detailed overviews of its capabilities. You will hear it first here!&nbsp;<span class="lia-unicode-emoji" title=":loudspeaker:">📢</span></P> 2025-05-06T09:00:00.027000+02:00 https://community.sap.com/t5/enterprise-resource-planning-blog-posts-by-sap/data-volume-problems-during-an-sap-s-4hana-conversion/ba-p/14079194 Data volume problems during an SAP S/4HANA Conversion 2025-05-07T12:06:41.075000+02:00 YvonneNozukoMaboyi https://community.sap.com/t5/user/viewprofilepage/user-id/478549 <P>An SAP S/4HANA conversion refers to the migration from a legacy SAP ERP system, such as SAP ECC, to the modern SAP S/4HANA business suite. This process is essentially a platform upgrade that includes adapting data, system configurations, and custom developments to ensure compatibility with S/4HANA.
Click <A href="https://help.sap.com/doc/e2048712f0ab45e791e6d15ba5e20c68/2023/en-US/FSD_OP2023_latest.pdf" target="_blank" rel="noopener noreferrer">here</A> to read more about the features of S/4HANA.</P><P>There are three approaches available when embarking on this journey (read a more in-depth discussion about them <A href="https://www.leanix.net/en/wiki/tech-transformation/s4hana-greenfield-vs-brownfield-approach" target="_blank" rel="noopener nofollow noreferrer">here</A>):</P><OL><LI>Greenfield approach - starting from a clean slate.</LI><LI>Brownfield approach - more like an in-place system upgrade.</LI><LI>Hybrid approach - combining the best parts of the Greenfield and Brownfield approaches.</LI></OL><P>Many customers embarking on this transformation journey opt for the brownfield conversion approach. The brownfield approach retains your historical data, which can lead to the technical challenge we will discuss and solve in this blog.</P><P>SAP provides a process for the conversion to SAP S/4HANA, illustrated in the figure below, which gives an overview of the tools, phases, and activities involved. The activity we will focus on is t5 (Software Update Manager (SUM)), specifically the “Data Conversion” task:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="S4 Conversion process.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/252114i0F5E2BB01D10A3D1/image-size/large?v=v2&amp;px=999" role="button" title="S4 Conversion process.png" alt="S4 Conversion process.png" /></span></P><P><STRONG><EM>For further reading:</EM></STRONG></P><UL><LI>Conversion phases and related activities are documented in the <A href="https://help.sap.com/doc/2b87656c4eee4284a5eb8976c0fe88fc/2023/en-US/CONV_OP2023.pdf" target="_blank" rel="noopener noreferrer">“Conversion Guide for SAP S/4HANA and SAP S/4HANA Cloud Private Edition 2023”</A>, which can be found under the Implement section of the <A href="https://help.sap.com/docs/SAP_S4HANA_ON-PREMISE" target="_blank" rel="noopener noreferrer">SAP S/4HANA | SAP Help Portal</A>.</LI><LI>For information specific to accounting components, please refer to SAP Note <A href="https://me.sap.com/notes/2332030" target="_blank" rel="noopener noreferrer">2332030 - Conversion of accounting to SAP S/4HANA</A>.</LI></UL><P>The “Data Conversion” task is the conversion of your data into the <STRONG>new data structure</STRONG> used by SAP S/4HANA.</P><P>At this point you might be wondering, <STRONG>“What new data structure is used by SAP S/4HANA?”</STRONG> Well, I’m glad you asked. Let me explain.</P><P>The data model used in SAP S/4HANA has been simplified drastically compared with the data model used in SAP ERP. 
If you look at the Finance section of the diagram below, you will see that many of the legacy tables have been consolidated into a simplified data model focused on the ACDOCA table.</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Finance tables S4HANA.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/252128iC156F7648421FF2B/image-size/large?v=v2&amp;px=999" role="button" title="Finance tables S4HANA.png" alt="Finance tables S4HANA.png" /></span></P><P>Furthermore, the effects of the Accounting Data Model Changes and how they impact the conversion are described as follows in an extract from the <A href="https://help.sap.com/doc/71b0ce5b15c2432da915b70bccdc79da/S4_FIN_CONVERSION_DEV/en-US/577ed44f726d2176e10000000a42189c.pdf" target="_blank" rel="noopener noreferrer">Converting Your Accounting Components to SAP S/4HANA</A> guide:</P><P><EM>To use Finance in SAP S/4HANA you must migrate the existing user data from the General Ledger, Asset Accounting, Controlling and Material Ledger. The data migration is necessary because Finance in SAP S/4HANA rests on a uniform data model for all accounting areas. The comprehensive ACDOCA data table contains all line-item documents. <STRONG>After the migration, all postings of the named applications are written into the ACDOCA table.</STRONG></EM></P><P><EM>The following tables are replaced by views using the same technical names: </EM></P><UL><LI><EM>The line item, totals tables and application index tables of the General Ledger (GLT0, BSIS, BSAS and FAGLFLEXA, FAGLFLEXT, FAGLBSIS, FAGLBSAS).</EM></LI><LI><EM>The totals tables and application index tables of Accounts Receivable and Accounts Payable (KNC1, KNC3, LFC1, LFC3, BSID, BSIK, BSAD, BSAK).</EM></LI><LI><EM>The line item and totals tables of Controlling (COEP for certain value types, COSP and COSS).</EM></LI><LI><EM>The material ledger tables for parallel valuations (MLIT, MLPP, MLPPF, MLCR, MLCD, CKMI1, BSIM).</EM></LI><LI><EM>The Asset Accounting tables (ANEK, ANEP, ANEA, ANLP, ANLC).</EM></LI></UL>
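<P>To picture what “replaced by views” means technically, here is a deliberately simplified, hypothetical sketch. The real compatibility views are generated by SAP under the original table names and are far more complex; the view name, field selection, and omitted filters below are assumptions chosen purely to illustrate the idea:</P><pre class="lia-code-sample language-sql"><code>-- Hypothetical sketch of a compatibility view over the unified journal.
-- In a real system, SAP generates the view under the original table name
-- (e.g. BSIS) with many more fields and filter conditions.
CREATE VIEW "BSIS_SKETCH" AS
SELECT rclnt  AS mandt,   -- client
       rbukrs AS bukrs,   -- company code
       racct  AS hkont,   -- G/L account
       belnr,             -- document number
       gjahr,             -- fiscal year
       hsl                -- amount in company code currency
  FROM acdoca;            -- filter conditions omitted for brevity</code></pre><P>Reads through the old table names keep working, while every posting physically lands in ACDOCA, which is exactly why the size of ACDOCA becomes the concern.</P>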
<P><STRONG>This is the crux of our challenge:</STRONG> data that was previously distributed across multiple tables in the legacy system is now consolidated into a single table in the S/4HANA environment: ACDOCA.</P><P>Although this may not appear to be an issue under normal circumstances, in high-volume scenarios technical limitations of the HANA database platform can arise when a single table (or partition) approaches a very large number of records. Specifically, each table or partition in HANA is limited to a maximum of 2 billion entries.</P><P><STRONG>The solution:</STRONG> table partitioning!</P><P>Let’s do a deep dive into partitioning and its role during conversions by exploring a few questions.</P><P><STRONG>Question 1: What is table partitioning?</STRONG></P><P>According to the <A href="https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/c2ea130bbb571014b024ffeda5090764.html?version=2.0.08" target="_blank" rel="noopener noreferrer">SAP Help Portal</A>: The partitioning feature of the SAP HANA database splits column-store tables horizontally into disjunctive sub-tables or partitions. <STRONG>In this way, large tables can be broken down into smaller, more manageable parts.</STRONG> Partitioning is typically used in multiple-host systems, but it may also be beneficial in single-host systems.</P><P><STRONG>Question 2: Why do we need to do partitioning?</STRONG></P><P>As mentioned earlier, there is a limitation of 2 billion records per table (partition) on the HANA database. This is described in more detail in SAP Note <A href="https://me.sap.com/notes/2154870" target="_blank" rel="noopener noreferrer">2154870 - How-To: Understanding and defining SAP HANA Limitations</A>.</P><P>If the limit is reached, errors such as the following can be generated:</P><UL><LI>2048: column store error: [17] Docid overflow in current delta.</LI><LI>2055: maximum number of rows per table or partition reached: '&lt;schema&gt;.&lt;table&gt;'</LI><LI>Maximum number of rows per partition reached.</LI><LI>Max row count exceeded.</LI><LI>exception 1: no.3100025: Allowed rowcount exceeded.</LI></UL><P>Below is an example of a short dump with the error “maximum number of rows per table or partition reached” that can occur in step ID MUJ “Data Migration into Unified Journal: Line Items”:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ACDOCA error.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/252119i5BB42235D9D2D697/image-size/large?v=v2&amp;px=999" role="button" title="ACDOCA error.png" alt="ACDOCA error.png" /></span></P>
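<P>Long before a short dump forces the issue, you can check how close your largest column-store tables already are to the limit. Here is a simple sketch against the HANA monitoring view M_CS_TABLES; the 1.5 billion threshold is an arbitrary early-warning value of ours, not an SAP recommendation:</P><pre class="lia-code-sample language-sql"><code>-- Which column-store tables or partitions are nearing 2 billion records?
-- PART_ID = 0 means the table is not partitioned at all.
SELECT schema_name,
       table_name,
       part_id,
       record_count
  FROM m_cs_tables
 WHERE record_count &gt; 1500000000   -- illustrative early-warning threshold
 ORDER BY record_count DESC;</code></pre><P>This only shows current sizes; estimating what ACDOCA will contain after the conversion is the job of the Readiness Check, which brings us to the next question.</P>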
<P><STRONG>Question 3: How do we know that we need to do partitioning for the conversion?</STRONG></P><P>The Readiness Check, which forms part of the Preparation Phase, can guide the need for partitioning. It details the record counts of the existing tables in ECC (classic General Ledger or New General Ledger) and also gives an estimate of the number of records expected in ACDOCA. See the table below as an example:</P><P><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="RC tables.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/252153iA58646AFDC05D81C/image-size/large?v=v2&amp;px=999" role="button" title="RC tables.png" alt="RC tables.png" /></span></P><P>The calculation is not exact, but it already shows that the number of records in ACDOCA will exceed 1 billion; in this case, partitioning is required.</P><P><STRONG>Further Reading:</STRONG> Great community blog - <A href="https://community.sap.com/t5/enterprise-resource-planning-blogs-by-sap/sap-s-4hana-simplification-item-check-how-to-do-it-right/ba-p/13386669" target="_blank">SAP S/4HANA Simplification Item Check - How to do it right.</A></P><P><STRONG>Note:</STRONG> Ensure that all relevant notes are implemented to get a thorough analysis and accurate results.</P><P><STRONG>Question 4: How should we partition the tables?</STRONG></P><P>Luckily, there is quite a bit of guidance on partitioning in general, as well as specific guidance for Finance tables. Please review the following:</P><OL><LI>SAP Note <A href="https://me.sap.com/notes/2044468/E" target="_blank" rel="noopener noreferrer">2044468 - FAQ: SAP HANA Partitioning</A></LI><LI>SAP Note <A href="https://me.sap.com/notes/2289491" target="_blank" rel="noopener noreferrer">2289491 - Best Practices for Partitioning of Finance Tables</A></LI><LI>For scale-out scenarios, read SAP Note <A href="https://me.sap.com/notes/3325787" target="_blank" rel="noopener noreferrer">3325787 - Partition-Distribution of Financial Data in SAP S/4HANA</A></LI><LI>Guidance provided in the SAP Customizing Implementation Guide:</LI></OL><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="ACDOCA partition.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/252121i16E1DA0A2FC41736/image-size/large?v=v2&amp;px=999" role="button" title="ACDOCA partition.png" alt="ACDOCA partition.png" /></span></P><P class="lia-indent-padding-left-30px" style="padding-left : 30px;"><span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Partition text.png" style="width: 999px;"><img src="https://community.sap.com/t5/image/serverpage/image-id/252124i938F046DEB407BFA/image-size/large?v=v2&amp;px=999" role="button" title="Partition text.png" alt="Partition text.png" /></span></P><P>When designing a partitioning strategy, it is important to analyze how the data is distributed over Fiscal Periods and Years, and to anticipate future data growth.</P><P><STRONG>Note:</STRONG> Pay attention to the following guidance from SAP Note <A href="https://me.sap.com/notes/2289491" target="_blank" rel="noopener noreferrer">2289491 - Best Practices for Partitioning of Finance Tables</A> when creating future partitions: too many empty partitions can affect the HANA optimizer and cause poor performance for queries that don't apply partition pruning.</P>
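<P>To make this concrete, here is a hypothetical example of what a range-partitioning statement for ACDOCA can look like. The schema name, the choice of FISCYEARPER (fiscal year/period) as the partitioning column, and the range boundaries are all assumptions for illustration; derive the actual approach and values from the notes above, the Customizing activity, and your own data distribution:</P><pre class="lia-code-sample language-sql"><code>-- Hypothetical sketch: range partitioning of ACDOCA by fiscal year/period.
-- Schema name and boundaries are illustrative only; size the ranges from
-- your own fiscal-period distribution and expected growth, and avoid
-- creating many empty future partitions (see SAP Note 2289491).
ALTER TABLE "SAPHANADB"."ACDOCA"
  PARTITION BY RANGE ("FISCYEARPER")
  (
    PARTITION '0000000' &lt;= VALUES &lt; '2023001',  -- all historical data
    PARTITION '2023001' &lt;= VALUES &lt; '2025001',
    PARTITION '2025001' &lt;= VALUES &lt; '2027001',  -- near-term growth
    PARTITION OTHERS                            -- catch-all for later periods
  );</code></pre><P>Each partition then has its own 2-billion-record budget, and queries that filter on the partitioning column can prune the partitions they don't need.</P>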
<P><STRONG>Question 5: When should partitioning be done during the conversion process?</STRONG></P><P>The partitioning of the ACDOCA table, in the target system, can be done once the SUM tool has reached the “business downtime” phase (this phase is known as DOWNCONF_DTTRANS) on the source system (in the SUM uptime phase).</P><P><STRONG>Note 1:</STRONG> Other tables are partitioned automatically by the SUM tool.</P><P><STRONG>Note 2:</STRONG> If partitioning isn’t performed before the data migration, it is recommended to reset the migration rather than trying to apply partitioning to an already populated ACDOCA; once records exist in ACDOCA, partitioning becomes significantly slower. Resetting the migration (step ID MUJ is where ACDOCA begins to be populated) clears the records in ACDOCA, after which you can proceed with partitioning.</P><P>Before we wrap up, we would like to mention that it is very important to focus on Data Volume Management before the conversion.</P><P>The benefits are:</P><OL><LI>A reduced data footprint, which results in a shorter conversion duration.</LI><LI>Reduced memory requirements for the new SAP S/4HANA system.</LI></OL><P>The following guide and notes contain useful reading on the topic:</P><UL><LI><A href="https://me.sap.com/notes/2818267" target="_blank" rel="noopener noreferrer">2818267 - Data Volume Management for FI/CO during migration to SAP S/4HANA</A></LI><LI><A href="https://help.sap.com/doc/195dd7408c7447388c1bb9e54a5f6a31/1.0/en-US/DMV_CONV.pdf" target="_blank" rel="noopener noreferrer">Data Volume Management during an SAP S/4HANA Conversion</A></LI></UL><P>Hope you enjoyed this blog! Please feel free to leave suggestions and comments.</P><P>This blog was co-authored by&nbsp;<a href="https://community.sap.com/t5/user/viewprofilepage/user-id/681">@DotEiserman</a>.</P> 2025-05-07T12:06:41.075000+02:00