---
layout: default
title: "Infosys Launches Open-Source Responsible AI Toolkit"
description: "An IT Brief Australia report on Infosys's launch of its open-source Responsible AI Toolkit, featuring commentary from Sunil Abraham on the importance of open innovation for advancing safe and responsible AI development."
categories: [Artificial Intelligence, Media mentions]
date: 2025-03-03
authors: ["Sean Mitchell"]
source: "IT Brief Australia"
permalink: /media/infosys-responsible-ai-toolkit-it-brief/
created: 2026-01-06
---

**Infosys Launches Open-Source Responsible AI Toolkit** is an *IT Brief Australia* article published on 3 March 2025. The report covers Infosys's decision to release its Responsible AI Toolkit as open-source software, designed to address ethical AI challenges including bias, privacy breaches, and security vulnerabilities. The article features Sunil Abraham's statement emphasising how open-source code and datasets empower diverse AI innovators to prioritise safety, diversity, and economic opportunity.

## Contents

1. [Article Details](#article-details)
2. [Full Text](#full-text)
3. [Context and Background](#context-and-background)
4. [External Link](#external-link)

## Article Details
- 📰 **Published in:** IT Brief Australia
- 📅 **Date:** 3 March 2025
- 👤 **Authors:** Sean Mitchell
- 📄 **Type:** Technology News Report
- 🔗 **Article Link:** Read Online
## Full Text

Infosys has announced the launch of its open-source Responsible AI Toolkit, aimed at assisting enterprises in innovating responsibly while addressing challenges and risks associated with ethical AI adoption.

The Responsible AI Toolkit forms part of the Infosys Topaz Responsible AI Suite and is developed within the framework of the Infosys AI3S model, which stands for Scan, Shield, and Steer. It provides businesses with advanced technical guardrails to detect and manage issues such as privacy breaches, security attacks, and biased outputs. Fully customisable and open source, it is designed to enhance model transparency and to offer flexibility and ease of implementation across diverse digital environments.
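
To illustrate the Scan, Shield, and Steer pattern described above, here is a minimal, hypothetical Python sketch of how such guardrails might wrap a model call. All names (`scan_for_pii`, `shield`, `steer`, `guarded_call`) and the simple regex-based PII check are illustrative assumptions, not the actual API of the Infosys Responsible AI Toolkit.

```python
# Hypothetical sketch of a scan/shield/steer guardrail pipeline.
# Function and class names are illustrative assumptions, not the
# Infosys Responsible AI Toolkit's real API.
import re
from dataclasses import dataclass, field


@dataclass
class ScanReport:
    pii_found: bool
    flagged_terms: list = field(default_factory=list)


def scan_for_pii(text: str) -> ScanReport:
    """Scan: detect simple PII patterns (email addresses here) in the prompt."""
    emails = re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    return ScanReport(pii_found=bool(emails), flagged_terms=emails)


def shield(text: str, report: ScanReport) -> str:
    """Shield: redact anything the scan flagged before it reaches the model."""
    for term in report.flagged_terms:
        text = text.replace(term, "[REDACTED]")
    return text


def steer(report: ScanReport, audit_log: list) -> None:
    """Steer: record the decision so governance reviews can audit it later."""
    audit_log.append({"pii_found": report.pii_found,
                      "redactions": len(report.flagged_terms)})


def guarded_call(prompt: str, model_fn, audit_log: list) -> str:
    """Run scan -> shield -> steer around an arbitrary model call."""
    report = scan_for_pii(prompt)
    safe_prompt = shield(prompt, report)
    steer(report, audit_log)
    return model_fn(safe_prompt)


if __name__ == "__main__":
    log = []
    echo_model = lambda p: f"Model received: {p}"  # stand-in for a real LLM call
    print(guarded_call("Summarise the email from jane.doe@example.com", echo_model, log))
    print(log)
```

In this sketch the audit list stands in for a governance log; a production guardrail layer would cover far more than a single PII pattern, but the scan-then-shield-then-steer sequencing is the point being illustrated.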

Balakrishna D. R., Executive Vice President and Global Services Head, AI and Industry Verticals at Infosys, stated, "As AI becomes central to driving enterprise growth, its ethical adoption is no longer optional. The Infosys Responsible AI Toolkit ensures that businesses remain resilient and trustworthy while navigating the AI revolution. By making the toolkit open source, we are fostering a collaborative ecosystem that addresses the complex challenges of AI bias, opacity, and security. It's a testament to our commitment to making AI safe, reliable, and ethical for all."

Infosys launched its Responsible AI Office last year, reaffirming its commitment to ethical AI practices. The company is also one of the first to achieve ISO 42001:2023 certification on AI management systems and actively participates in global discussions on Responsible AI through memberships in key industry bodies and government initiatives.

Joshua Bamford, Head of Science, Technology and Innovation at the British High Commission, commented on Infosys' move, saying, "Infosys' commitment to becoming an AI-first business and establishing the Responsible AI Office reflects bold innovation and ethical leadership. By going open source, Infosys is empowering enterprises, startups and SMEs to leverage AI for groundbreaking advancements. Their Responsible AI Toolkit is a benchmark for technological excellence and when paired with a commitment to responsible practices and global sustainability can be an inspiring model for companies worldwide."

Sunil Abraham, Public Policy Director in charge of Data Economy and Emerging Tech at Meta, expressed support for the initiative. "We congratulate Infosys on launching an openly available Responsible AI Toolkit, which will contribute to advancing safe and responsible AI through open innovation. Open-source code and open datasets are essential to empower a broad spectrum of AI innovators, builders, and adopters with the information and tools needed to harness the advancements in ways that prioritize safety, diversity, economic opportunity and benefits to all," he said.

Abhishek Singh, Additional Secretary at the Ministry of Electronics and Information Technology in India, stated, "I am very happy to learn that Infosys has decided to open source their Responsible AI Toolkit. This will go a long way in making tools available for enhancing Security, Privacy, Safety, Explainability and Fairness in AI based solutions and also help in mitigating bias in AI algorithms and models. This is critical for developing safe, trusted and responsible AI solutions. I am sure, startups and AI developers will greatly benefit from this Responsible AI Toolkit."

Infosys continues to position itself as an active participant in the global dialogue on ethical AI. Through collaborations and memberships in initiatives such as the NIST AI Safety Institute Consortium and WEF AIGA, Infosys contributes to shaping the frameworks and practices surrounding responsible AI technologies worldwide.

{% include back-to-top.html %}

## Context and Background

This announcement emerged during a period when enterprises increasingly recognised that deploying AI without ethical safeguards created substantial legal and reputational risks. High-profile algorithmic failures throughout 2024-2025, from discriminatory hiring systems to biased lending decisions, had demonstrated tangible harms from inadequately governed AI deployment. Infosys's open-sourcing decision reflected a strategic calculation that collaborative development would produce stronger ethical safeguards than proprietary approaches.

The AI3S framework (Scan, Shield, and Steer) provided a systematic methodology across the AI lifecycle: scanning for bias and vulnerabilities, shielding against identified threats, and steering governance toward responsible outcomes.

Achieving ISO 42001:2023 certification gave Infosys third-party validation of its AI management systems. This recently published international standard established requirements for risk assessment, stakeholder engagement, and documented controls throughout AI development and deployment.

Sunil Abraham's emphasis on open source as "essential" aligned with ongoing debates about AI democratisation. Whilst proprietary development concentrated capabilities amongst well-resourced corporations, open approaches theoretically enabled broader participation. Meta's advocacy reflected its strategic positioning after releasing Llama models under permissive licences, arguing that transparency produced safer systems than secretive development.

The Indian government's enthusiastic endorsement reflected national priorities to position India as a responsible AI leader whilst nurturing domestic industry. Additional Secretary Abhishek Singh's statement emphasised how open-source tools could lower barriers for startups, advancing both policy objectives simultaneously.

Infosys's participation in the NIST AI Safety Institute Consortium and World Economic Forum initiatives situated this release within broader international efforts to establish shared norms and technical standards.

## External Link

- Read on IT Brief Australia