---
layout: default
title: "SFLC.IN Hosts Dialogue on AI: Experts Discuss Risks & Responsible Innovation"
description: "The Quint coverage of SFLC.IN's dialogue on artificial intelligence examining deepfakes, responsible AI practices and governance frameworks, featuring Sunil Abraham's analysis on model explainability and the future of large language models."
categories: [Media mentions]
date: 2024-11-27
source: "The Quint"
permalink: /media/sflcin-hosts-dialogue-on-ai-experts-discuss-risks-and-responsible-innovation/
created: 2025-12-26
---

**SFLC.IN Hosts Dialogue on AI: Experts Discuss Risks & Responsible Innovation** is a news report published by *The Quint* on 27 November 2024 as partner content. The article documents a dialogue on artificial intelligence hosted by the Software Freedom Law Center, India (`SFLC.IN`) at the India Habitat Centre on 25 November 2024. It features commentary from Sunil Abraham, Policy Director at META India, alongside other experts including Mishi Choudhary, Saikat Saha, Pamposh Raina and Udbhav Tiwari, examining challenges around deepfakes, AI governance frameworks, model explainability and the balance between innovation and accountability in India's AI ecosystem.

## Contents

1. [Article Details](#article-details)
2. [Full Text](#full-text)
3. [Context and Background](#context-and-background)
4. [External Link](#external-link)

## Article Details
- 📰 **Published in:** The Quint
- 📅 **Date:** 27 November 2024
- 📄 **Type:** Event Coverage
- 📰 **Publication Link:** [Read Online](https://www.thequint.com/tech-and-auto/sflcin-on-ai-navigating-risk-regulation-and-responsibility)
## Full Text

On Monday, 25 November 2024, the Software Freedom Law Center, India (SFLC.IN) hosted a dialogue on artificial intelligence at the India Habitat Centre, bringing together industry leaders, academics, and technology experts. The event, titled *AI in Focus: Navigating Risk, Regulation, and Responsibility*, featured two panel sessions examining the challenges and opportunities of generative AI and open-source technologies and their impact on technological and social frameworks. The Quint was the official media partner for the event.

Mishi Choudhary, Founder of SFLC.IN, set the stage with a compelling opening statement —

"SFLC.IN has been working around AI since 2018 when Gen AI was relatively non-existent. We have seen data evolve, from simple pattern-matching software to systems capable of imitating human behaviour; it raises urgent questions about ethics and rights. We must shift from reactive measures to proactive and pro-innovation regulations, but not at the cost of human rights, equity, and environmental sustainability."
Mishi Choudhary, Founder of SFLC.IN

As the panelists shared their insights, the discussion explored the proliferation of deepfakes, the pursuit of responsible AI practices, and the need to navigate India's unique socio-cultural complexities.

Saikat Saha, Technology Director, NASSCOM, highlighted —

"At NASSCOM AI, we are shaping technical charters and addressing collective risks to drive responsible AI adoption in India. AI can be an economic game-changer for priority sectors like SMEs. While regulatory uncertainties and geolocalised challenges persist, we focus on fostering open consultations between corporates, MSMEs, and stakeholders."
Saikat Saha, Technology Director, NASSCOM

Pamposh Raina, Head, Deepfakes Analysis Unit (DAU), Misinformation Combat Alliance, said —

"Misinformation generated by AI, especially manipulated audio and video, is a growing concern. We analysed over 2,200 media pieces in eight months, revealing significant data misuse during elections, rising health misinformation, and financial fraud. This issue requires a collaborative approach, such as focusing on AI and digital literacy and ensuring platforms flag fake content."
Pamposh Raina, Head, Deepfakes Analysis Unit (DAU), Misinformation Combat Alliance

The conversation then shifted to the balance between innovation and accountability in AI, with panellists shedding light on the nuanced interplay of technology, ethics, and regulation.

Sunil Abraham, Policy Director, META India, emphasised —

"LLMs are non-deterministic, and engineers are often racing ahead of scientists with this black box. At Meta, we believe that the future lies in a multiplicity of large and small models that are more accessible, affordable, and capable. The ecosystem must catch up with the regulatory landscape, deploy responsibly, and focus on model explainability, particularly for sensitive use cases like healthcare."
Sunil Abraham, Policy Director, META India

Udbhav Tiwari, Director, Global Product Policy at Mozilla, added —

"Open-source AI holds immense potential, but it must be approached with responsibility. Clear definitions and safeguards are important to avoid risks like open washing and ensure alignment with shared norms and values. The deliberate design of these systems and their data comes with a responsibility that laws and institutions must incentivize."
Udbhav Tiwari, Director, Global Product Policy at Mozilla

The panel discussions underscored the urgent need for clear, inclusive AI governance frameworks that suit India's unique context and align with global best practices such as the EU's GDPR. The event reinforced the significance of pro-innovation regulations, digital literacy, and stakeholder collaboration in shaping an AI ecosystem. Based on the discussions, SFLC.IN will soon launch its research paper, *Harnessing Open Source in AI*, which will serve as a cornerstone for all future dialogues and initiatives in shaping responsible AI practices.

The event brought together an outstanding lineup of experts and innovators, including Aindriya Barua, Founder & CEO of Shhor AI; Charles Brecque, Co-Founder & CEO of TextMine; Saikat Saha, Technology Director at NASSCOM; Pamposh Raina, Head of the Deepfakes Analysis Unit at the Misinformation Combat Alliance; Chaitanya Chokkareddy, CTO of Ozonetel Communications; Udbhav Tiwari, Director of Global Product Policy at Mozilla; Smita Gupta, Curator of OpenNY AI; Sunil Abraham, Policy Director at META India; Vukosi Marivate; and Professor Eben Moglen.

**About SFLC.IN**

SFLC.IN, the Software Freedom Law Center, India, is a non-profit organization committed to safeguarding civil liberties in the digital domain. Its primary focus is advocating for software freedom and digital rights in India. Founded on the principles of freedom, transparency, and equity, SFLC.IN operates at the intersection of law, technology, and society, striving to protect and promote fundamental rights in the digital age.

{% include back-to-top.html %}

## Context and Background

This dialogue occurred during a pivotal moment when India grappled with the rapid proliferation of generative AI technologies without comprehensive regulatory frameworks. The timing was significant, coming just months after the 2024 Lok Sabha elections that witnessed unprecedented deployment of AI-generated content, including deepfakes targeting political figures and manipulated audio clips spreading across social media platforms.

The Misinformation Combat Alliance's analysis of over 2,200 media pieces in eight months revealed patterns of AI misuse that extended beyond electoral manipulation into health misinformation and financial fraud. This underscored the urgency of establishing governance mechanisms tailored to India's linguistic diversity, digital literacy challenges and federal regulatory structure. Unlike jurisdictions with established AI frameworks such as the European Union's AI Act, India lacked statutory provisions specifically addressing generative AI risks.

Sunil Abraham's emphasis on model explainability reflected growing concerns about the opacity of large language models, particularly as they were being deployed in sensitive domains like healthcare and financial services without adequate transparency mechanisms. His reference to engineers outpacing scientists highlighted a fundamental challenge where deployment timelines prioritised commercial imperatives over rigorous safety testing. Meta's strategy of promoting diverse model architectures rather than concentrating on monolithic systems represented one industry approach to mitigating single-point-of-failure risks.

The discussion around open-source AI and "open washing" addressed a critical debate within the AI development community. Whilst genuinely open-source models offered transparency and community scrutiny, some entities marketed proprietary systems as open-source whilst withholding training data, model weights or inference code. This practice undermined trust and complicated regulatory efforts to distinguish between truly transparent systems and those merely claiming openness for reputational benefits.

SFLC.IN's sustained engagement with AI policy since 2018, predating the ChatGPT-triggered generative AI boom, positioned it uniquely to advocate for rights-based approaches rather than reactive crisis management. The promised research paper *Harnessing Open Source in AI* would contribute to policy discussions at a time when India's Ministry of Electronics and Information Technology was consulting stakeholders on potential AI legislation. The dialogue participants' varied perspectives from industry, civil society and academia reflected the multi-stakeholder approach necessary for crafting effective governance frameworks that balanced innovation incentives with rights protection.

## External Link

- [Read on The Quint](https://www.thequint.com/tech-and-auto/sflcin-on-ai-navigating-risk-regulation-and-responsibility)