Context: The Reserve Bank of India (RBI) recently released the report of its Committee on the Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI Committee).
About the FREE-AI Committee
- In 2024, the Reserve Bank of India (RBI) set up an Internal Committee on Artificial Intelligence to develop a governance framework for the safe and ethical adoption of AI by Regulated Entities (REs) like banks, NBFCs, and insurers.
- The Committee submitted its report, titled “Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI)”, in August 2025.
- The aim was to strike a balance between innovation and risk mitigation, ensuring that AI adoption enhances efficiency without undermining fairness, accountability, or financial stability.
Significance of AI in Finance
- Revenue Growth: AI is expected to be a major growth driver, with financial sector investments projected to touch ₹8 lakh crore by 2027.
- Efficiency and Personalization: By automating routine and data-heavy processes, AI enables faster and more accurate operations, such as loan processing and customer support.
- Boosting Financial Inclusion: Through the use of alternative data sources like utility payments and GST records, AI helps assess the creditworthiness of “thin-file” or first-time borrowers often excluded by traditional systems.
- Strengthening Digital Infrastructure: AI enhances India’s digital public platforms such as Aadhaar and UPI, enabling more personalised and adaptive financial services.
- Improved Risk Management: AI supports fraud detection, early-warning systems, and better decision-making, thereby strengthening overall risk management.
- Example: J.P. Morgan’s AI-based payment validation reduced fraud and cut account rejection rates by 15–20%.
- Synergy with Emerging Technologies: When combined with quantum computing and advanced privacy tools, AI can deliver superior performance, security, and resilience in financial services.
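To make the "thin-file" lending point above concrete, here is a purely illustrative sketch of how alternative data might be combined into a simple creditworthiness score. The weights, inputs, and function name are hypothetical assumptions for illustration; they do not come from the RBI report:

```python
def thin_file_score(on_time_utility_ratio, gst_filing_ratio, months_of_history):
    """Toy creditworthiness score (0-100) from alternative data.

    on_time_utility_ratio: share of utility bills paid on time (0-1)
    gst_filing_ratio: share of GST returns filed on time (0-1)
    months_of_history: length of available alternative-data history

    The weights below are illustrative assumptions, not a real scoring model.
    """
    history_factor = min(months_of_history / 24, 1.0)  # cap credit for history at 2 years
    raw = (0.5 * on_time_utility_ratio
           + 0.3 * gst_filing_ratio
           + 0.2 * history_factor)
    return round(100 * raw, 1)
```

A borrower with a perfect payment record over two years would score 100.0, while one with an 80% utility payment record, 50% GST compliance, and a year of history would score 65.0. Real systems would of course use statistically validated models rather than fixed weights.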
Emerging Risks and Sectoral Challenges of AI in Finance
Model Risks: AI outputs may deviate from expectations, leading to losses or reputational harm.
- Risk sources include:
- Data risk – incomplete, biased, or faulty datasets.
- Design risk – flawed algorithms or misaligned objectives.
- Calibration risk – incorrect parameter weights.
- Implementation risk – poor integration into financial processes.
- Model-on-model risk: AI systems used to supervise other AI models can themselves fail, creating cascading effects.
- GenAI risks: “Hallucinations” (false outputs), lower explainability, and misleading communications to customers.
Operational Risks – Systems Under Stress: Automation reduces human error but amplifies faults at scale.
- Examples: AI fraud detection misclassifying genuine transactions → loss of customer trust.
- Credit scoring models failing due to data pipeline corruption.
- “Model drift”: model performance degrading over time in the absence of monitoring.
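The drift problem above can be sketched in code: track recent prediction outcomes and raise a flag when accuracy falls materially below a known baseline. This is a minimal, hypothetical monitor; the window size and tolerance are illustrative assumptions, not values from the report:

```python
from collections import deque

class DriftMonitor:
    """Flags model drift when recent accuracy falls well below a baseline.

    Window size and tolerance are illustrative assumptions.
    """
    def __init__(self, baseline_accuracy, window=100, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = miss

    def record(self, prediction, actual):
        self.outcomes.append(1 if prediction == actual else 0)

    def drift_detected(self):
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough recent data to judge
        recent_accuracy = sum(self.outcomes) / len(self.outcomes)
        return recent_accuracy < self.baseline - self.tolerance
```

Production systems typically monitor input distributions as well as outcomes, since ground-truth labels (e.g. loan defaults) often arrive with a long lag.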
Third-Party Risks – Vendor Dependencies: Financial institutions rely on external AI vendors and cloud providers.
- Risks: Service interruptions, software bugs, or security breaches.
- Concentration risk if a few dominant vendors control critical infrastructure.
- Limited visibility of subcontractors’ practices → compliance gaps.
Liability and Accountability Risks
AI systems are probabilistic, not deterministic. This blurs lines of responsibility.
- Risk of AI-Driven Collusion: A theoretical but significant possibility that autonomous AI systems collude to maintain high prices or manipulate markets.
- Particularly relevant in high-frequency trading or dynamic pricing.
- Could breach competition laws and distort markets.
- Financial Stability Concerns:
- Procyclicality: AI models trained on historical data may amplify boom-bust cycles.
- Herding effect: If multiple institutions use similar AI strategies, synchronised behaviour can increase volatility.
- Example: 2010 Flash Crash, where automated trading algorithms wiped out nearly $1 trillion in minutes.
- Cybersecurity Risks – A Double-Edged Sword:
- Offensive use: AI can power advanced cyberattacks like data poisoning, adversarial inputs, deepfake fraud, or phishing.
- Defensive use: AI improves detection through anomaly monitoring, predictive analytics, and real-time response.
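As a toy illustration of the anomaly-monitoring idea on the defensive side, a classic z-score rule flags transactions whose amounts deviate sharply from the norm. The threshold of 3 standard deviations is a conventional illustrative choice, not a value from the report; real fraud systems use far richer features than amount alone:

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transactions whose amount deviates from the mean
    by more than `threshold` standard deviations (a simple z-score rule).
    """
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    if stdev == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, amount in enumerate(amounts)
            if abs(amount - mean) / stdev > threshold]
```

A run of routine ₹100 transactions followed by a one-off ₹10,000 transfer would see only the outlier flagged, while ordinary day-to-day variation passes unflagged. The false-positive risk mentioned above arises precisely because legitimate behaviour (a festival purchase, a medical bill) can also look like an outlier under such rules.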
- Data Security and Privacy Risks:
- Over-collection of data: AI systems often gather more data than necessary, breaching data minimisation principles.
- Data aggregation risks: Innocent data points, when combined, can reveal sensitive info (mosaic effect).
- Cloud dependency conflicts: Global AI infrastructure may clash with India’s data localization norms.
- Consumer and Ethical Concerns: Bias may exclude vulnerable groups (rural poor, women, minorities).
- Opacity leaves customers unable to understand decisions.
- Manipulation risks: AI-driven nudges may push consumers into choices not aligned with their best interests.
- Raises ethical issues around informed consent, exploitation, and fairness.
- AI Inertia – Risks of Non-Adoption: Choosing not to adopt AI carries its own risks:
- Institutions may fall behind in competitiveness and efficiency.
- Widening financial access gaps if rural/underserved areas miss AI-driven inclusion tools.
- Without AI, institutions cannot counter AI-driven cyberattacks.
RBI's Recommendations
- 7 Sutras for AI adoption:
- Trust is the Foundation: Trust is non-negotiable and should remain uncompromised.
- People First: AI should augment human decision-making but defer to human judgment and citizen interest.
- Innovation over Restraint: Foster responsible innovation with purpose.
- Fairness and Equity: AI outcomes should be fair and non-discriminatory.
- Accountability: Accountability rests with the entities deploying AI.
- Understandable by Design: Ensure explainability for trust.
- Safety, Resilience, and Sustainability: AI systems should be secure, resilient, and energy efficient.
- Innovation Enablement: Build a robust financial sector data infrastructure as part of the digital public infrastructure, linked with AI Kosh.
- AI Innovation Sandbox: Set up a secure sandbox (similar to the GenAI Digital Sandbox) where financial institutions can test AI models on anonymised data, with built-in tools to detect bias, errors, and ensure compliance with AML, KYC, and consumer protection standards.
- Consumer Protection and Security: Require proportionate AI red-teaming through both regular and event-triggered testing. Introduce incident reporting frameworks with good-faith disclosures to manage risks effectively.
- Capacity Building in Regulated Entities (REs): Design structured training programs for AI governance and risk management across all institutional levels.
- Knowledge Sharing: Create mechanisms for exchanging AI use cases and best practices across the financial sector to encourage responsible adoption.
- AI Incident Reporting: Develop a dedicated framework for timely detection, reporting, and disclosure of AI-related incidents.