Case Study · AI & LLM Security Testing

OWASP LLM Top 10: Prompt Injection, Training Data PII Leakage, and AI Account Takeover in a Financial Services Platform

Service · AI & LLM Security TestingFirm · SIRI Law LLPContact · +91 7981912046
4
Critical AI Vulnerabilities
PII Leakage
From Training Data
Account Takeover
Via AI Agency
3 Enterprise
Deals Approved
RBI Compliant
Post-Remediation
₹250Cr
DPDPA Exposure

HomeCase StudiesAI & LLM Security Testing → OWASP LLM Top 10: Prompt Injection, Training Data PII Leakage, and AI Account Takeover in a Financial Services Platform

Engagement Background

The Situation When We Were Engaged

A Bengaluru AI startup had built a generative AI customer support platform for banks, NBFCs, and insurance companies. The platform integrated with core banking APIs — enabling the AI to answer queries and perform account operations.

Three enterprise financial services customers were evaluating the platform. All three required independent AI security assessment before production deployment. Their security teams specifically flagged: prompt injection, training data leakage, and AI agency risk.

SIRI Law LLP’s OWASP LLM Top 10 assessment found four critical vulnerabilities: system prompt extraction via direct injection, indirect injection via customer-uploaded PDF documents, membership inference attacks extracting real customer PII from the fine-tuned model’s weights, and AI function calling that enabled a complete account takeover chain — email change, security alert disable, new debit card request — from a single customer session.

The model had been fine-tuned on 18 months of real customer support transcripts processed without DPDPA-compliant consent for AI training purposes.

Client Profile

Platform TypeGenerative AI — customer support + account operations
CustomersBanks, NBFCs, Insurance
ModelCommercial LLM fine-tuned on 18 months of transcripts
AssessmentOWASP LLM Top 10

Attack Scenario & Methodology

How the Assessment Was Conducted

Assessment Approach

OWASP LLM Top 10 (2023) — all 10 categories assessed. Primary focus: LLM01 (Prompt Injection), LLM06 (Sensitive Information Disclosure), LLM08 (Excessive Agency). Red team operated as malicious end-users with no prior system prompt knowledge — realistic adversary simulation.

Test environment: staging with production AI model + production system prompt, connected to financial system API sandbox with realistic non-production account data. 40+ prompt injection variations tested. Training data extraction: 200+ targeted membership inference prompts.

Technical Findings

What We Found

Each finding documented with proof-of-concept. Root cause and remediation guidance provided for every item.

CRITICALDirect Prompt Injection — System Prompt Extraction + Instruction Override

Multi-stage injection caused the LLM to output full system prompt including confidential institutional configuration. Secondary exploitation: AI asserted false product characteristics (potential mis-selling under RBI), disclosed information prohibited by system prompt, and operated outside intended constraints.

CRITICALIndirect Prompt Injection via Customer-Uploaded PDF

Document processing pipeline included all extracted text in LLM context without sanitisation. PDF with hidden white-on-white text containing injection instructions caused the AI to: list all available API function capabilities, attempt account modification function execution. Exploitable from any uploaded document.

CRITICALTraining Data PII Leakage — Membership Inference Attack

Prompts constructed to probe for specific PII patterns from training corpus. Model completed partial account numbers with correct digits (7/12 attempts), reproduced customer names in specific contexts, and outputted verbatim transcript fragments including one containing a customer’s disclosed medical condition. DPDPA purpose limitation violated.

CRITICALExcessive AI Agency — Account Takeover Chain

AI tool calling with no step-up authentication: attacker manipulated AI to change registered email, disable security alerts, request new debit card. Three sequential AI-facilitated actions = complete account takeover with only initial customer login session. RBI sensitive operation authentication requirements violated.

Engagement Timeline

Phase-by-Phase Execution

Phase 1
1

Prompt Injection Testing

40+ injection techniques. System prompt extracted on 3 different framings. Instruction overrides confirmed for competitor mentions, false product attributes, and confidential information disclosure.

Phase 2
2

Indirect Injection via Documents

PDF attack vector: white-on-white hidden text injection. Confirmed LLM executed hidden instructions in uploaded documents as if system instructions. All document types tested (PDF, DOCX, TXT).

Phase 3
3

Training Data Extraction

200+ membership inference prompts. Partial account numbers reproduced correctly. Medical condition from transcript verbatim output confirmed. DPDPA purpose limitation violation documented.

Phase 4
4

AI Agency Assessment

Account takeover chain demonstrated step-by-step. Email change → security alert disable → debit card request — all via AI conversation without step-up authentication. RBI violation documented.

Legal & Regulatory Risk Analysis

Why This Mattered Legally

SIRI Law LLP’s integrated practice means every technical finding is analysed for its legal and regulatory implications — providing a complete risk picture, not just a vulnerability list.

DPDPA 2023 — Training Data Purpose Limitation Violation

Fine-tuning on customer support transcripts without DPDPA consent for AI training = purpose limitation violation under Section 6. Training data leakage from model weights demonstrates personal data remains accessible — DPDPA erasure right (Section 17) cannot be satisfied by deleting source data.

⚠ Penalty up to ₹250 crore; machine unlearning obligation

RBI Customer Service Guidelines — Sensitive Operation Authentication

AI-facilitated account modifications without step-up authentication violates RBI requirements for sensitive account operations. Account takeover chain demonstrated via AI.

⚠ RBI directive to cease AI operations pending remediation

Mis-Selling Liability — False Product Characteristics

Prompt injection enabling AI to assert false product characteristics. RBI Master Circular + Consumer Protection Act 2019 exposure. AI-generated mis-selling even if adversarially triggered creates institutional liability.

⚠ Consumer Protection Act 2019; RBI enforcement

GDPR Article 22

One evaluating institution had EU customers. AI performing account modifications without human oversight fails GDPR Article 22 automated decision-making requirements.

⚠ GDPR supervisory authority enforcement

Remediation Programme

How We Fixed It

Prompt Injection Controls

Input sanitisation pipeline for all user inputs and document content.

Document processing isolation: separate LLM call with explicit instruction to treat all text as untrusted.

System prompt hardening: injection-resistant instruction structure.

Output validation: responses indicating system prompt disclosure automatically rejected.

Training Data Compliance

Full transcript corpus PII-redacted using NLP-based PII detection before retraining.

Model retrained on redacted corpus.

DPDPA-compliant consent framework for future training data use.

Machine unlearning procedure documented for data subject erasure requests.

AI Agency Controls

Sensitive operation classification: high-risk operations require OTP via separate channel — AI cannot complete.

Function calling scope restricted: most write functions removed from AI tool access.

Human review workflow for financial product recommendations.

Business Outcomes

What the Client Achieved

3 Enterprise Financial Institution Deals Approved

All 3 customers approved production deployment after receiving remediation evidence and AI governance documentation.

DPDPA Training Data Compliance Established

Model retrained on PII-redacted corpus. Consent framework implemented. Machine unlearning procedure documented.

AI Account Takeover Vector Closed

Step-up authentication for sensitive operations — account takeover chain permanently broken.

RBI-Compliant AI Governance Framework

AI governance documentation satisfying RBI guidelines produced — human oversight, explainability, sensitive operation controls.

Compliance Frameworks

Standards Applied in This Engagement

OWASP LLM Top 10 (2023)DPDPA 2023RBI Customer Service GuidelinesGDPR Article 22ISO/IEC 42001NIST AI RMF

Why Choose SIRI Law LLP

Unique Advantage

Qualified advocates — legally privileged investigations

Certified security engineers — OSCP, CISSP, CEPT, CEH

DPDPA + CERT-In compliance integrated into every engagement

24/7 incident response availability

Director GRC & Legal at COE Security — Adv. Chetan Seripally

Facing a Similar Security Challenge?

Contact SIRI Law LLP for a confidential scoping call with our legal and technical experts.

Disclaimer: This case study describes an engagement handled by SIRI Law LLP. All client details are generic to protect confidentiality. Outcomes are fact-specific and do not guarantee similar results. For legal advice specific to your situation, please consult a qualified advocate.
Scroll to Top