Case Study · AI Adoption Security
Secure AI Transformation: Vendor Assessment, Shadow AI Discovery, Data Governance, and Safe Deployment Framework for a 6,000-Person Professional Services Firm
Engagement Background
The Situation When We Were Engaged
A 6,000-person professional services firm — providing consulting, transaction advisory, and regulatory advisory services to listed companies — had a Board-mandated AI productivity programme. The CISO and General Counsel jointly identified that AI deployment across a firm handling confidential client information required a security and legal assessment before firm-wide rollout.
The engagement began with a shadow AI discovery exercise — and the findings were immediately concerning. Employees across the firm were already using 23 AI tools without any procurement approval, vendor assessment, or data governance review. Eleven of those tools were receiving confidential client engagement data — project reports, financial models, M&A target analyses, and regulatory advice drafts — which was being processed by the AI vendors’ systems under standard consumer terms.
SIRI Law LLP conducted the shadow AI discovery, vendor security assessments, data governance framework design, and safe deployment policy — producing a Board-approved AI adoption policy that enabled the firm to deploy AI productively while managing the legal and security risks of handling client-confidential data.
Client Profile
Assessment Scope
Shadow AI Discovery, Vendor Assessment, and AI Governance Framework
Shadow AI Discovery
Network traffic analysis, browser extension audit, and employee survey to identify all AI tools in use across 6,000 employees. 23 tools identified — categorised by data risk, vendor terms, and security posture. 11 flagged as immediately concerning for client data exposure.
AI Vendor Security Assessment
Security and legal assessment of each identified AI vendor: data retention policies, model training on user inputs, subprocessor chains, data residency, security certifications, and contractual terms. Red/Amber/Green categorisation for procurement decision-making.
AI Governance Framework
Board-approved AI adoption policy: approved tool list, data classification framework (what data can be input to which AI tool), employee training, procurement process for new AI tools, and incident response procedure for AI-related data exposure events.
Key Findings
What We Found
Each finding documented with evidence. Root cause and remediation guidance provided for every item.
11 AI tools being used with client engagement data operated under consumer terms that permitted use of inputs to train AI models. Confidential M&A target analyses, regulatory advice drafts, and financial models for listed companies — all potentially incorporated into commercial AI models accessible to competitors. Immediate cessation of use with client data required pending tool assessment.
Two employees in the transaction advisory division had used AI writing tools to draft sections of M&A advisory reports — inputting unpublished price-sensitive information (UPSI) about listed company targets into external AI systems. This constitutes a SEBI Insider Trading Regulations risk — UPSI disclosure to an external party (the AI system) outside permitted channels. Legal analysis required and SEBI Regulations compliance framework implemented.
The firm had no AI vendor procurement process. All 23 tools had been adopted by individual employees without IT, security, or legal review. No vendor security assessments, no DPAs, no data residency verification. Five of the 23 tools had data residency in China — creating additional risk for clients in regulated sectors with cross-border data restrictions.
The firm’s standard engagement letter and client confidentiality provisions made no reference to AI tools or third-party AI processing of client data. Use of AI tools to process client information — even for legitimate productivity purposes — without client consent may breach the firm’s contractual confidentiality obligations. Engagement letter AI disclosure clause required.
Engagement Timeline
Phase-by-Phase Execution
Shadow AI Discovery — Network and Survey
Two-week discovery exercise: network traffic analysis (proxy logs) to identify AI tool domains, browser extension audit across managed devices, and anonymous employee survey. 23 tools identified. Data risk classification: 11 Red (client data exposure), 8 Amber (potential risk), 4 Green (low risk, enterprise terms available).
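The proxy-log pass in the discovery exercise can be sketched as follows. The AI-tool domain watchlist, the log-line format, and the user-counting approach are illustrative assumptions for this sketch, not the tooling actually used in the engagement.

```python
# Sketch: scan proxy logs for connections to known AI-tool domains
# and count distinct users per tool. All domains and the log format
# below are illustrative assumptions.
import re

# Hypothetical watchlist of AI-tool domains (assumption for illustration)
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}

# Assumed log format: "<date> <time> <user> CONNECT <host>:443"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<user>\S+) CONNECT (?P<host>[\w.-]+):443")

def scan_proxy_log(lines):
    """Return {tool_name: number_of_distinct_users} for AI-tool hits."""
    hits = {}  # tool -> set of users
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        tool = AI_DOMAINS.get(m.group("host"))
        if tool:
            hits.setdefault(tool, set()).add(m.group("user"))
    return {tool: len(users) for tool, users in hits.items()}

sample = [
    "2024-01-10 09:12:01 alice CONNECT chat.openai.com:443",
    "2024-01-10 09:14:22 bob CONNECT claude.ai:443",
    "2024-01-10 09:15:40 alice CONNECT chat.openai.com:443",
]
print(scan_proxy_log(sample))  # e.g. {'ChatGPT': 1, 'Claude': 1}
```

In practice this pass is only one signal; it is combined with the browser extension audit and the anonymous survey, since TLS and off-network usage keep any single source incomplete.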
Vendor Security and Legal Assessment
Structured assessment of each Red and Amber tool: data retention policy review, model training on inputs verification, subprocessor chain mapping, SOC 2 / ISO 27001 certification check, data residency verification, and contractual terms analysis. Enterprise tier options assessed for Red tools where enterprise data protection terms were available.
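The Red/Amber/Green categorisation might be encoded roughly as below. The assessment fields and the decision rules are assumptions chosen to illustrate the shape of such a rubric, not the engagement's actual criteria.

```python
# Sketch of a Red/Amber/Green procurement categorisation.
# Fields and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    trains_on_inputs: bool       # vendor trains models on user inputs
    retains_data: bool           # indefinite or unclear data retention
    has_dpa: bool                # enterprise DPA available/executed
    certified: bool              # SOC 2 or ISO 27001 attestation
    restricted_residency: bool   # data residency in a restricted jurisdiction

def categorise(v: VendorAssessment) -> str:
    """Map an assessment to a procurement category."""
    if v.trains_on_inputs or v.restricted_residency:
        return "Red"    # client-data exposure risk: block pending enterprise terms
    if v.retains_data or not (v.has_dpa and v.certified):
        return "Amber"  # potential risk: non-client data only
    return "Green"      # enterprise protections in place: approved-list candidate
```

A rule-based gate like this makes the procurement decision auditable: each category follows from documented vendor facts rather than an analyst's overall impression.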
AI Governance Framework Design
Data classification framework: four tiers of data sensitivity with permitted AI tool categories for each tier. SEBI UPSI handling procedures for transaction advisory — AI tool prohibition for UPSI-containing content. Approved tool list with enterprise agreements. Employee training programme. New AI tool procurement process — IT + Legal + Security review required.
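A minimal sketch of how a four-tier classification gate of this kind could work in code. The tier names and the permitted-tool mapping are illustrative assumptions; only the UPSI rule (no AI tools at all) is taken directly from the framework described above.

```python
# Sketch: four-tier data classification mapped to permitted AI-tool
# categories. Tier names and mappings (except the UPSI prohibition)
# are illustrative assumptions.
PERMITTED = {
    "public": {"approved_enterprise", "general_consumer"},
    "internal": {"approved_enterprise"},
    "client_confidential": {"approved_enterprise"},  # subject to engagement-letter disclosure
    "upsi": set(),  # SEBI UPSI: no AI tool permitted
}

def may_use(tier: str, tool_category: str) -> bool:
    """Return True if data of this tier may be input to this tool category."""
    return tool_category in PERMITTED.get(tier, set())
```

Encoding the policy as data rather than prose also means the same mapping can drive DLP rules and the employee-facing tool picker, so policy and enforcement cannot drift apart.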
Board Policy Approval and Implementation
Board-approved AI Adoption Policy drafted and approved. All 11 Red tools blocked at network level. Enterprise agreements negotiated with 4 approved tools — DPAs executed. Engagement letter AI disclosure clause added. Employee training delivered to all 6,000 employees. Shadow AI monitoring implemented on an ongoing basis.
Legal & Regulatory Risk Analysis
Why This Mattered Legally
SEBI Insider Trading Regulations — UPSI Disclosure to AI Systems
Inputting unpublished price-sensitive information about listed company clients into external AI systems constitutes a potential SEBI Insider Trading Regulations violation — UPSI disclosure outside permitted channels. SEBI enforcement in this area is active. The transaction advisory AI prohibition and UPSI handling procedures were designed specifically to address this risk.
Client Confidentiality — Breach of Engagement Letter Obligations
Professional services firms owe fiduciary and contractual confidentiality obligations to clients. Using AI tools to process client data under terms that permit model training — without client consent — may breach these obligations. The engagement letter AI disclosure clause and approved tool policy create a defensible position for future AI-assisted work.
DPDPA — AI Processing of Personal Data Without Compliant Basis
Where AI tools processed personal data of client employees or individuals — in HR advisory, employment matters, and background checks — DPDPA obligations apply to the firm as data fiduciary. Vendor DPAs and data subject consent obligations were identified and addressed in the AI governance framework.
IT Act Section 72A — Disclosure of Information in Breach of Contract
If an AI vendor’s training on client data resulted in client information being accessible via the AI model’s outputs to third parties, this could constitute disclosure of information in breach of the firm’s lawful contract with clients — exposing the firm to criminal liability under IT Act Section 72A in addition to contractual breach.
Outcomes & Remediation
What Changed After Our Assessment
11 Shadow AI Tools Blocked — Client Data Exposure Eliminated
All 11 Red-category AI tools blocked at network proxy level. No further client engagement data exposure to unauthorised AI systems. Four enterprise-grade approved tools deployed with DPAs and data protection terms.
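Proxy-level blocking of this kind can be illustrated with a generated Squid ACL. The domains below are hypothetical placeholders (the actual Red-tool domains are not named in this case study), and Squid's `dstdomain` ACL syntax is used as a representative example of a forward-proxy blocklist.

```python
# Sketch: generate a Squid ACL fragment blocking Red-category AI-tool
# domains. Domains are hypothetical placeholders.
RED_TOOL_DOMAINS = [
    "example-ai-writer.com",   # hypothetical Red-category tool
    "example-ai-notes.io",     # hypothetical Red-category tool
]

def squid_acl(domains):
    """Emit a Squid config fragment denying HTTP(S) access to the domains."""
    # Leading dot matches the domain and all subdomains in Squid dstdomain ACLs.
    acl_line = "acl shadow_ai dstdomain " + " ".join("." + d for d in domains)
    return acl_line + "\nhttp_access deny shadow_ai"

print(squid_acl(RED_TOOL_DOMAINS))
```

Generating the fragment from the assessment register (rather than hand-editing proxy config) keeps the network block in lockstep with the Red-tool list as it changes.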
Board-Approved AI Adoption Policy — Firm-Wide Implementation
AI Adoption Policy approved by Board. Data classification framework implemented. New AI tool procurement process operational. All 6,000 employees trained on AI data governance within 60 days.
SEBI UPSI AI Prohibition — Transaction Advisory Compliance
Transaction advisory division AI tool prohibition implemented. UPSI handling procedures updated to address AI. SEBI Insider Trading Regulations compliance position documented.
Enterprise AI Agreements — Four Approved Tools with DPAs
Enterprise agreements and DPAs executed with four approved AI tools. Data retention opt-out, training data exclusion, and EU data residency confirmed. Approved tool list published to all staff.
Compliance Frameworks
Standards Applied in This Engagement
Why Choose SIRI Law LLP
Unique Advantage
Shadow AI discovery is a technical + legal exercise — we do both
SEBI UPSI and professional services regulatory expertise
AI vendor contracts negotiated — DPAs, training data exclusion, data residency
Board-level policy drafting — not just a technical report
Director GRC & Legal — Adv. Chetan Seripally
Deploying AI Across Your Organisation? Do It Securely.
Contact SIRI Law LLP for a confidential scoping call with our legal and technical experts.