A&M Report Warns Oversight Not Keeping Pace With AI Growth In India Inc.
AI infrastructure is expanding faster than the governance, security and ethical safeguards needed to manage it, widening gaps in accountability and risk management.

A new report by professional services firm Alvarez & Marsal (A&M) highlights that India Inc. is rapidly scaling AI, yet adoption remains fragmented, with only 15% of organisations reporting extensive enterprise-wide deployment. While AI adoption will continue to grow, the report cautions that oversight is not keeping pace. In many organisations, AI infrastructure is expanding faster than the governance, security and ethical safeguards needed to manage it, creating widening gaps in accountability and risk management.
The findings are based on a month-long survey of CISOs, CIOs, CTOs and CROs across BFSI, technology, healthcare, manufacturing, retail and other major sectors.
AI Governance
Governance maturity remains limited despite rising usage. While 60% of organisations have introduced basic governance or acceptable-use policies, only 19% have carried out detailed risk assessments, and 81% still lack full visibility into how their AI systems are monitored or governed. With many AI initiatives developed in silos, accountability and standards vary widely, especially when third-party and in-house models coexist. The study underscores the need for integrated, organisation-wide governance frameworks that embed transparency, oversight and clear role ownership.
Responsible AI
The report found that responsible AI principles are widely acknowledged, but their implementation remains limited. Fewer than 20% of organisations have deployed mechanisms for explainability, bias detection or fairness, and 60% lack any formal process to validate model integrity. Data governance shows similar gaps, with only 26% having integrated data masking and PII scanning into AI workflows and 60% performing no structured dataset validation. These weaknesses leave systems exposed to bias, compromised training data and inconsistent outcomes. The report highlights the need to embed fairness checks, model transparency and secure data practices into the development lifecycle to ensure decisions remain interpretable and accountable as adoption scales.
Securing The AI Lifecycle
As more complex AI models move into production, security across the AI lifecycle becomes imperative. While 52% of enterprises have secure development environments with basic controls, fewer than 30% conduct penetration testing or red-teaming, and only 19% have safeguards to detect data poisoning during model training. These early-stage vulnerabilities can compromise entire models before they are even deployed. The study calls for stronger end-to-end security practices. Measures such as containerising training environments, validating dataset authenticity and embedding adversarial testing into the build lifecycle can substantially improve model resilience.
Deployment And Operationalisation
Operational risks intensify as AI models go live. Although 56% of organisations conduct security reviews before deployment, advanced safeguards remain limited. Only 30% have controls against prompt-injection attacks, and just 19% have mechanisms to detect or manage hallucinations in real time. Data protection challenges also persist, with most enterprises depending on traditional access controls rather than automated privacy-preserving methods. The report notes that stronger version control, clearer audit trails and closer monitoring of how models behave with live data will be essential for safe, reliable deployment.
Monitoring And Compliance
Post-deployment oversight remains a critical weakness. Around 26% of organisations have no monitoring in place, and a further 45% rely on partial or non-real-time tracking. Incident-response maturity is similarly low, with only 15% reporting AI-specific response plans and 66% conducting no formal audits of their AI systems. These gaps leave enterprises exposed to performance drift, undetected failures and regulatory risk. Establishing continuous monitoring, clearer escalation processes and periodic assessments of fairness, accuracy and compliance can help organisations respond quickly to issues.
Commenting on the findings, Chandra Prakash Suryawanshi, managing director, Alvarez & Marsal, said, “As AI systems become more autonomous and data-intensive, gaps in oversight, model integrity and lifecycle governance carry far greater consequences. This report shows a clear need for organisations to move from fragmented controls to a holistic approach that integrates governance, security and monitoring across every stage of the AI lifecycle.”
Dhruv Phophalia, managing director and India lead for disputes & investigations, Alvarez & Marsal, said, “India’s AI opportunity is substantial, but its long-term gains depend on how effectively organisations govern and secure the systems they deploy. Those who invest early in these foundations will be best placed to unlock the full economic and competitive potential of AI.”
