Beware! Cross-Border Gen AI Misuse Rising; To Cause Four Of 10 AI Data Breaches By 2027: Gartner
To mitigate the risks of AI data breaches, particularly from cross-border gen AI misuse, and to ensure compliance, organisations must take critical actions, a Gartner report has revealed.

The rapid adoption of generative artificial intelligence (gen AI) technologies has outpaced the development of data governance and security measures, raising concerns about data localisation, given the centralised computing power needed to support these technologies. According to Gartner Inc., incorrect cross-border use of gen AI will be responsible for over 40% of AI-related data breaches by 2027.
Insufficient oversight frequently results in unintentional cross-border data transfers, especially when gen AI is integrated into existing products without clear disclosure or description. Even when gen AI is used through authorised business applications, sending sensitive prompts to AI tools and APIs hosted in unknown locations can compromise security.
Global AI Standardisation Gaps
The absence of uniform worldwide standards for data governance and AI makes matters worse by fragmenting the market and compelling businesses to create region-specific strategies. This can make it more difficult for organisations to expand globally and leverage AI products and services.
According to Gartner, localised AI policies can complicate data flow management and quality maintenance, resulting in operational inefficiencies. To safeguard data and maintain compliance, organisations need to invest in AI governance and security, which will drive growth in the AI security, governance, and compliance services markets.
Organisations Should Integrate Governance, Or Lag Behind
Gartner predicts that by 2027, AI governance will become a requirement of all sovereign AI laws and regulations worldwide. Businesses may be at a disadvantage if they are unable to incorporate the necessary governance models and controls.
To mitigate the risks of AI data breaches, particularly from cross-border gen AI misuse, and to ensure compliance, Gartner recommends several actions:
Enhance Data Governance: Data governance frameworks must be expanded to include guidelines for AI-processed data, enabling organisations to monitor unintended cross-border data transfers and maintain compliance with international regulations.
Establish Governance Committees: Organisations must form committees to enhance AI oversight and ensure transparent communication about AI deployments and data handling.
Strengthen Data Security: To protect sensitive data, organisations must use encryption, anonymisation, and other advanced technologies. For example, they should verify the use of Trusted Execution Environments in specific geographies and apply advanced anonymisation methods, such as Differential Privacy, when data must leave a region (a minimal illustration follows this list).
Invest In TRiSM Products: Plan and allocate budgets for trust, risk, and security management (TRiSM) products and capabilities tailored to AI technologies. This includes AI governance, data security governance, filtering and redaction, and synthetic generation of unstructured data. Gartner predicts that by 2026, enterprises applying AI TRiSM controls will consume at least 50% less inaccurate or illegitimate information, reducing faulty decision-making.
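To make the Differential Privacy recommendation above more concrete, here is a minimal sketch, not drawn from the Gartner report, of the Laplace mechanism applied to a simple count before the result is shared outside a regulated region. The function name, epsilon value, and data fields are illustrative assumptions.

```python
# Minimal sketch (hypothetical, not Gartner's method): only a differentially
# private statistic crosses the border, never the raw records themselves.
import numpy as np

def dp_count(records: list[dict], predicate, epsilon: float = 1.0) -> float:
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1 (adding or removing one record changes
    the count by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: share only the noised churn count for EU customers with an
# overseas analytics service; the raw rows stay in the region.
customers = [{"region": "EU", "churned": True}, {"region": "EU", "churned": False}]
print(dp_count(customers, lambda r: r["churned"], epsilon=0.5))
```

In practice, organisations would use a vetted differential privacy library and manage a privacy budget across all queries rather than relying on a one-off helper like this.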