July 18, 2024

One-Third Of Sensitive Information Shared With Gen AI Apps Is Regulated Data: Report


Regulated data, which organisations have a legal duty to protect, makes up more than a third of the sensitive data being shared with generative artificial intelligence (gen AI) applications, posing a risk of costly data breaches to businesses, according to new research by Netskope, a provider of secure access service edge (SASE) services.

Three-quarters of businesses surveyed now completely block at least one gen AI app, a sign that enterprises want to limit the risk of sensitive data exfiltration, the report showed. However, with fewer than half of organisations applying data-centric controls to prevent sensitive information from being shared in input inquiries, most are behind in adopting the advanced data loss prevention (DLP) solutions needed to safely enable gen AI.
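The report does not describe how such data-centric controls are built. Purely as an illustration, the Python sketch below uses hypothetical regex patterns standing in for a few regulated data types to block a prompt before it reaches a gen AI app; production DLP engines rely on trained classifiers and data fingerprinting rather than simple regexes.

```python
import re

# Hypothetical patterns for a few regulated data types; real DLP engines use
# trained classifiers and exact-match fingerprints, not simple regexes.
REGULATED_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of regulated-data patterns found in a gen AI prompt."""
    return [name for name, pattern in REGULATED_PATTERNS.items() if pattern.search(prompt)]

def submit_to_gen_ai(prompt: str) -> str:
    """Block the request if the prompt appears to contain regulated data."""
    findings = scan_prompt(prompt)
    if findings:
        raise PermissionError(f"Prompt blocked by data policy: {', '.join(findings)}")
    # send_to_provider(prompt)  # hypothetical call to the gen AI app
    return "submitted"
```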

The research found that 96% of businesses now use gen AI, a figure that has tripled over the past 12 months. On average, enterprises now use nearly 10 gen AI apps, up from three last year, and the top 1% of adopters now use an average of 80 apps, up significantly from 14. Amid this growth, enterprises have seen a surge in proprietary source code being shared with gen AI apps, which accounts for 46% of all documented data policy violations.

There are some positive signs of proactive risk management by organisations. For example, 65% of enterprises now implement real-time user coaching to help guide user interactions with gen AI apps. Effective user coaching has played a crucial role in mitigating data risks, prompting 57% of users to alter their actions after receiving coaching alerts.
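How real-time coaching is implemented varies by vendor. The sketch below only illustrates the interaction pattern the report describes, using a console prompt and a hypothetical coach_user helper; an endpoint agent or browser extension would replace the console interaction in practice.

```python
import json
import time

def coach_user(user: str, findings: list[str]) -> bool:
    """Show a real-time coaching alert and let the user reconsider.

    `findings` lists the policy matches detected in the prompt (e.g. by a
    DLP scan like the one sketched earlier). Returns True if the user
    chooses to proceed anyway.
    """
    print(f"Coaching alert for {user}: your prompt appears to contain {', '.join(findings)}.")
    answer = input("Type a business justification to proceed, or press Enter to cancel: ")
    proceeded = bool(answer.strip())
    # Log the decision so the security team can measure how often coaching
    # changes behaviour (the report cites 57% of users altering their actions).
    print(json.dumps({
        "user": user,
        "timestamp": time.time(),
        "findings": findings,
        "proceeded": proceeded,
        "justification": answer.strip() or None,
    }))
    return proceeded

# Example: a prompt flagged for containing an email address and a card number.
# coach_user("a.sharma", ["email", "credit_card"])
```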

“Securing gen AI needs further investment and greater attention as its use permeates through enterprises with no signs that it will slow down soon. Enterprises must recognise that gen AI outputs can inadvertently expose sensitive information, propagate misinformation or even introduce malicious content,” said James Robinson, chief information security officer at Netskope.

The report also found that ChatGPT remains the most popular app, used by more than 80% of enterprises. Microsoft Copilot, launched in January 2024, showed the most growth in use, with 57% of organisations now using it. Meanwhile, 19% of organisations have imposed a blanket ban on GitHub Copilot.

The report recommended that enterprises review, adapt and tailor their risk frameworks specifically to AI or gen AI, with specific steps including:

  • Begin by assessing your existing uses of AI and machine learning, data pipelines and gen AI applications. Identify vulnerabilities and gaps in security controls.

  • Establish fundamental security measures, such as access controls, authentication mechanisms and encryption.

  • Develop a roadmap for advanced security controls. Consider threat modelling, anomaly detection, continuous monitoring and behavioural detection to identify suspicious data movements from cloud environments to gen AI apps that deviate from normal user patterns (a minimal sketch of this idea follows the list).

  • Regularly evaluate the effectiveness of your security measures. Adapt and refine them based on real-world experiences and emerging threats.
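The report is not prescriptive about tooling for any of these steps. As a very simplified illustration of the behavioural-detection idea in the third step, the sketch below flags users whose daily upload volume to gen AI apps jumps far above their own historical baseline; the z-score heuristic and the seven-day minimum history are assumptions for the example, and production systems would use richer behavioural models.

```python
from statistics import mean, stdev

def flag_anomalous_uploads(history_bytes: list[int], today_bytes: int,
                           z_threshold: float = 3.0) -> bool:
    """Flag today's upload volume to gen AI apps if it deviates sharply
    from the user's own historical baseline (simple z-score heuristic)."""
    if len(history_bytes) < 7:          # not enough history to build a baseline
        return False
    mu, sigma = mean(history_bytes), stdev(history_bytes)
    if sigma == 0:
        return today_bytes > mu
    return (today_bytes - mu) / sigma > z_threshold

# Example: a user who normally uploads a few MB per day suddenly uploads 500 MB.
history = [2_000_000, 3_000_000, 2_500_000, 1_800_000, 2_200_000, 2_900_000, 2_400_000]
print(flag_anomalous_uploads(history, 500_000_000))  # True: flag for review
```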


