Generative AI And Security Risks: An Enterprise Defence Guide

How can businesses survive generative AI security threats like data leaks, deepfakes, API attacks and custom malware code?

(Source: Freepik/WangXiNa)

Imposter fraud, deepfakes, proprietary information leaks, sophisticated phishing emails and even malware code generators. Today, attackers can use generative artificial intelligence to target the enterprise at scale. At the same time, businesses face privacy, legal, financial and reputational risks of an unprecedented nature as generative AI makes rapid inroads.

Reining in the associated security risks matters because generative AI is no longer just a shiny new thing. It has seen rapid adoption across enterprise functions like marketing, sales, customer service, operations, business automation, and training. Software development is another front where organisations hope to leverage the power of generative AI. Given these stakes, the enterprise will do well to take a carefully measured approach to generative AI adoption.

Most senior IT leaders (67%) prioritise generative AI usage in their business as part of their present and near-term technology roadmaps, according to a Salesforce study from early this year. At the same time, 33% of the respondents voiced concerns about associated security risks and bias. These are valid concerns, especially when we consider recent incidents of targeted attacks, sensitive business information exposure and even hallucinations associated with generative AI.

Weaponisation of generative AI for advanced spear-phishing attacks is a notable example. Until recently, the key elements of a spear-phishing email made it possible to tell whether it came from a human or an automated AI-based system. That is no longer the case.

“Say you receive an email from your senior colleague with individualised phishing content—one that sounds exactly like him or her. How do you deal with that? Generative AI equips attackers to perform such attacks with highly personalised content at a scary volume and scale,” said Andy Thurai, vice president and principal analyst at Constellation Research.

Andy Thurai, vice president and principal analyst, Constellation Research

Other issues associated with generative AI usage include:

  • Leakage of sensitive data like source code or business-related information.

  • Inherent security vulnerabilities present in generative AI applications.

  • Inaccurate responses that can affect corporate outcomes and decision-making.

  • Intellectual property issues, since generative AI apps rely on publicly available training data.

  • Manipulation of training data by attackers, which affects the app’s responses.

  • Bias in the form of discriminatory answers to certain prompts.

Securing New Threat Landscapes

As generative AI opens hitherto unseen attack surfaces, businesses must reevaluate existing risk management strategies. It is essential to address these risks at process, governance, technology, and ethical levels.

Ensuring regulatory compliance around the use of generative AI and large language models like GPT-4 in the workplace can be a challenge. “Companies must employ a multi-faceted approach that combines tech-level solutions, strong policy frameworks and comprehensive awareness programmes to remain compliant amid the growing complexities introduced by generative AI. At the same time, the enterprise must be able to profit from generative AI’s immense potential,” said Anand Mahurkar, founder and chief executive officer of Findability Sciences.

Anand Mahurkar, founder and chief executive officer, Findability Sciences

A case in point is the risk of incorrect or compromised outputs from AI models. These are substantial issues in industries like cybersecurity or healthcare, where inaccurate results can have adverse consequences. “From a security perspective, safeguarding training data integrity is paramount. Another risk is that your adversaries can exploit generative AI to create and disseminate convincing phishing threats at a massive scale. To counter such threats, organisations must adopt AI-powered threat detection mechanisms capable of identifying and neutralising attacks at machine scale,” said Samir Kumar Mishra, director of security business for Cisco India and SAARC.

Samir Kumar Mishra, director of security business, Cisco India and SAARC

Risk mitigation for generative AI-based business apps requires considerable investments in time and effort. Edtech major Duolingo’s integration of GPT-4 is a good case study. Duolingo and OpenAI teams collaborated extensively to improve the basic prototype, with the generation and labelling of large data sets for prompt refinement forming a sizable portion of the exercise. Duolingo used this data to refine its prerelease version of GPT-4. Subsequent user testing revealed the need to keep conversational outcomes on track, which the teams addressed with suitable AI routines and models.

Define, Educate, Integrate

Policy measures are an integral part of establishing the right usage of generative AI. This can be in the form of AI ethics guidelines and contractual agreements. The first step is to develop a comprehensive internal policy outlining permissible and non-permissible uses of AI.

Education and employee awareness about the safe usage of data on generative AI tools is a key element of risk management. “Your ideal internal generative AI usage policy includes the type of usable data, its handling, and consequences for non-compliance. Also, integrate clauses in employment agreements that explicitly mention the company's policy on the use of generative AI and the penalties for violations,” said Mahurkar. This must be accompanied by regular training sessions and updates about the latest AI-related risks.

Establishment of an internal responsible AI use committee can go a long way in establishing best practices. “This committee should consist of stakeholders from various business areas who champion the policy and actively promote its adoption within their respective teams. By fostering cross-functional collaboration, this committee can help shape a culture of responsibility, accountability and ethical AI practices,” said Mishra.

Establish Targeted Controls

Existing defence mechanisms like traffic-monitoring tools, firewall restrictions, security gateways and data loss prevention systems can be repurposed to enforce generative AI usage policies. At the same time, the integration of AI with security products promises to be a game changer.

New-generation AI-powered security tools help govern generative AI usage. When combined with techniques like application programming interface-based secure access to generative AI tools, they can be highly effective in ensuring compliance.

For instance, access monitoring and control tools are useful where businesses build internal generative AI chatbots on existing language model architectures. Such tools can mitigate issues like information leaks or the use of AI-generated source code in internal repositories, while also restricting the transmission of shared information back to the language model.
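
In practice, the access layer in front of such an internal chatbot can start as a simple gateway that checks each caller against an allow-list and writes every request to an audit log before it reaches the model. The Python sketch below is illustrative only: the role names, the query_internal_model placeholder and the logging scheme are assumptions rather than references to any specific product.

```python
import logging
from datetime import datetime, timezone

# Hypothetical allow-list: roles permitted to reach the internal chatbot.
ALLOWED_ROLES = {"engineering", "support", "marketing"}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("genai_gateway")


def query_internal_model(prompt: str) -> str:
    """Placeholder for the call into the internal LLM backend."""
    return f"[model response to: {prompt[:40]}]"


def gateway(user_id: str, role: str, prompt: str) -> str:
    """Access choke point: authorise, audit, then forward to the model."""
    timestamp = datetime.now(timezone.utc).isoformat()
    if role not in ALLOWED_ROLES:
        audit_log.warning("DENIED user=%s role=%s at %s", user_id, role, timestamp)
        raise PermissionError(f"Role '{role}' may not use the internal chatbot")

    # Every prompt is logged before it reaches the model, giving security
    # teams an audit trail for leak investigations and policy reviews.
    audit_log.info("user=%s role=%s prompt_chars=%d", user_id, role, len(prompt))
    return query_internal_model(prompt)


if __name__ == "__main__":
    print(gateway("alice", "engineering", "Summarise our release notes"))
```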

Modern traffic monitoring technology screens outgoing data for personally identifiable information, sensitive data, or corporate secrets. “The choke point can also check the prompt to ensure that the questions and ask are within corporate policies. For example, if an employee asks ChatGPT about criminal activities, those are concerns which HR and law enforcement agencies need to know right away. On the return side, the business must ensure that the answers do not have issues like hate speech, bias, racism, or other threats,” said Thurai.
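
Expressed as code, such a choke point screens the prompt on the way out and the answer on the way back. The sketch below uses simple regular expressions for PII-like patterns and a keyword list for flagged responses; real deployments would rely on far richer classifiers, and every pattern, term and function name here is an illustrative assumption.

```python
import re

# Illustrative outbound patterns: email addresses and payment-card-like numbers.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),   # email address
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),         # card-like digit run
]

# Illustrative inbound terms a business might hold for human review.
FLAGGED_RESPONSE_TERMS = {"hate speech", "slur", "racist"}


def screen_prompt(prompt: str) -> str:
    """Block outgoing prompts that look like they contain PII or secrets."""
    for pattern in PII_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt blocked: possible PII or sensitive data")
    return prompt


def screen_response(response: str) -> str:
    """Withhold model answers that match the flagged-term list."""
    lowered = response.lower()
    hits = [term for term in FLAGGED_RESPONSE_TERMS if term in lowered]
    if hits:
        return f"[response withheld pending review: matched {hits}]"
    return response


if __name__ == "__main__":
    safe = screen_prompt("Summarise last quarter's churn figures")
    print(screen_response(f"[model answer to: {safe}]"))
```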

Moving In The Right Direction

Generative AI is here to stay despite the attendant concerns. The promises it holds for businesses on every front far outweigh the associated risks. CXOs and technology leaders who understand this undeniable reality will reap rich rewards.

Today, the onus is on enterprises to craft risk management strategies that enable optimal usage of generative AI. The efforts and investments associated with these growing pains are an inevitable part of growth. Usage policies and processes complemented by the right technology controls will ensure that the enterprise can tap into the rich dividends that accompany generative AI usage.
