
Divergence On AI One Of Biggest Risks To Growth For Organisations: KPMG Report

Regulatory divergence on AI is one of the biggest risks to growth for global companies and is likely to impact operations in 2024 and beyond, the report said.

(Source: rawpixel.com/Freepik)

Artificial intelligence has become a transformative technology across industries, with investments in AI increasing more than fivefold between 2013 and 2023. While AI presents opportunities, it also creates governance gaps that organisations must address.

Regulatory divergence on AI is one of the biggest risks to growth for global companies and is likely to impact operations in 2024 and beyond, according to a recent report from KPMG International. The global regulatory environment for emerging technology is growing more complex and fragmented, and businesses may have to spend more time, money and effort steering their companies through uncharted waters, the report said.

AI Governance Gaps

KPMG’s CEO Outlook survey showed that 70% of corporate leaders are making generative AI their top investment priority. The value of private equity and venture capital-backed investments in generative AI companies more than doubled in 2023 to $2.18 billion, from $1 billion in 2022.

However, it is important that business leaders prioritise developing a strategic AI framework, one that includes an understanding of the political, technical and ethical risks that AI presents. The report noted that companies will likely have to take the initiative themselves rather than rely on global governance structures for safeguards. Moreover, the speed of AI's progress and its generative nature mean that any attempt at regulation will quickly become outdated.

Increased Focus On Cybersecurity

When implementing AI in their operations, companies must recognise that the technology demands heightened alertness to cybersecurity threats and a more nuanced approach to reputational considerations, the report noted.

Malicious actors will see regulatory gaps as an opportunity, particularly as AI use becomes more accessible and critical threats increasingly originate from motivated individuals. Companies must ensure the right infrastructure and strategies are in place to embrace AI in a responsible, human-focused way.

Ethical and responsible AI deployment is crucial to maintaining trust among stakeholders. Organisations should prioritise transparency, accountability and fairness in their AI systems to mitigate potential risks and ensure the technology is integrated responsibly into their operations. Companies will have to match technology investment with comparable investments in security safeguards and a human-centric AI strategy, the report suggested.
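Fairness, in particular, lends itself to simple quantitative checks. The Python sketch below computes the demographic parity difference, a widely used fairness metric, for a hypothetical loan-approval model; the metric choice, the simulated data and the 0.1 review threshold are all illustrative assumptions, not recommendations from the KPMG report.

```python
import numpy as np

def demographic_parity_difference(predictions: np.ndarray,
                                  groups: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.

    `predictions` holds 0/1 model decisions; `groups` holds 0/1
    membership in a protected group. A value of 0.0 means both
    groups receive positive decisions at the same rate.
    """
    rate_a = predictions[groups == 0].mean()
    rate_b = predictions[groups == 1].mean()
    return abs(float(rate_a - rate_b))

# Simulated decisions from a hypothetical loan-approval model,
# deliberately skewed so the two groups are approved at different rates.
rng = np.random.default_rng(1)
groups = rng.integers(0, 2, 1_000)
predictions = (rng.random(1_000) < np.where(groups == 0, 0.6, 0.45)).astype(int)

gap = demographic_parity_difference(predictions, groups)
# The 0.1 threshold is an arbitrary illustrative policy choice.
status = "review model" if gap > 0.1 else "within tolerance"
print(f"demographic parity gap: {gap:.3f} ({status})")
```

Running a check like this on every model release turns an abstract principle such as fairness into a number that can be tracked and escalated.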

As companies adopt generative AI technologies, they also need to protect consumer data privacy and security, avoid unethical AI-powered marketing and customer-profiling practices, and ensure high-quality data collection and management.

Forging A Trusted AI Integration Path

The report suggested that organisations should use tailored frameworks designed to embed trusted AI principles into various stages of their AI initiatives. Risk assessments can be applied to individual AI/machine learning algorithms as well as to entire AI programmes.
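As an illustration of how such an assessment might be operationalised at both levels, the sketch below scores individual models against a handful of trusted-AI criteria and rolls the results up to a programme-level figure. The criteria, weights and thresholds are assumptions made for this sketch, not a framework prescribed by KPMG.

```python
from dataclasses import dataclass

# Illustrative trusted-AI criteria and weights; these are assumptions
# for this sketch, not taken from the KPMG report.
CRITERIA_WEIGHTS = {
    "data_privacy": 0.30,
    "fairness": 0.25,
    "explainability": 0.20,
    "security": 0.25,
}

@dataclass
class ModelRiskAssessment:
    """Risk assessment for a single AI/machine learning algorithm.

    Each criterion is scored from 0 (no concern) to 1 (maximum risk).
    """
    model_name: str
    scores: dict

    def weighted_risk(self) -> float:
        # Weighted average of the per-criterion risk scores.
        return sum(CRITERIA_WEIGHTS[c] * s for c, s in self.scores.items())

    def rating(self) -> str:
        # Cut-offs are arbitrary illustrative choices.
        risk = self.weighted_risk()
        return "low" if risk < 0.3 else "medium" if risk < 0.6 else "high"

# Assess individual models, then aggregate across the whole programme.
assessments = [
    ModelRiskAssessment("credit_scoring", {"data_privacy": 0.7, "fairness": 0.6,
                                           "explainability": 0.5, "security": 0.4}),
    ModelRiskAssessment("chat_assistant", {"data_privacy": 0.4, "fairness": 0.3,
                                           "explainability": 0.8, "security": 0.5}),
]
for a in assessments:
    print(f"{a.model_name}: {a.weighted_risk():.2f} ({a.rating()})")

# One simple programme-level rule: the riskiest model sets the
# programme's overall risk.
print(f"programme risk: {max(a.weighted_risk() for a in assessments):.2f}")
```

The same rubric can serve both a single-algorithm review and a portfolio-wide view, which is what makes a tailored framework reusable across AI initiatives.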

Accountability throughout the AI lifecycle is also necessary to integrate trust into organisations’ broader machine learning processes. This includes establishing and implementing governance, policies, procedures and operating models that span the AI ecosystem, including the training, evaluation and continuous monitoring of AI models.
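Continuous monitoring, for example, is often implemented as an automated check that a model's live inputs still resemble the data it was trained on. The sketch below uses the population stability index, a common drift measure in model risk management; the simulated data and the 0.2 alert threshold are conventional rules of thumb used here for illustration, not the report's method.

```python
import numpy as np

def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    """Population stability index (PSI) between a training-time
    feature distribution ('expected') and a live one ('actual').

    Rule of thumb: PSI below 0.1 is stable, 0.1-0.2 warrants review,
    and above 0.2 indicates significant drift.
    """
    # Derive bin edges from the training distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_counts, _ = np.histogram(expected, bins=edges)
    actual_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, clipping to avoid log(0).
    p = np.clip(expected_counts / expected_counts.sum(), 1e-6, None)
    q = np.clip(actual_counts / actual_counts.sum(), 1e-6, None)
    return float(np.sum((q - p) * np.log(q / p)))

# Simulated example: live traffic has drifted from the training data.
rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 10_000)
live_feature = rng.normal(0.5, 1.2, 10_000)

psi = population_stability_index(training_feature, live_feature)
if psi > 0.2:
    print(f"PSI={psi:.3f}: significant drift, flag model for review")
else:
    print(f"PSI={psi:.3f}: distribution looks stable")
```

Scheduling a check like this against each production model, and routing alerts to an accountable owner, is one concrete way governance and operating models translate into day-to-day practice.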

Organisations must also anticipate the impacts of regulations and compliance requirements on their AI systems. Detailed checks should be conducted to align AI practices with international and industry-specific regulations, and AI systems must be updated to keep them compliant in an evolving regulatory landscape.
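One lightweight way to make such checks repeatable is to encode each requirement as a named control and run every AI system against the list on a schedule. The system record and the three controls below are hypothetical placeholders, not a mapping of any actual regulation.

```python
from typing import Callable

# Hypothetical metadata for one AI system; the field names are
# assumptions for this sketch, not a standard schema.
system = {
    "name": "customer_support_bot",
    "processes_personal_data": True,
    "has_human_oversight": True,
    "model_card_published": False,
    "last_bias_audit_days_ago": 400,
}

# Each control pairs a human-readable label with a predicate over the
# system record. The controls are illustrative, not drawn from any
# specific regulation.
CONTROLS: list[tuple[str, Callable[[dict], bool]]] = [
    ("personal data processed only with human oversight",
     lambda s: not s["processes_personal_data"] or s["has_human_oversight"]),
    ("model documentation published",
     lambda s: s["model_card_published"]),
    ("bias audit conducted within the last 365 days",
     lambda s: s["last_bias_audit_days_ago"] <= 365),
]

failures = [label for label, check in CONTROLS if not check(system)]
if failures:
    print(f"{system['name']} is non-compliant: {failures}")
else:
    print(f"{system['name']} passes all controls")
```

When regulations change, updating the control list and re-running it across the inventory of AI systems gives a current compliance picture without a fresh manual review each time.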
