Decoding AI Trends And Regulatory Grid

In this dynamic AI ecosystem, one-size-fits-all regulation does not cut it.


The rapid evolution of artificial intelligence is rewriting the rules for modern organisations, consumers, governance and society at large.

Governments around the world know that technology can drive economic growth, but only if policies to foster innovation and manage risks keep pace. Hence the urgency to build a regulatory scaffolding around AI, to harness the power of this technology and teach it to play by humanity's rules, only to be outpaced by AI's alarming speed of evolution. In this shifting landscape of AI innovation and governance, what organisations need are strategic insights into AI trends and AI regulations.

Key AI Trends

Multimodal Models

Working in real time on comprehensively annotated datasets, these models go beyond traditional text-based exchanges. They are accomplished enough to process and integrate text, images, audio and video.

Human-machine interaction becomes intuitive and natural, making for a rich user experience. These models have numerous use cases across industries.

Agentic AI

This next wave of AI is a digital maestro that can automate processes, make autonomous decisions and adapt to changing situations. It has broken out of the narrow box of task-specific AI to evolve into a context-aware agent, entirely capable of making decisions on its own. And even of showing emotion!

It exhibits human-like behaviour but performs with little or no human oversight. An agentic AI can be the assistant that "reaches out" to handle tasks such as buying a plane ticket, cancelling a subscription, filing complaints and paying bills. This revolution is walking among us.

AI Visualisation

This is the space where the classical form of creating content has given way to synthetic content, which is having a field day, flourishing in the internet's deepfake world.

The extensive proliferation of AI-generated content — from marketing campaigns to job applications — raises questions about its genuineness and human discernment. While some of this content is labelled as AI-generated, a massive amount of text and video is released without such classification. The future? Implementation of radical transparency and ethical systems.

Cyber Exploits

Close on the heels of deepfakes is the well-documented trend of cyber exploits. Sophisticated threat actors are increasingly using AI to create exploitative or socially engineered content. In response, organisations' cyber security teams are adopting AI not only to thwart such attacks, but also to move beyond checkbox security by investing in responsible AI behaviour.

A good example of this approach is SEBI's Cyber Security and Cyber Resilience Framework, which provides standards and guidelines for regulated entities to address cybersecurity measures. It is a prescient, forward-looking approach, albeit one that is challenging for organisations to comply with, at least at this stage.

RegAI

Regulatory artificial intelligence, or RegAI, has revolutionised the navigation of complex regulatory compliance frameworks and automates regulatory processes at lightning speed. Regulators around the world are leveraging it to sift through vast datasets, analyse them and identify patterns.

The Securities and Exchange Board of India is using AI to review documents and applications, as are the Reserve Bank of India and the insurance regulator. While regulators use AI to assist their processes, organisations are adopting it to pre-vet documents and ensure all compliance requirements are covered. This makes for a fascinating development in which the regulator's AI models interact with enterprise AI models, exchanging data over agreed protocols until the transactions are completed.

India An Outlier In Global AI Regulatory Labyrinth

The EU AI Act is the world's first comprehensive legal framework for AI. It is gaining attention as a potential global template and applies extraterritorially to any entity engaged in AI development or deployment whose AI systems affect EU citizens. The Act regulates through risk-based classification, categorising AI systems as posing "unacceptable risk", "high risk", "limited risk" or "minimal risk". Will the EU AI Act have the same "Brussels Effect" as the GDPR? Unlikely. AI being a strategic subject, development can shift to less-regulated geographies if over-regulation is seen as a growth-stifling measure.

India's approach to regulating AI has been starkly different. It is lauded for adopting a middle path in which automated processing by AI is allowed as long as transparency, fairness and consent are adhered to. The aim is to promote innovation in the AI ecosystem. India adopted this approach even while framing the country's data protection framework, which applies only to digital data and in which consent plays a key role. This is in sharp contrast to the EU's General Data Protection Regulation, which extends to offline data as well. In the same vein, India allows unrestricted processing of publicly available data.

Key Considerations

  • Regulatory Awareness: Entities must future-proof their data. What is built today may be deployed a couple of years down the line, and regulating such products would take even longer. Organisations must therefore be mindful of storing legacy data, as it can become a ticking time bomb in the face of compliance and processing needs.

  • Risk-based Classification: Both AI deployers and AI developers must classify AI use cases based on risk. All possible risks — uses that could be unsuitable or invalid — must be specified in platform documentation and in contracts with end users. For instance, as the automobile industry explores AI-driven connected car systems, it is important to state that the technology is still emerging.

  • Human Intervention: Regardless of the use-case classification, human oversight should be non-negotiable for AI applications that can irreparably damage human interests. Imagine AI models erring to generate a false medical diagnosis or wrongfully denying an insurance claim.

  • Transparency and Accountability: Opacity is a characteristic of many AI models, which means they carry a risk of deception. To avoid falling prey to dark patterns, organisations must enforce transparency and accountability obligations.

  • Sandboxes: Regulatory sandboxes play a critical role in encouraging innovation in the AI space, giving innovators room to experiment creatively without legal consequences while organisations retain complete control over the AI systems being tested.

  • Befriending Ethics: Trust is the basis of ethical AI, and it operates at three levels: organisations maintaining the trust of their external and internal users; AI developers bearing the ethical onus to inform customers of the risks, be transparent about underlying bias and provide an effective grievance mechanism; and public faith in the government and regulators.
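The risk-based classification and human-oversight points above can be sketched as a simple triage routine. This is purely illustrative: the use-case names, tier mapping and default behaviour below are assumptions for the sketch, not taken from the EU AI Act or any regulator's guidance. A real assessment is a legal and contextual exercise, not a lookup.

```python
from enum import Enum

class RiskTier(Enum):
    # Tier names mirror the EU AI Act's categories; the mapping below is illustrative only.
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of use cases to tiers (assumed for this sketch).
_USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "insurance_claim_decision": RiskTier.HIGH,
    "internal_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Return the assumed risk tier for a named AI use case.

    Unknown use cases deliberately default to HIGH, forcing human review
    rather than silently treating them as low risk.
    """
    return _USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

def requires_human_oversight(use_case: str) -> bool:
    # Per the 'Human Intervention' consideration: unacceptable- and high-risk
    # uses (false diagnoses, wrongful claim denials) keep a human in the loop.
    return classify_use_case(use_case) in (RiskTier.UNACCEPTABLE, RiskTier.HIGH)
```

The design choice worth noting is the fail-safe default: anything not explicitly classified is treated as high risk, which matches the article's insistence that oversight, not convenience, is the baseline.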


In this dynamic AI ecosystem, one-size-fits-all regulation does not cut it. The risks and responsibilities differ vastly between creating a chatbot for internal use and developing a foundation model. Regulators often do not see this distinction.

To navigate such nuances, organisations must turn to professional advisers who can unpack this distinction through scale-based analysis and offer strategies that are not generic but tailored to each organisation's circumstances.

There is also the option of self-regulation in certain sectors. The gaming industry, acting on the industry-led OGI framework, has already set an example. However, strict regulatory enforcement of AI would continue in domains such as individual freedom, health, surveillance and monitoring (beyond what is permitted under law), and sovereign functions.

Arun Prabhu is a partner (head-technology) at Cyril Amarchand Mangaldas.

Disclaimer: The views expressed here are those of the author and do not necessarily represent the views of NDTV Profit or its editorial team. 

