Regulating AI: All Inclusive Or All Interrupted?

Choosing guardrails over AI regulations offers a strategic and adaptable approach to overseeing artificial intelligence.

(Source: Freepik)

Technology, on its own, has neither morality nor intent. Governments will rightfully want moral considerations built into technology outcomes. In practice, it is the users and the technology experts who shape a technology's adoption who decide its moral and behavioural outcomes. That is where the morality, ethics and integrity of a technology come into question: it will be shaped by the humans who shape it.

Artificial intelligence regulation will need to be built on multi-stakeholder trust and engagement. With no disrespect or malice intended, it cannot be left to policymakers or politicians alone, for AI is the most powerful technology to confront us in known human history. We need technologists, researchers, ethicists and thinkers in our midst to join the policy ideation and formulation process.

Governments expect governance to keep techno-industrial anarchy from spreading into societies. To ensure transparent and fair AI, appropriate and effective governance mechanisms must be established and enforced for AI systems, both internally and externally. This will help ensure the accountability and responsibility of the actors involved in their development and use. Can hard-coded regulations in an evolving field ensure this?

Choosing guardrails over AI regulations offers a strategic and adaptable approach to overseeing artificial intelligence. Guardrails serve as flexible guiding principles, allowing for ongoing adjustments to accommodate the rapid advancements in AI technology. Their strength lies in striking a balance between encouraging innovation and mitigating potential risks. As a society, once we have learned more on the job as AI evolves, including from the expected and unexpected risks we will surely encounter, we can move from guardrails to regulations. What usually dominates this narrative, however, is the political tone. Regulators aspire to oversee AI, yet embarking on the regulatory journey poses challenges. They are accustomed to structured, framework-based, hierarchical markets and to supervising such ecosystems. AI will not necessarily cater or adhere to much of this.

Initiating the regulatory process for AI might seem relatively straightforward, but delineating the boundaries of its scope proves elusive. What precisely should regulators regulate? Whom should they regulate? Unravelling the intricacies of backtracking and stress-testing AI systems presents further complexities. Questions of trust loom large: whom to trust, how to assess credibility, and how to decipher what goes unspoken. This list of inquiries will continually expand, widening and deepening the complexities inherent in regulating the black box of AI.

Forget AI for a while. Take fintechs, for example: we are still struggling to regulate them into mainstream finance. Examine the intersection of technology and finance in a comparatively simpler ecosystem, digital lending. Regulatory frameworks in this domain are in a state of continual evolution, grappling with challenges even in establishing a straightforward whitelist in India. Ongoing supervisory gaps persist among official stakeholders, reflecting the complexities of navigating the regulatory landscape surrounding digital lending, a far simpler one than AI's.

Take, for instance, the global challenge of regulating and exerting influence over App Stores as marketplaces—a topic that remains notably sensitive. Given the current regulatory competence and adequacy of supervisory tools, it's difficult to assert that we will successfully achieve comprehensive regulatory control over AI in the foreseeable future.

AI regulatory development presents a formidable challenge, marked by its inherent complexity and the impossibility of a one-size-fits-all approach. Unlike static regulations, the dynamic nature of AI requires continual evolution akin to the regular updates of software applications. The question that looms is whether the political and policy space is adequately equipped for such agile regulatory upgrades, given the rapid pace of technological advancement. The concern deepens with the doubt that such swift updates might inadvertently sideline the multi-stakeholder conversations each nuanced upgrade requires.

Until a comprehensive globally-homogenised regulatory framework emerges, there's concern that political impatience, coupled with potential limitations in regulatory supervision capabilities, might lead to constraints on AI. In the past century, political and policy discussions primarily revolved around physical and civic infrastructure, aligning especially with the electorate’s understanding of such developments or their absence. In today's context, the narrative must also address digital infrastructure, prompting concerns that, in the pursuit of citizenry protection, there's a risk of inadvertently stifling AI. Simultaneously, the imperative remains to prevent the weaponisation of datasets and finance, and safeguard existing safety-nets and public systems from compromise.

The establishment of regulations or guardrails in any domain must intricately respect the core principles governing the digital landscape. This entails a meticulous consideration of algorithms and authentication protocols, ensuring their reliability and security. Guardrails must navigate the pitfalls of bias and the ethical implications of bots, promoting fairness in technological applications. Striking a balance between competency and commerce is paramount, fostering innovation while maintaining ethical standards in the pursuit of societal benefit.

Data, a cornerstone of the digital age, requires protection, ethical handling, and responsible use. The principles of equity, explainability, and fairness should underscore every regulatory framework, fostering inclusivity and transparency. Engaging with governments and advocating for robust governance structures and global standard regulations becomes imperative, ensuring that the regulations not only meet individual country needs but collectively contribute to the greater good of society.

These guardrails must align with principles upholding the integrity, equity and well-being of the global community, emphasising the convergence of purposes among multiple stakeholders. Their ultimate yardstick is steadfast alignment with the greater good of that community.

Srinath Sridharan is a policy researcher and corporate advisor.

The views expressed here are those of the author, and do not necessarily represent the views of NDTV Profit or its editorial team.
