Expert comment

AI governance race – to halt, ramble or run

We have been deluged with news headlines on who will lead the AI race, which generative AI tool is better, and whether a moratorium on AI research would make a dent. At the same time, there is a clamour to keep abreast of legal approaches that might establish some regime to rein in the technology and its creators, who are none other than the big tech companies that many feel have largely escaped regulation. The call to pause AI experiments has attracted big names, but remonstrations are insufficient.

We cannot seek respite in a wait-and-see approach. Lessons have been learnt from our lax attitude towards the environment and climate, and today's youth are seeking inter-generational justice for the irreparable damage perpetrated on our world. As a disciple of sorts of the theory of longtermism, I would emphasise that our actions and decisions, or our failure to take them, will have a significant impact on future generations.

The dissonance between the maleficence and beneficence of AI is apparent, and so is the disagreement over how to regulate or govern AI tools. With state bans on generative AI (the Italian ban has since been lifted) and China imposing conditions on the content and roll-out of generative AI tools, we are beginning to see the run to regulate.

Settling the character of regulation is essential. Should regulators rely on a prescriptive legal framework, or is a light-touch approach preferable? What overarching principles must guide the form regulation takes? In its recent special issue, ‘The AI Revolution’, the New Scientist emphasised that transparency will be key to regulating the new technology. This is but one view among myriad opinions. One may argue that any regulatory regime’s central aim must be accountability: developers and deployers of AI tools must be held to whatever standard or measure is imposed on them, whether through ethical or legal normative frameworks.

Ethical frameworks have been adopted, or are being drawn up, by state and non-state actors to promote accountability, using labels such as “altruistic AI”, “Responsible AI” or “Trustworthy AI”. International organisations, tech corporations and nation-states have contributed to a large inventory of such guidelines. Their aim is for developers and deployers of AI systems to apply a set of values, weighing ethical considerations within a sandbox before unleashing those systems. Adopting such frameworks promotes a manner of self-governance, a kind of soft law, that encourages a degree of governance of AI without impeding innovation. The risk of this approach is that it could be reduced to ethics washing.

The UK government is promoting a pro-innovation, light-touch approach to regulating AI, announced on 29 March 2023 with the publication of its White Paper. Not surprisingly, tech businesses, investors and AI research centres have welcomed it. The proposal sets out five principles that provide the scaffolding of a regulatory sandbox for developers and deployers of AI systems, and lists follow-up actions for the government to issue guidance, tools and risk-assessment templates to operationalise them. The five principles are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress. Whilst the list may appear a rather minimalistic distillation of the principles contained in AI ethical frameworks, we can be assured that it is but a first iteration.

While many have criticised this approach, the proposal clearly indicates that this first step may evolve into legislation “…to ensure regulators consider the principles consistently.”

A more prescriptive instrument, seen as the first step towards legislating on AI, was proposed in April 2021: the EU Artificial Intelligence Act. On 27 April 2023, a preliminary agreement was reached amongst members of the European Parliament to push it towards enactment. The law’s main aim is to create an environment of trust in the use of AI systems by classifying them according to level of risk, with the result that a system may be banned or subjected to strict, minimal or no obligations.

As one of the two countries, alongside China, seen as AI leaders, the U.S. has joined the legion of countries considering AI regulation. The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) has published an AI Accountability Policy Request for Comment to elicit views from the public.

We have recently heard warnings of the dangers posed by AI from the likes of Geoffrey Hinton, one of the leading architects of the technology. The call, it seems, is to consider halting a technology that has gone largely unchecked and to speed up the race to regulate it.

Dr Jaspal Kaur Sadhu Singh is Senior Lecturer in the School of Law, Policing, and Social Sciences. Jaspal’s current research interest is in legal and ethical debates involving AI governance, including technology-law challenges surrounding free speech and expression. Her teaching focuses on ICT & E-commerce Law and Data Governance Law, and she delivers the AI Governance: Law and Policy module offered on the LLM programme at the University.