Expert comment

Antipathy or approval for an AI oversight regulatory body?

The UK AI Safety Summit in November is an opportunity for experts and regulators to deliberate on the governance of AI. Unsurprisingly, the daily flow of news on the governance of AI worldwide has been ratcheting up.

We have heard calls for regulation from the very companies that have caused the problems. Appearing before the US Congress in May, Sam Altman of OpenAI appealed to the US government to embark on a legislative initiative that would temper the power of tech companies with state regulation. On 14 June 2023, the European Parliament adopted its negotiating position on the EU AI Act – another step towards bringing the law to fruition. The Brussels position is clear: to put a robust governance regime in place, including calls for the EU AI Act to bring foundation models within its regulatory framework. This has had the effect of making other governance initiatives appear weak.

In July, we saw the Biden Administration secure commitments from leading AI companies – a first step towards establishing guardrails, with bipartisan legislation in the pipeline. Globally, regulators' debates over which regulatory models to adopt nationally suggest that efforts to put ethical frameworks in place are being translated into legal ones. As the range of views from the UK AI Safety Summit emerges, I hope some attention will be directed to the question of establishing an AI oversight regulatory body within national laws.

The UK's proposed model moves away from a central oversight or regulatory body. The UK Pro-innovation White Paper supports “innovation while providing a framework to ensure risks are identified and addressed”, acknowledging that a heavy-handed and rigid approach can stifle innovation and slow AI adoption. The process has heeded industry's warning that “regulatory incoherence” may impede innovation and competition. The White Paper proposes establishing a regulatory sandbox for AI, promising that regulatory coordination “will support businesses to invest confidently in AI innovation and build public trust by ensuring real risks are effectively addressed”, with regulators expected to issue clear guidance.

The motivation for avoiding a central oversight regulator may lie in a report published by the Office for AI. The report details the benefits and costs of either adopting a central UK AI-specific regulator or amending current UK sectoral regulation to account for AI-specific risks. The analysis estimates that adapting existing sectoral regulation would have a smaller negative impact on AI revenue than creating an AI-specific regulator: a central AI-specific regulator could cost £3 billion more in lost AI revenue between 2023 and 2032 than changes to existing sectoral regulation.

However, the Policy Paper published in July 2022, which preceded the White Paper, emphasises regulatory coordination. A lack of coordination amongst existing regulators across the sectors may leave the regulatory landscape without cohesion. The Policy Paper acknowledges that the government must find “ways to support collaboration between regulators to ensure a streamlined approach”, so that organisations will not have “to navigate multiple sets of guidance from multiple regulators all addressing the same principle.”

The discussion around an oversight agency is particularly critical where AI tools are used in the public sector: when such tools affect individuals, oversight of a compulsory algorithmic transparency regime, upholding principles of AI accountability, is essential – a discussion for a follow-up comment.

Elsewhere, Spain is leading the way with the creation of a national agency as an oversight regulator. The European Union's model under the EU AI Act proposes a governance system at the Member State level, building on existing structures, together with a cooperation mechanism at the Union level through the establishment of an EU AI Office – a new EU body to support the harmonised application of the AI Act, provide guidance and coordinate joint cross-border investigations. The Future Society has published a blueprint for the proposed workings of the EU AI Office. In the US, the regulation of AI is fragmented, and establishing a federal body as an oversight regulator would be a formidable move.

I believe an independent oversight regulator imbued with powers, expertise and funding – perhaps even the capability of providing a redress mechanism – could itself be seen as ‘innovative’. If implementing accountability and governance and increasing public trust in the use and application of AI tools are the principles and aims underpinning the White Paper, I am afraid the UK's current position may not offer a panacea for AI's dilemmas. Whether the various stakeholders take a position of antipathy to, or approval of, an independent oversight regulator, I close on a positive note: I am confident there is fertile ground for the continuous evolution of the right approach as we navigate the opportunities and challenges of AI adoption.

Dr Jaspal Kaur Sadhu Singh is Senior Lecturer in Law at the University.

  • Artificial Intelligence expert Sarah Porter will be exploring the state of AI and what that means for our future, with her lecture ‘Mortal Humans, Time Machines & a Future with AI’ on Wednesday 22 November at 6pm. Book tickets.