Before embarking on this post of reflections on 2023, I wish to recognise the destitution and suffering experienced by many this year. Nothing I write here can compare to the vulnerability felt by many worldwide. Many of us are weary of and overwhelmed by the year’s events. In my field of interest, the conversation was dominated by ChatGPT, the impending EU AI Act, the UK AI Safety Summit, the issuance of the White House AI Executive Order and the debate over whether the UK Pro-Innovation White Paper is the correct approach to governing AI. ‘AI’ and ‘hallucinate’ were named words of the year, the latter given a new meaning referring to when AI produces false information, and the governance of AI has become the hobbyhorse of governments, international organisations and civil society.
Significant milestones have been achieved in the governance of AI, shifting the gears towards a regulatory frontier. My starting point is May, when the G7 meeting in Hiroshima produced a communiqué listing, amongst other things, the recurring mantra of the need for trustworthy AI. It emphasised the importance of interoperable governance frameworks, with the realistic caveat that ‘approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across’ its members. This led to the publication of the G7 Leaders’ Statement on the Hiroshima AI Process on 30 October, a day before the UK AI Safety Summit. Like most guidance, the Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI Systems provides a set of principles to counter the risks presented by AI systems.
On the very same day, the White House Executive Order on AI was issued, marking firm progress towards a governance framework, with the clear message of ‘seizing the promise and managing the risks of artificial intelligence (AI)’. The Order has a clear direction and tone, with defined focus areas. At the UK AI Safety Summit on 1 and 2 November, the Bletchley Declaration was published (less compelling in its structure than the White House Order), representing a collective voice of the participating countries (albeit a limited number, with the concerning absence of representation from the Global South), and AI Safety Institutes were established in the UK and the US.
These initiatives are broad-brush, lacking the granular detail needed to operationalise their principles. It is worth noting that most governments do not depart from the prevailing view that some regulatory approach is required.
On 8 December, the EU firmed up its ambition to enact the first comprehensive regulatory framework, with negotiators from the European Parliament and the Council reaching a provisional agreement setting out red lines and guardrails on the development and use of AI systems. The reaction from businesses was mixed, ranging from general euphoria, including recognition of the edge non-EU businesses could gain by preparing to align their risk assessments with the principles embedded in the EU framework (since these address the concerns of stakeholders), to sentiments that innovation could be impeded and EU businesses disadvantaged in competition with the US, UK and China.
The challenge for businesses is putting in place risk assessment models and adopting assurance techniques to ensure risks and vulnerabilities are addressed, minimising liability in the event of a breach of laws such as the EU AI Act.
The coming year will see a push to foster international governance of AI, with the hope of greater participation from and engagement with developing countries. With India as Lead Chair, the Ministers of the Global Partnership on Artificial Intelligence (GPAI) convened in New Delhi on 13 December; the resulting Ministerial Declaration closed with commitments to diversity in its membership and to addressing the manifold challenges of AI ‘with a particular focus on low and middle-income countries to ensure a broad range of expertise, national and regional views and experiences based on our shared values.’ Further, a report is due in 2024 from the United Nations multi-stakeholder High-level Advisory Body on AI, convened ‘to undertake analysis and advance recommendations for the international governance of AI.’
Closer to home, on 13 December, the House of Commons Science, Innovation and Technology Committee heard from Michelle Donelan, the Secretary of State for Science, Innovation and Technology. With no plans to introduce lex specialis AI legislation, the Government is expected to publish its response to the consultation on its Pro-Innovation White Paper in 2024.
The upward trend of governance evidenced in 2023 shows no signs of slowing in 2024. Fundamentally, in the next few years we will bear witness to successes and failures in finding a regulatory scheme that works without impeding the benefits of AI. The rallying cry was summarised in a closing remark made by Sir Tim Clement-Jones on a panel at the Digital Ethics Summit 2023: ‘Regulation is not the enemy of innovation.’
Dr Jaspal Kaur Sadhu Singh is Senior Lecturer in the School of Law, Policing, and Social Sciences. Jaspal’s current research interest is in legal and ethical debates involving AI governance, including technology law challenges surrounding free speech and expression. Her teaching focuses on ICT & E-commerce Law and Data Governance Law, and she delivers the AI Governance: Law and Policy module offered on the LLM programme at the University.