Like electricity or the internet, artificial intelligence (“AI”) has the potential to change our world. Whilst it may bring huge benefits, there is also a considerable risk of harm.
Many jurisdictions, including the UK and the EU, are currently considering the extent to which AI needs to be regulated and the approach to be taken. As matters currently stand, the UK looks likely to take a different approach from the EU, though time will tell whether divergent approaches are indeed implemented.
I. UK approach: the UK government’s March 2023 white paper
Whilst certain pieces of existing UK legislation already have a potential impact on the use of AI, the UK government is currently considering a focused regulatory approach to AI, having set out its proposals in a March 2023 white paper titled “A pro-innovation approach to AI regulation”.
A regulatory framework to ensure adherence to five key principles
The proposed approach is a somewhat “light touch” one: the intention is to introduce a “context-specific” regulatory framework which focuses on the outcomes AI is likely to generate in particular contexts, so as to determine the appropriate regulation for each.
The framework will be underpinned by five principles as follows:
i. safety, security and robustness – AI systems should function in a robust, secure and safe way with risks being continually identified, assessed and managed;
ii. appropriate transparency and “explainability” – it must be possible to understand how AI systems make their decisions;
iii. fairness – AI systems should be fair in their use and outcomes, and should comply with relevant law;
iv. accountability and governance – to ensure effective oversight of the supply and use of AI systems;
v. contestability and redress – ensuring that the outcomes of AI systems can be challenged and redress sought.
Note: This article has been republished in Architecture & Governance.