On Wednesday 14 June, the European Union took a significant step towards adopting its proposed legislation intended to guard against the serious harms that the uncontrolled development and use of AI technologies may bring about.
The European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act, paving the way for talks with EU member states, in the form of the Council of the European Union, on the final form and implementation of the law.
In so doing, the European Union is leading the way on controls and safeguards for AI, and stands a very good chance of establishing itself as the global leader in this area and the potential setter of global standards and principles for the technology.
The legislative approach will be a risk-based one: it looks to the potential risks of the technology and its use, and then provides for controls proportionate to those risks.
As such, the AI Act looks likely to prohibit certain uses of AI as simply “too dangerous”, for example employing AI for “social scoring” (that is to say, classifying people according to their social behaviour or personal characteristics). Other uses also appear on the list of prohibited practices: using AI to “recognise emotions” in law enforcement, border management, the workplace and educational institutions; using AI to “scrape” facial images from the internet or CCTV footage in order to build facial recognition databases; and predictive policing systems (i.e., systems that seek to identify persons likely to commit criminal offences based on profiling, location or past criminal behaviour).
Certain “high-risk” uses of AI will also be controlled, these being uses that pose a significant risk of harm to people’s health, safety, fundamental rights or the environment. The list includes systems used to influence voters and the outcome of elections.
There will also be controls on “generative” AI systems: disclosure that material has been generated by AI, and safeguards against the generation of illegal content. There is also to be a requirement that summaries of the copyrighted data used to train such systems be made publicly available.
In striding towards legislation that takes a firm line on guarding against the harms that may arise from this technology, the European Union is on a potential collision course with the “big tech” companies, many of which are U.S.-based (where legislative control of AI remains quite some way off). However, with even leading figures in the industry calling for protections to be implemented, it remains to be seen how strong industry opposition to the proposed legislation will be. This is likely to turn on industry perceptions as to whether the European approach excessively prioritises controls over AI technology, stifling its development and use, or whether the AI Act strikes the correct balance between the benefits and opportunities on the one hand and the risks on the other.
The European approach contrasts with that proposed by the UK government in its March 2023 white paper titled “A pro-innovation approach to AI regulation”. As the title of that paper suggests, the UK government has its sights firmly set on the possible benefits of AI, and has accordingly proposed a somewhat “light-touch” regulatory regime. The white paper is open for consultation until 21 June 2023, and it will be interesting to see whether the UK government maintains such a light touch when it eventually implements its proposed regulatory framework, or whether it is swayed by the stricter approach that Europe now appears very likely to take to the regulation of AI.
Further briefings on the developing regulatory landscape as applicable to AI will be published in due course.