We're sending off 2023 with the news that, by the beginning of next year at the latest, a new European regulation on artificial intelligence (AI) will finally be in place, entering into force and applying within two years (the EU AI Act). On 14 June 2023, the European Parliament approved its negotiating position on the EU AI Act, and talks with EU countries in the Council on the law's final form have been under way since. The EU AI Act is intended as a fundamental, harmonizing legislative text. But is it really, and what do we read between the lines of the new regulation?
To begin with, one cannot fail to note that although everyone is commenting on the EU's "swift" reaction to the regulation of AI, this is not the case at all. AI is not only ChatGPT. AI existed long before that and is developing in many different directions; indeed, the explanatory memorandum to the EU AI Act notes that as early as 2017 the EU called for a "sense of urgency to address emerging trends", including "issues such as artificial intelligence". If even urgent regulation takes a minimum of six years, not counting the period before entry into force and application, how long would an ordinary legislative initiative take? With AI developing at breakneck speed even as you read these lines, its regulation in Europe is clearly overdue.
Next, it took months for the EU even to settle on an adequate legal definition of AI, one that should be neither so broad as to encompass talking baby toys, nor so narrow as to leave room for unregulated risks. Of particular importance has been the question of whether AI is treated as software or as a more comprehensive system of techniques and approaches. Lobbying circles in individual European countries are still arguing over the final wording.
Furthermore, the attempt is to create a legal framework that gravitates around the GDPR, giving the impression that it too aims to level out disparities in law enforcement at the national level, something the GDPR itself has failed to do and which now falls to the new regulation. In addition, the relationship between data protection legislation and AI regulation is unclear and rests on core clauses of the GDPR, which, for example, require transparency and detailed information when algorithms are used and/or data subjects are profiled. With AI, this turns out to be highly case-specific and often impossible, because disclosing the information would affect trade secrets, intellectual property rights in source code, and so on.
It should also be noted that the regulation is complex, relies on fairly nebulous definitions of which types of AI fall into each of the risk-based categories, and may be difficult for individual businesses to implement. This is because the legal text covers a wide range of AI systems and technologies and is not divided by sector. That, in turn, can stifle innovation, as the legislation's requirements may be perceived as too burdensome. The new regulation could also increase red tape and costs for businesses, with some industries already warning of the dangers of over-regulation.
Last but not least, the EU AI Act may prove ineffective in preventing damage from AI systems, because it does not cover all the potential risks associated with AI. This is precisely why one of the original legislative ideas, to regulate AI through separate, business-specific laws, might have been more appropriate than the current blanket approach.
In light of all of the above, a broad public discussion is sorely needed on how different AI systems can and should be specifically regulated and assessed; only then can we look at the new legal framework with enthusiasm. Until then... Merry Christmas, and don't expect any presents from the EU legislator.
By Irena Georgieva, Managing Partner, PPG Lawyers