In February 2020, the European Commission issued its White Paper on Artificial Intelligence – A European Approach to Excellence and Trust. The White Paper is the prelude to a new EU regulatory framework for AI that aims to minimize the risks of AI while seizing the opportunities it offers.
High-Risk Applications in Focus
The focus of the new EU AI regulations is high-risk AI applications. Under the proposed definition, an AI application is “high risk” if it is used in a sector where significant risks can be expected to arise, such as healthcare, transportation, energy, and the public sector, and only in use cases with high exposure. For non-high-risk AI applications, the EU Commission aims to introduce a voluntary labelling system, leaving relatively large room for self-regulation and self-assessment in lieu of government controls. While this approach is commendable in that it leaves sufficient room for innovation for most AI developments, it should be subject to review: the high-risk category should cover any AI application processing special categories of personal data, meaning not just healthcare data but also biometric data, genetic data, and personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership. Furthermore, to ensure the ethical and fair use of governmental data, all AI applications using personal data from public-sector datasets should be deemed high-risk.
New Legal Requirements for AI Developers, Deployers, and Users
The planned legislation will introduce the following obligations for developers, deployers, and certain types of users of AI systems.
Training data: Datasets used to train AI must be sufficiently broad and representative, covering all relevant dimensions of gender, ethnicity, and other grounds of prohibited discrimination, in order to avoid dangerous outcomes and unlawful discrimination.
Records and documentation: The developer/deployer of the AI application must keep accurate records and documentation regarding the training datasets and testing process, the programming and training methodologies, and the processes and techniques used to build, test, and validate AI systems. In certain cases, they must keep the datasets themselves.
Transparency: The developers of AI applications must provide information on the AI system’s capabilities and limitations, the purposes for which the system is intended, and the expected level of accuracy.
Robustness and Accuracy: AI systems must be robust and accurate, and their outcomes must be reproducible; this means the AI application must be resilient against both overt attacks and subtler attempts to manipulate the data or the algorithms themselves.
Human Oversight: Developers must ensure that the output of an AI system does not take effect until it has been reviewed and validated by a human, or at least that human intervention is ensured afterwards. Human monitoring of AI operations is also required.
We consider these obligations commendable and in line with global trends. Nevertheless, we would caution that the new administrative and documentation requirements, especially the requirement that developers/deployers keep the training datasets themselves, raise potential inconsistencies with the GDPR and could lead to overregulation.
Furthermore, these administrative burdens could overwhelm SMEs and put companies established in the EU at a competitive disadvantage: the volume of proposed documentation, registration, testing, checking, and certification is unlikely to foster an innovation-friendly environment and could reduce the competitiveness of EU-based companies in the global AI race.
New Authorities for Conformity Assessment of AI Systems
The proposal includes provisions empowering regulatory authorities to assess compliance with the new framework, with procedures for testing, inspecting, and certifying high-risk AI systems, including checks of algorithms and datasets in the pre-launch development phase. A prior conformity assessment would be mandatory for all developers, deployers, and corporate users of high-risk AI. The new authorities would also be entitled to ex-post controls and continuous monitoring of compliance.
We are generally against administrative controls, especially in areas so central to innovation. In our view, supervisory authorities would not necessarily have the technical knowledge to assess the risks inherent in algorithms. Furthermore, those algorithms usually constitute trade secrets that AI companies are not keen to reveal. It would be more important to safeguard access to data and databases and to ensure transparency around data processing, especially for data stemming from, or built with the use of, public funds.
Overall, although the White Paper on AI sets out the right principles for AI development, such as trustworthiness, transparency, and safety, overregulating this area could hinder innovation for EU-based competitors and thereby endanger their competitiveness on a global scale.
By Dora Petranyi, Partner, and Katalin Horvath, Senior Associate, CMS Budapest