The European Commission (‘EC’) published its White Paper on Artificial Intelligence in February 2020, and in April 2021, published its proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) (the ‘AI Act’). With the AI Act, the European Union (‘EU’) was the first out of the gate globally with a proposal for a comprehensive framework for regulating AI, but as discussed below, the UK and others have proposals of their own. An initial consultation period elicited hundreds of responses from across industry, governmental and public bodies, and civil society, which highlight some of the key areas that will shape the legislative debate as the AI Act progresses toward adoption.
A human-centred focus and a wide scope of application
The Explanatory Memorandum accompanying the official text of the AI Act frames the EC’s aim for the new law:
AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being. Rules for AI available in the Union market or otherwise affecting people in the Union should therefore be human centric, so that people can trust that the technology is used in a way that is safe and compliant with the law, including respect for fundamental rights.
Elsewhere, the Memorandum notes that AI is ‘not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being.’
The AI Act runs to 85 articles and 9 annexes, setting out the proposed rules for placing on the market, putting into service and use of AI systems in the EU. Like the General Data Protection Regulation (‘GDPR’), the AI Act will have extra-territorial effect where the ‘output’ of an AI system is used in the EU.
Regulated AI Systems
Articles 5 and 6 of the AI Act identify AI systems which, according to their use, are either prohibited (Article 5) or classed as high-risk (Article 6, together with Annexes II and III); in addition, certain implementations of AI systems which are not deemed to be high-risk give rise to heightened transparency obligations. The classes of AI systems under the AI Act are summarised in the table below:
Prohibited AI practices are categorised as such because they have been determined to be incompatible with the protection of fundamental rights under EU law; as discussed below, the permissibility of certain law enforcement uses of otherwise prohibited AI systems has proven to be a central point of contention since the AI Act's proposal by the EC.
Whilst various AI systems directed at public safety, detection of crime and law enforcement are classed as high risk AI systems, the AI Act does not apply to AI systems ‘exclusively used for the operation of weapons or other military purposes.’
AI systems not falling within the scope of the classes set out above escape the reach of the AI Act. However, as discussed below, the AI Act covers the full lifecycle of AI systems, and a system which initially falls outwith the AI Act may, by its use, become subject to the new rules.
Key definitions in the AI Act
Before considering in detail the proposed requirements for high-risk AI systems, a number of key definitions set out in article 3 of the AI Act merit particular consideration:
Artificial intelligence system (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. The Annex I techniques and approaches cover:
(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and
(c) Statistical approaches, Bayesian estimation, search and optimization methods.
Distributor means any natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties;
Importer means any natural or legal person established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union;
Intended purpose means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;
Placing on the market means the first making available of an AI system on the Union market;
Making available on the market means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;
Putting into service means the supply of an AI system for first use directly to the user or for own use on the Union market for its intended purpose;
Provider means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge;
User means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;
Reasonably foreseeable misuse means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems.
Obligatory design and development requirements for high-risk AI systems
As summarised in the table above, AI systems classified as high-risk under the AI Act fall into two categories: Annex II systems, which comprise safety components for products subject to existing regulation (e.g. medical devices); and Annex III systems, which are classified on the basis of their intended purpose. The EC is also empowered under the AI Act to add further AI systems to Annex III, where such systems (i) fall within any of the uses set out in Annex III, and (ii) pose a risk to health and safety, or of an adverse impact on fundamental rights.
High-risk AI systems must meet a number of key requirements under the AI Act, including:
- The establishment and maintenance of a risk management system which will seek to eliminate or reduce as far as possible the risks of the AI system through design and development. Testing of the AI system must be carried out to identify the most appropriate risk reduction approach, taking account of the technical knowledge, experience, education and training to be expected of the user of the system and the environment of its use. Particular regard shall be given to whether a system is intended to be accessed by, or to impact, children. Testing for risk management need not 'go beyond what is necessary to achieve' the intended purpose of the AI system, though it must take account of 'reasonably foreseeable misuse.'
- Training, validation and testing data must comply with data governance and management practices including relevant design choices; relevant data preparation processes such as annotation, labelling, cleaning, enrichment and aggregation; assessment of the availability and suitability of data sets; and consideration of possible biases. Training, validation and testing data sets must be 'relevant, representative, free from errors and complete,' a requirement which has been the subject of considerable commentary in the responses to the EC consultation, as discussed below.
- Technical documentation must be prepared before the AI system is placed on the market or put into service, and must be kept up to date. The documentation must demonstrate compliance with the AI Act, and must provide sufficient information for the relevant regulatory bodies to assess that compliance; the minimum mandatory content of the technical documentation is set out in Annex IV to the AI Act.
- An additional record-keeping requirement under which high-risk AI systems must be designed to keep operating logs which ‘ensure a level of traceability throughout [the system’s] lifecycle’ which is appropriate for the system’s intended purpose.
- Particular transparency obligations must be met. The design and development of high-risk systems must ensure that their operation is 'sufficiently transparent to enable users to interpret the system's output and use it appropriately.' Instructions must accompany high-risk AI systems and must identify the system's Provider or the Provider's authorised representative. Those instructions must also describe the characteristics, capabilities and performance limitations of the system, including: its intended purpose; the level of accuracy, robustness and cybersecurity against which it has been validated; known or foreseeable circumstances in which the system presents risks to health and safety or to fundamental rights; where relevant, the training, testing and validation data sets used; the human oversight measures implemented (see below); and the expected lifetime of the system.
- High-risk AI systems must be designed and developed such that they can be subject to effective human oversight. This may be achieved, where appropriate, through human-machine interface tools, and must either be built into the AI system or identified by the Provider prior to placing the system on the market, for implementation by the user. Individuals tasked with human oversight must, inter alia, be enabled to: 'fully understand the capabilities and limitations' of the system; decide in a particular situation not to use the system or otherwise disregard its output; and intervene in the operation of the system or interrupt it with a 'stop' button.
- Systems must be designed and developed so as to meet an appropriate level of accuracy, robustness and cybersecurity and must be resilient to ‘errors, faults, or inconsistencies’ which may arise from their use. Systems that ‘continue to learn’ shall be developed to guard against biased outputs resulting from ‘feedback loops’ where prior outputs serve as inputs for future decisions.
Obligations for Providers of high-risk AI systems
The majority of the obligations in relation to high-risk AI systems fall to Providers of those systems (though as described below, others also bear some responsibilities). Providers must ensure their high-risk AI systems comply at all times with the requirements set out above, and must have in place a comprehensive quality management system in aid of such compliance, including:
- A regulatory compliance strategy, including for managing modifications over time to a high-risk AI system;
- Techniques and procedures for the design, verification and quality control and assurance of the system;
- Data management systems and procedures;
- The establishment of a post-market monitoring system which will collect, document and analyse data on the performance of the system throughout its lifetime and allow the Provider to assess compliance continuously;
- Records management procedures and accountability framework for management and staff.
Providers must also be prepared to take corrective action if at any time they consider that a high-risk AI system is no longer in conformity with the AI Act, and be able to withdraw the system if corrective action is insufficient to address the non-compliance. There is, in addition, a general duty to notify the competent authorities in Member States where a system has been made available, if the Provider considers that the system poses a risk to health and safety.
Prior to being placed on the market or put into service, the Provider must ensure that its high-risk AI system undergoes a conformity assessment to confirm that the system complies with the AI Act; such assessments may be by way of self-assessment or conducted by a qualified third party assessor. Providers must then complete the relevant declaration of conformity and affix the appropriate CE marking to their high-risk AI. Certain ‘stand-alone’ high-risk AI systems will also require registration in a new publicly accessible EU database.
For products in a domain listed in Section A of Annex II of the AI Act – those subject to harmonising legislation in respect of safety components and systems – the manufacturer of the product, rather than the Provider, will be responsible for complying with the obligations for high-risk AI systems placed on the market or put into service with the product.
Obligations for Importers, Distributors, and Users of high-risk AI systems
Importers and distributors of high-risk AI systems bear similar responsibilities under the AI Act, including verifying that the relevant conformity assessment has been completed by the Provider, and that the appropriate documentation and marking are in place. Importers will additionally be required to identify themselves as the importer of such systems, either on the system itself or in accompanying documentation. Both importers and distributors have reporting obligations, where they consider that a high-risk AI system is not in conformity with the AI Act, and are required to cooperate with national competent authorities upon reasoned request.
Users must use high-risk AI systems in accordance with the systems’ instructions, subject to compliance with other laws and regulations, either at the EU or national level. Users are permitted to implement the human oversight requirements for high-risk AI in line with their own resources and activities, and to the extent users exercise control over input data, they must ensure it is ‘relevant in view of the intended purpose’ of the system.
Importers, distributors and users of high-risk AI systems will be considered to be Providers of a high-risk AI system, with the attendant obligations under the AI Act, where:
a) They place a system on the market or put it into use under their own name or trade mark;
b) They modify the intended purpose of such a system already on the market or in use; or
c) They make a substantial modification to the system.
Support for innovation and a nod to SMEs
A frequent criticism of the GDPR is that, whilst it may be fit for regulating personal data processing by large, well-resourced organisations, compliance can appear daunting (in complexity and in cost) for smaller organisations, particularly those for whom personal data processing is not at the core of their commercial focus.
The AI Act provides some recognition for smaller organisations seeking to develop AI systems. Title V of the AI Act, entitled 'Measures in Support of Innovation', sets out a 'sandboxing' scheme which may be adopted 'under strict regulatory oversight', involving the national data protection authorities wherever the processing of personal data is involved. The regulatory burden on SMEs appears to be acknowledged, at least, as there is provision for prioritising SMEs' access to the digital hubs and other facilities which feature in the regulation.
Oversight, enforcement and penalties
The implementation and supervision of the AI Act will fall to a new EU AI Board, which will include the European Data Protection Supervisor ('EDPS'), and the national regulatory authorities designated by Member States as the competent authorities for the enforcement of the AI Act. Member States may designate a new, specialised regulatory body for the purpose of the AI Act, though it has been suggested by the EDPS and the European Data Protection Board ('EDPB') that the existing national data protection regulators would be the 'natural' choice for regulating AI, given the common human-centric nature of the two regimes.
National supervisory authorities will monitor compliance across their market, and report to the EC. Where AI systems are found to be non-compliant, national supervisory bodies will have powers similar to those under the GDPR; the AI Act provides a range of fines for non-compliance, from 2% to 6% of global annual turnover (the GDPR provides for a maximum of 4% of global annual turnover).
Challenges on the horizon
The AI Act has garnered considerable attention, even before its official publication; a leaked draft of the AI Act was widely circulated online in advance of the official release. One notable difference in the official document is the treatment of AI systems for real-time remote biometric identification, which in the leaked draft had been classed as 'high risk' but now resides in the list of prohibited AI practices, save for certain law enforcement uses (non-real-time biometric identification systems currently reside in the high-risk classification).
It is clear that remote biometric identification will continue to be a focal point for debate as the AI Act proceeds. The EDPB and EDPS have jointly called for the concept of 'risk to fundamental rights' to be aligned more closely with that under the GDPR. Those bodies have also jointly called for a blanket ban on the use of remote automated recognition of human features in publicly accessible spaces (presently classified as high-risk), covering not only facial recognition but also recognition of individuals' gait, fingerprints, and voices. In a similar vein, Members of the European Parliament adopted a non-binding resolution on October 6th 2021 calling for a moratorium on the use of facial recognition technology in public places for law enforcement purposes.
Responses to initial consultations highlight concerns
A consultation period followed the publication of the AI Act proposal, which drew many responses from across industries, civil society and the public sector. In addition to myriad views on the approach to remote biometric identification, as noted above, particular concerns which emerge from the consultation responses and public statements include:
- Respondents have observed that the present definition of 'AI' is extremely wide, particularly given the inclusion of all 'Logic- and knowledge-based approaches', and that this may unnecessarily stifle innovation. It has been suggested that a suitable definition should focus on systems that learn and adapt over time, which would justify the need for regulation under the proposed rules, with other systems already being adequately governed by existing laws such as the GDPR.
- Accommodation is sought for the use of 'general purpose AI tools', including those distributed as open source software, which may be deployed in high-risk uses without the knowledge of those responsible for placing such AI systems on the market.
- Clarity is sought on the distinction between the use of high-risk AI for research purposes and its operational use, at which point the AI Act obligations take hold. The definition of Provider extends to development of AI systems 'with a view to' placing them on the market or putting them into use; the concern raised is that an AI system developed and used for research purposes may later be identified as having commercial application.
- The requirement for datasets used in training, validation and testing to be 'relevant, representative, free of errors and complete' has been criticised by numerous respondents for a lack of clarity, and as practically impossible insofar as no dataset can be guaranteed free of errors. Interestingly, the data retention rules for personal data under the GDPR are cited as posing an impediment to maintaining a constant dataset over time, as personal data must not be preserved indefinitely without a legal basis.
- In relation to the requirement for human oversight, it has been suggested that the need for individuals to ‘understand fully’ the relevant AI system is both unnecessary to achieve effective oversight, and likely impossible in many, if not most, instances.
- Clarity is sought as to the commercial practices which would be caught by the prohibited practice of using ‘subliminal techniques’ to distort an individual’s behaviour in a manner that is at least likely to result in psychological harm. There is particular concern that, whilst the EC has stated that the prohibition is not intended to apply to targeted online advertising, the present definition leaves room for ambiguity.
- National and international defence bodies have raised the concern that, whilst strictly military applications for AI are excluded from the AI Act, many tools for military application are adapted from commercially developed systems; an overly burdensome set of rules for high-risk AI systems may slow innovation and impact the availability of tools suitable for adaptation for military use.
The UK proposes to ‘go its own way’ with its National AI Strategy
With the UK now outside of the European Union, on September 22nd 2021 the UK government published its ten-year plan for investment in AI research and development, and for the creation of 'the most pro-innovation regulatory environment in the world', set out in its National AI Strategy. The government's plan lays the groundwork for a series of regulatory and investment initiatives to be rolled out across the short term (within 3 months), medium term (6 – 12 months), and long term (12 months and beyond).
In contrast to the AI Act, and perhaps in answer to some of the criticisms outlined above, the National AI Strategy proposes that ‘defence should be a natural partner for the UK AI sector.’ The government timeline states that the UK Ministry of Defence will publish, by the end of 2021, a defence-focussed AI strategy of its own, and will launch a dedicated Defence AI Centre. Elsewhere, NATO has joined the regulatory debate, with the publication in October 2021 of its own AI Strategy.
Whilst it remains to be seen how the UK's AI strategy will be advanced in proposed legislation, it is apparent that the UK seeks to leverage a 'pro-innovation' stance to differentiate its approach from the AI Act. The UK has proposed to follow a similarly distinct path in relation to AI in its consultations on reform of the data protection landscape, including the possibility of removing from the UK GDPR the constraints on fully automated decision-making.
The National AI Strategy also emphasises increased collaboration with the United States on AI initiatives, and makes note of ongoing international projects on AI governance frameworks, including by the Council of Europe and the OECD. Further detail of the UK's plans is to be set out in a White Paper expected in early 2022.
A long road ahead
Those developing AI systems – or currently deploying AI tools – will wish to consider whether those systems are caught by the proposed AI Act, and if so, whether their features meet the proposed framework’s requirements. The final detail of the proposals will, of course, not be known for some time, and the most contentious aspects of the AI Act may well be subject to material changes as the debate – and lobbying – continues.
The AI Act will now follow the EU's legislative process, which itself is likely to extend into 2023; the proposal currently includes a 24-month implementation period following adoption into law, after which the AI Act would be applicable.