Artificial Intelligence in legal systems: The right to a human judge should be guaranteed at all stages of the proceedings (CCBE)

The Council of Bars and Law Societies of Europe (CCBE) published a position paper on the proposal for a regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act). 

On 21 April 2021, the European Commission presented a proposal for a regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. The proposal is supplemented by 9 annexes. 

The CCBE previously issued comments on the communication on the digitalisation of Justice in the EU, a response to the consultation on the European Commission’s White Paper on Artificial Intelligence, and its own considerations on the legal aspects of Artificial Intelligence. 

In the recent paper, the CCBE further develops its position on several aspects of the proposal for an Artificial Intelligence Act (hereafter “the AIA” or “the proposal”). 

In particular, the CCBE considers that: 

• Despite the choice of a risk-based approach, the proposal should contain specific provisions on the use of AI in the field of Justice. 

• The proposal must contain clearer prohibitions in Article 5. Any type of social scoring should be prohibited, as should the automated recognition of human features in publicly accessible spaces and the use of biometrics by AI systems to categorise individuals into clusters. 

• A judge should not be allowed to delegate all or part of his/her decision-making power to an AI tool. In the field of Justice, not only should automated decision-making by AI systems be prohibited, but also the use of AI systems which produce “decisions” of a nature that might tempt a human judge simply to adopt them uncritically, effectively rubber-stamping what would in effect be automated decision-making. 

• The entire decision-making process must remain a human-driven activity, and human judges must be required to take full responsibility for all decisions. A right to a human judge should be guaranteed at all stages of the proceedings. Annex III.8 and Recital 40 should clarify that, where an AI system may be used to "assist" judicial authorities, the possibility of its being used, in effect, to reach decisions or to formulate the expression of such decisions is excluded. 

• The proposal should definitively exclude the use of AI tools which may infringe a person's fundamental rights, for example for the purposes of so-called “predictive policing” or for determining the risk of future offending as an aid to decisions on the granting of bail, the imposition of a sentence following conviction and decisions concerning probation, and, generally, during prosecution and trial. Furthermore, the output of an AI system should not, of itself, be treated in judicial proceedings as having the status of evidence. 

• The principles of transparency and explainability must be strictly observed. In cases where the manner in which an AI system produces an output is not transparent, or where that output cannot be sufficiently explained, the output must not be taken into account by a law enforcement authority and must be removed from the file. 

• The AIA should define the notion of “judicial authority”, as mentioned in Recital 40 and Annex III.8. 

• The use of an AI system to apply the law to a “concrete set of facts” should be excluded and the relevant deletions should be made in Recital 40 and Annex III.8. 

• The transparency obligations laid down in Article 13 must be strengthened. 

• The exception to the principle of transparency, laid down in Article 52, paragraph 1, for certain AI systems intended to interact with natural persons should be discarded. 

• There should be a ban or moratorium on the use of automated technologies in border and migration control until they are independently assessed for compliance with international human rights standards. 

• The proposal should limit uses and applications of AI systems that undermine access to social rights and benefits. 

• Specific provisions should be adopted on AI liability. The following issues must be considered: 

  • the notion of product; 
  • the lack of foreseeability in the functioning of AI systems;
  • the addressee of liability;
  • the defences; 
  • the type of damage and the victims; 
  • the rules of evidence and the reversal of the burden of proof in certain situations; and
  • the question of whether there should be mandatory insurance.

(source: ccbe.eu / photo: pixabay.com)
