Reliance on artificial intelligence (AI) is now common practice in the legal profession. Whether through smart contracts, analytics or case management software, legal professionals increasingly use machine learning and AI-based tools to deliver legal services quickly, accurately and cost-efficiently. Predictive justice – the use of AI-enabled technologies to predict the outcomes of legal disputes – is, however, a more recent development. Confronted with increased workloads and structural and personnel inefficiencies, courts around the world view the adoption of predictive justice tools as inevitable. Some countries are already developing and applying AI-based tools in their judicial systems to predict the outcome of cases, identify those that need to be prioritised, or analyse the merits of an appeal. In Brazil, at least half of the country’s courts use AI tools, such as Project VICTOR of the Supreme Court and the ATHOS System of the Superior Court of Justice, both of which function as appeal filters. Other countries, while still at a preliminary stage in adopting such tools, expect AI to shape the future of dispute resolution. Master of the Rolls Sir Geoffrey Vos has predicted that over the next 20 years AI will become a core part of the UK’s justice system.
Advantages and pitfalls of predictive justice
Predictive justice tools undeniably bring benefits. They can achieve greater accuracy, predictability, consistency, and speed than humans, and thus have a positive impact on efficiency and productivity. However, they also raise questions. To what extent can human judges be replaced by algorithms? And are fundamental human rights and liberties threatened by the deployment of predictive justice tools?
The main concern is the inability of algorithmic justice to imitate human reasoning, whether analogical and deductive reasoning or the creative and interpretative cognitive processes of judging. Human judges use intuition and conscience to mediate between the abstract law and the concrete facts. Predictive justice, on the other hand, identifies patterns in historical data and reproduces them. Predictability, however, comes at a cost: judicial outcomes could become endogenous, with past outputs recycled as future inputs, and ossified, losing their capacity to adapt, experiment and innovate. This danger is even more acute in common law systems, where the law develops through a case-by-case process of interaction between advocates and judges; predictive tools could thus diminish the flexibility that characterises common law reasoning.
This AI-induced ‘judicial conformism’ not only ignores evolving societal intricacies, but could also reinforce existing biases, inequalities and discriminatory patterns entrenched in the historical data used to train the algorithms. These concerns are exacerbated by the ‘black box’ problem, the term used to describe the invisibility of the process through which an AI system reaches a specific output. The opacity of the decision-making process of predictive justice tools can undermine trust in the justice system and make errors and biases harder to detect, thus limiting access to justice. Coupled with a lack of accountability, the lack of transparency and explainability could endanger the credibility of the judicial system.
Predictive justice in tax cases
In the tax field, intrinsic information and power asymmetries pose additional challenges. Predictive algorithms require access to data from previous court cases to produce accurate predictions, but such data are not widely and freely accessible. While the UK provides free online access to its case law through various databases (BAILII and, more recently, The National Archives), this is not the case in most civil law jurisdictions. In the EU, regulatory frameworks range from mandatory publication of cases only from a specific judicial level to selective publication of some cases at the judges’ discretion. In tax disputes, where the tax authority is the constant party while the taxpayers alternate, limited access to previous tax case law creates an uneven playing field between the two litigants. Even in jurisdictions where all case law is published, tax authorities have exclusive access to ongoing tax disputes, as well as to other non-disclosable documents, including contracts, tax returns and rulings. Data inaccessibility impairs the accuracy of prediction tools and hence raises procedural justice concerns. Only if taxpayers have access to the same input data as the tax authorities would their due process rights, including the rights to a fair hearing and a fair appeal, be fully protected.
Predictive justice will most probably shape the future of dispute resolution in most countries; the House of Lords has already analysed the impact of its use in the criminal justice system. It is therefore crucial to strike a balance between its benefits and pitfalls. Making all case law and tax administration data freely accessible would be a step in this direction; improving the transparency and explainability of the predictive tools would be another. An orchestrated policy and technology effort is thus required to employ predictive justice to the best advantage of citizens and taxpayers, in the UK and beyond.
Vasiliki Koukoulioti, Lecturer in Tax Law at the Centre for Commercial Law Studies, Queen Mary University of London