In the information society, algorithms are increasingly employed to make crucial decisions that affect people's lives. By including, excluding, classifying and ranking, algorithms assign prizes and penalties, benefits and liabilities, in both the private and the public sector: health, employment, education, finance, housing, and even criminal justice. Although algorithms are often presented as scientific, objective and neutral, made of numbers, rules and data, this is rarely the case: in fact, algorithms rely on predictive models that involve critical judgements based on questionable opinions, beliefs, values, biases and sometimes prejudices. Algorithmic decisions are also highly resistant to legal scrutiny. As a result, access to justice may be restricted in practice, leaving affected people with no real insight into the grounds on which decisions are made. This paper presents the first Italian cases in which the outcome of an algorithm has been litigated before a court and scrutinizes how the liability of the public administration is affected by these notable changes.
Number of pages: 22
Publication status: Published - 2019