The badly brought up artificial intelligence
2021.02.25
The above examples indicate that we generally classify an AI algorithm as efficient if it produces a better result than we could achieve with an intelligent investment of resources, that is, with people. Such an interpretation of efficiency, however, does not rule out the algorithm discriminating against certain groups.
Given the problems outlined above, many would conclude that AI is simply not suited to such tasks, because in certain situations it is unacceptably unfair. We expect decisions made in a ‘mathematical’ way to be fair, fairer than we could be ourselves, but to achieve this we would first have to formulate the concept of justice mathematically. In a recruitment procedure, it might be socially most just to pick the five people to be hired at random from a hundred applicants. From the individual’s point of view, however, this is highly unfair, because it ignores the skills of the candidates and the effort they have invested. For gender and race there is an emerging consensus that discrimination must be avoided, but other cases can be far more complicated. The data sets we use to train AI already contain our own injustices and biases. Ideally, this training data would represent not the current situation but the expectations we judge to be fair.
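To make the idea of ‘formulating justice mathematically’ a little more concrete, here is a minimal sketch of one widely used criterion, demographic parity, checked via the so-called disparate-impact ratio. The function names, the 0.8 rule-of-thumb threshold, and the sample data are illustrative assumptions, not something from this article or any specific hiring system.

```python
# A minimal sketch of one way to express fairness mathematically:
# compare selection rates across groups (demographic parity).
# All names and data below are hypothetical illustrations.

from collections import defaultdict

def selection_rates(candidates):
    """Return the fraction of candidates selected within each group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in candidates:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    A common rule of thumb flags values below 0.8 as potentially unfair."""
    return min(rates.values()) / max(rates.values())

# Hypothetical output of a screening algorithm: (group, selected?)
candidates = [("A", True), ("A", True), ("A", False), ("A", False),
              ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(candidates)
print(rates)                          # {'A': 0.5, 'B': 0.25}
print(disparate_impact_ratio(rates))  # 0.5 -> well below the 0.8 threshold
```

Even this tiny example shows the difficulty the paragraph above describes: demographic parity is only one of several competing mathematical definitions of fairness, and a system can satisfy one definition while violating another.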
‘A robot may not injure a human being or, through inaction, allow a human being to come to harm.’ (Isaac Asimov, First Law of Robotics)
Gábor Hraskó
Head of CCS Division, INNObyte