Humans are full of conscious and unconscious biases. For example, a 2012 study in Quebec showed that, among equally qualified and skilled candidates, those with last names like Ben Saïd were 35 per cent less likely to be called back for an interview than those with last names like Bélanger.
Our machines are learning from this data. AI systems trained on it are effectively taught that “Bélangers” are more qualified than “Ben Saïds.” So when we use AI to predict recidivism in the criminal justice system, to determine loan eligibility or to screen job applications, we further embed systemic discrimination in our institutions. This is unfair and unethical, and it is also a great economic loss. One solution is to teach machines in a way that more closely resembles how the human brain learns.
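To see how a model inherits bias from its training data, consider this toy sketch. The records, group labels and numbers are entirely synthetic (loosely mirroring the 35 per cent callback gap described above), and the "model" is deliberately naive: it simply learns each group's historical callback rate.

```python
from collections import defaultdict

# Synthetic, hypothetical hiring records: every candidate is equally
# qualified, but group "B" historically received ~35% fewer callbacks.
records = [("A", True)] * 65 + [("A", False)] * 35 \
        + [("B", True)] * 42 + [("B", False)] * 58

def train(records):
    """Naive screening 'model': learn each group's observed callback rate."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, got_callback in records:
        totals[group] += 1
        hits[group] += got_callback
    return {g: hits[g] / totals[g] for g in totals}

model = train(records)
# The model faithfully reproduces the historical bias: it scores group A
# higher than group B even though all candidates were equally qualified.
print(model)  # {'A': 0.65, 'B': 0.42}
```

A real screening system is far more complex, but the failure mode is the same: if the training labels encode a discriminatory pattern, a model that optimizes for matching those labels will reproduce it.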