The Problem with Machines? They Aren’t Human Enough

Humans are full of conscious and unconscious biases. For example, a 2012 study in Quebec showed that, among equally qualified and skilled candidates, those with last names like Ben Saïd were 35 per cent less likely to be called back for an interview than those with last names like Bélanger.

Our machines are learning from this data. They are being taught through AI systems that, in effect, "Bélangers" are more qualified than "Ben Saïds." So, as we use AI to predict recidivism in the criminal justice system, to determine loan eligibility, or to screen job applications, we are further embedding systemic discrimination in our institutions. This is unfair and unethical. It is also a great economic loss. One solution is to teach machines in a way more similar to how the human brain learns.
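To make the mechanism concrete, here is a minimal sketch using entirely hypothetical, synthetic data (the group labels, counts, and callback rates below are illustrative assumptions, not the study's actual dataset). A naive predictor fit to biased historical hiring decisions simply reproduces the disparity, even though both groups are equally qualified:

```python
# Synthetic historical records: (group, qualified, called_back).
# Both groups are equally qualified, but group B was historically
# called back 35% less often -- mirroring the kind of gap the study found.
history = (
    [("A", 1, 1)] * 140 + [("A", 1, 0)] * 60 +   # group A: 70% callback rate
    [("B", 1, 1)] * 91 + [("B", 1, 0)] * 109     # group B: 45.5% callback rate
)

def train(data):
    """'Train' a naive model: predict each group's historical callback rate."""
    counts = {}
    for group, _, called_back in data:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + called_back)
    return {g: k / n for g, (n, k) in counts.items()}

model = train(history)
print(model)  # {'A': 0.7, 'B': 0.455}

# The bias in the training data is now baked into the predictor:
gap = (model["A"] - model["B"]) / model["A"]
print(f"Group B predicted {gap:.0%} less likely to be called back")
```

Nothing in the code mentions qualifications at all; the model discriminates purely because the historical labels did, which is exactly how systems trained on biased data embed that bias.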

Source: Montreal Gazette
