Wednesday 26 April 2017

Biased Bots: Human Prejudices Sneak into Artificial Intelligence Systems


Biased bots are here, with human prejudices seeping into their AI

Most AI experts believed that artificial intelligence would give robots and systems objectively rational, logical thinking in the future. But a new study points down a darker path for AI, one in which the machines act as a reflection of us and their AI is prejudiced with human notions.

It has been found that when common machine learning programs are trained on ordinary human language from the web, they are likely to acquire cultural biases, which can become embedded right into the patterns of their word associations. These biases range widely, from an innocuous preference for certain flowers to objectionable views about race or gender.
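The idea of bias living in "the patterns of wording" can be made concrete: such programs represent each word as a vector, and a word's bias can be measured by whether it sits closer to one set of attribute words than another. The sketch below is a minimal, hypothetical illustration with made-up toy vectors; real systems learn vectors with hundreds of dimensions from billions of words.

```python
import math

# Toy 3-dimensional "embeddings" (hypothetical values, for illustration only).
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [0.8, 0.2, 0.1],
    "unpleasant": [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 means the words are used in similar contexts."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attr_a, attr_b):
    """Positive if `word` associates more with attr_a than with attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# "flower" leans toward "pleasant", "insect" toward "unpleasant" -
# purely because of the statistics baked into the vectors.
print(association("flower", "pleasant", "unpleasant"))
print(association("insect", "pleasant", "unpleasant"))
```

The point of the sketch is that no one wrote a rule saying flowers are pleasant; the preference falls out of the geometry of the learned vectors, which is exactly how cultural stereotypes can ride along unnoticed.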

Security experts have stated that it is critical to address the rise of biases in machine learning as early as possible, as they can seriously impact the systems' reasoning and decision making in the future. In the coming days we will be turning to computers to handle a number of things, ranging from natural language translation for communication to online text searches and image categorization.
Fair and just

Arvind Narayanan, an assistant professor of computer science at the Center for Information Technology Policy (CITP) at Princeton, has stated that artificial intelligence should remain impartial to human prejudices in order to offer better results and judgment. He asserted that fairness and bias in machine learning have to be taken seriously, as our modern society will depend on it in the near future.

We might soon find ourselves in a situation wherein modern artificial intelligence systems become the frontrunners in perpetuating historical patterns of bias, without us even realizing it. Such an outcome would be socially unacceptable, leaving us stuck in the old times rather than moving forward.

An objectionable example of bias seeping into AI

Back in 2004, a study was conducted by Marianne Bertrand of the University of Chicago and Sendhil Mullainathan of Harvard University. These economists ran a test in which they sent out about 5,000 otherwise-identical resumes in response to over 1,300 job advertisements.

The only thing they varied was the applicants' names, which were either traditionally European American or African American, and the results they received were astonishing: candidates with European American names were 50 percent more likely to get an interview than candidates with African American names. Another Princeton study showed that the set of African American names evokes more unpleasant associations than the set of European American names when run through an automated, artificial-intelligence-based system.

Therefore it has become a necessity to distance AI from these biases and prevent cultural stereotypes from perpetuating further into the mathematical instructions of machine learning programs. It should be the task of the coders to ensure that the machines of the future reflect the better angels of human nature.
