
Thursday 16 November 2017

AI Image Recognition Fooled By Single Pixel Change

According to research, computers can be misled into thinking that an image of a taxi is a dog by altering just one pixel. The finding comes from Japanese researchers who developed methods of tricking widely used AI-based image recognition systems.

Other scientists are now developing 'adversarial' example images to expose the fragility of certain kinds of recognition software. Experts have cautioned that there is no quick and easy way of fixing image recognition systems to stop them from being duped in this manner.

In their research, Su Jiawei and colleagues at Kyushu University made small alterations to a large number of images, which were then analysed by widely used AI-based image recognition systems. All of the systems they tested were based on a type of AI known as deep neural networks.

These systems learn by being trained with many different examples, which gives them a sense of how objects such as dogs and taxis differ. The researchers found that altering one pixel in about 74% of the test images made the neural networks mislabel what they saw.
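To make the idea concrete, here is a minimal sketch of a one-pixel attack. Everything in it is illustrative: the `predict` function is a toy stand-in for a trained deep neural network, and the search is plain random sampling, whereas the Kyushu study used differential evolution to choose the pixel and its colour.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in classifier: a fixed random linear map over pixel values.
# A real target would be a trained deep neural network.
W = rng.normal(size=(10, 32 * 32 * 3))

def predict(image):
    """Return softmax class probabilities for a 32x32 RGB image."""
    logits = W @ image.reshape(-1)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def one_pixel_attack(image, true_label, trials=500):
    """Randomly try single-pixel changes until the predicted class flips."""
    for _ in range(trials):
        candidate = image.copy()
        x, y = rng.integers(0, 32, size=2)           # pixel to change
        candidate[x, y] = rng.uniform(0, 1, size=3)  # new RGB value
        if predict(candidate).argmax() != true_label:
            return candidate  # a single-pixel change that fools the model
    return None  # no fooling pixel found within the trial budget

image = rng.uniform(0, 1, size=(32, 32, 3))
label = int(predict(image).argmax())
adversarial = one_pixel_attack(image, label)
if adversarial is not None:
    print("now classified as:", int(predict(adversarial).argmax()))
```

The point of the sketch is that the attacker only needs to query the model's output; no access to its internal weights is required, which is part of what makes the result worrying.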

Designed Pixel-Based Attacks


The Japanese researchers designed a variety of pixel-based attacks that caught out every state-of-the-art image recognition system they investigated. Mr Su, who led the research at Kyushu, commented that, as far as they were aware, there was no dataset or network that is markedly more robust than the others.

Several other research groups around the world are now developing 'adversarial examples' that reveal the flaws in these systems, according to Anish Athalye of the Massachusetts Institute of Technology (MIT), who has been working on the problem. One specimen made by Mr Athalye and his team is a 3D-printed turtle that one image classification system insists on labelling a rifle.

He told the BBC that more and more real-world systems are beginning to incorporate neural networks, and that it is a big concern that such systems might be subverted or attacked using adversarial examples. He said that although there had been no cases of malicious attacks in real life, the fact that these seemingly smart systems can be fooled so easily is worrying.

Methods of Resisting Adversarial Exploitation


Web giants including Facebook, Amazon and Google are all known to be investigating ways of resisting adversarial exploitation. Mr Athalye stated that this is not some strange 'corner case', and that his team's work has shown a single object can consistently fool a network across viewpoints, even in the physical world.
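As a rough illustration of that claim, the sketch below reuses the same kind of toy `predict` stand-in as before and simply measures how often an adversarial image keeps its fooling label under random rotations and shifts. This is not Athalye's actual method, which optimises the perturbation across transformations rather than just evaluating it afterwards.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 32 * 32 * 3))  # toy stand-in classifier weights

def predict(image):
    """Softmax class probabilities from a toy linear model."""
    logits = W @ image.reshape(-1)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def fooling_rate(adversarial, target_label, samples=200):
    """Fraction of random viewpoints that still produce target_label."""
    hits = 0
    for _ in range(samples):
        view = np.rot90(adversarial, k=int(rng.integers(0, 4)))  # rotate
        view = np.roll(view, int(rng.integers(-3, 4)), axis=0)   # shift rows
        view = np.roll(view, int(rng.integers(-3, 4)), axis=1)   # shift cols
        hits += int(predict(view).argmax() == target_label)
    return hits / samples

adversarial = rng.uniform(0, 1, size=(32, 32, 3))
print("fooling rate over viewpoints:", fooling_rate(adversarial, 3))
```

A perturbation crafted for a single viewpoint typically scores poorly on a check like this; the significance of Athalye's turtle is that its perturbation keeps the rate high even in the physical world.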

He added that the machine learning community does not fully understand what is going on with adversarial examples or why they exist. Learning systems based on neural networks involve creating links between large numbers of nodes, like nerve cells in a brain.

Analysing an image involves the network making many decisions about what it sees, and each decision should lead the network closer to the correct answer.
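The sketch below illustrates that flow with a deliberately tiny network: the input passes through one layer of nodes whose outputs feed a second layer, and the largest final score is taken as the answer. The weights here are random placeholders; a real system learns them from many training examples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two layers of linked "nodes": 32x32 RGB image -> 64 hidden -> 10 classes.
W1 = rng.normal(size=(64, 32 * 32 * 3)) * 0.01
W2 = rng.normal(size=(10, 64)) * 0.1

def classify(image):
    """Each layer refines the previous one's output toward a class label."""
    hidden = np.maximum(0, W1 @ image.reshape(-1))  # first layer of decisions
    logits = W2 @ hidden                            # scores for each class
    return int(logits.argmax())                     # final answer

print("predicted class:", classify(rng.uniform(0, 1, size=(32, 32, 3))))
```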