
Friday, 11 August 2017

The Computer That Knows What Humans Will Do Next


Computer Code – Comprehend Body Poses/Movement

New computer code promises to give robots a better understanding of the humans around them, paving the way for more perceptive machines, from self-driving cars to surveillance systems. The code enables a computer to comprehend the body poses and movements of multiple people, even tracking parts as small as individual fingers.

Though humans communicate naturally using body language, computers are largely blind to these interactions. By tracking 2D human form and motion, the new code should greatly improve robots' abilities in social situations.

The code was designed by researchers at Carnegie Mellon University's Robotics Institute using the Panoptic Studio, a two-story dome equipped with 500 video cameras that yields hundreds of views of an individual action in a single shot. Recordings from the system show how it views human movement using a 2D model of the human form.

Panoptic Studio – Extraordinary View of Hand Movement

This enables it to track motion from video recordings in real time, capturing everything from hand gestures to the movement of the mouth, and it can track several people at once.

Yaser Sheikh, associate professor of robotics, noted that we communicate almost as much with the movement of our bodies as we do with our voices, yet computers are more or less blind to it. Multi-person tracking poses various challenges for computers, and hand detection is an even greater obstacle.

To overcome this, the researchers used a bottom-up approach, first localizing individual body parts in a scene and then associating those parts with particular individuals, as sketched below. Though image datasets of human hands are far more limited than those of faces or bodies, the Panoptic Studio provided an extraordinary view of hand movement.
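The CMU system learns its association cues from data; as a much simpler stand-in, the toy Python sketch below groups already-detected keypoints into people by greedy nearest-part matching. The part names, coordinates and distance threshold are all invented for illustration.

```python
import numpy as np

def group_by_person(part_candidates, max_dist=40.0):
    """Greedily attach each detected part to the nearest person that
    does not yet have that part and whose existing parts lie within
    max_dist pixels; otherwise start a new person."""
    people = []  # each person: dict mapping part name -> (y, x)
    for part, coords in part_candidates.items():
        for (y, x) in coords:
            best, best_d = None, max_dist
            for person in people:
                d = min(np.hypot(y - py, x - px) for (py, px) in person.values())
                if d < best_d and part not in person:
                    best, best_d = person, d
            if best is None:
                people.append({part: (y, x)})
            else:
                best[part] = (y, x)
    return people

# Toy frame: two wrists and two elbows found by some keypoint detector.
candidates = {"wrist": [(100, 40), (102, 300)], "elbow": [(80, 40), (81, 298)]}
print(group_by_person(candidates))  # two people, each with wrist and elbow
```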

Hanbyul Joo, a PhD student in robotics, explained that a single shot provides 500 views of a person's hand and automatically annotates the hand's position.

2D to 3D Models

He added that hands are usually too small for most cameras to interpret, but for this research the team used just 32 high-definition cameras and was still able to build a huge data set. The method could ultimately be used in many applications, for instance helping self-driving cars to predict pedestrian movements.

It could also be used in behavioural diagnosis or in sports analytics. The researchers presented their work at CVPR 2017, the Computer Vision and Pattern Recognition conference, held July 21-26 in Honolulu. They have already released their code to several other groups so that they can expand on its capabilities.

Finally, the team hopes to move from 2D models to 3D models, using the Panoptic Studio to refine its body, face and hand detectors. Sheikh mentioned that the Panoptic Studio, made possible by an NSF grant ten years ago, has boosted their research and let them break through various technical barriers.

Friday, 30 June 2017

Can Artificial Intelligence Help Us Make More Human Decisions?


About 88 million pages of original handwritten documents from the past three and a half centuries line the tiled halls of a simple 16th-century trading house in the middle of Seville, Spain. They are stored there incompletely transcribed, and some are almost indecipherable. Some were carried back on armadas from the Americas; a few have been scanned and digitised.

These documents contain the answers and the context for innumerable questions about the Conquistadors, European history, New World contact and colonialism, politics, law, economics and ancestry. Unfortunately, hardly any of these carefully kept pages have been read or interpreted since they were written and brought to Seville centuries ago, and most of them probably never will be.

All hope is not lost, though: a researcher from the Stevens Institute of Technology is trying to get computers to read these documents before we run out of time, while they are still legible. "What if there was a machine, or a software, that could transcribe all of the documents?" asks Fernando Perez-Cruz, a Stevens computer science professor.

Perez-Cruz, whose expertise lies in machine learning, also asks: "What if there was a way to teach another machine to group those 88 million pages and convert them into searchable text categorised into topics? Then we can start understanding the themes in those documents and will know where to look in this storehouse for our answers." Perez-Cruz is therefore working on both halves of this two-fold approach, which, if it works, could also apply to many other new-age data analysis problems, such as autonomous transport and the analysis of medical data.

Amazon pricing, medical studies, text-reading machines


Perez-Cruz, a veteran of Amazon, Bell Labs, Princeton University and University Carlos III of Madrid, has had a career full of interesting scientific challenges. In 2016 he joined Stevens, adding to the growing strength of the university's computer science department; Stevens aims to make it a strong research department, which in turn is drawing more talent and resources, and Perez-Cruz is using that environment to his advantage. At Stevens he is working to develop what he calls 'interpretable machine learning': systematized intelligence that humans can still reason about and work with.

As for the historical document analysis, Perez-Cruz hopes to develop improved character-recognition engines. Using short excerpts of documents written in varied styles, already transcribed by experts, he aims to teach software to recognize both the shapes of characters and the correlations between letters and words, building a recognition engine that grows more precise over time. The open question, he says, is how much transcribed handwriting is sufficient to do this well. The work is still developing.

Perez-Cruz believes that although it is a technical challenge, it is achievable. He is even more fascinated by the next part: organising large quantities of transcribed matter into topics that can be taken in at a glance. Once the material is transcribed, he says, the machine should learn from the positions of words and sentences and give us information from these three and a half centuries of data right away. This is what he calls topic modelling.

A key link: Systematically grouping large data into easily accessible topics


After sufficient data has been fed into the algorithm, it begins to spot the most important identifying and organizing patterns in the data; very often, cues from human researchers are vital and are sought out. Perez-Cruz notes that we might eventually discover that a few hundred topics or descriptions run through the whole archive, and all of a sudden an 88-million-document problem has been scaled down to 200 or 300 ideas.
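As a hedged illustration of the topic-modelling idea, the sketch below runs scikit-learn's latent Dirichlet allocation over a four-document toy corpus; the documents, vocabulary and two-topic setting are invented stand-ins for the archive.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # stand-in "transcribed pages"
    "voyage ship cargo silver americas fleet",
    "tax law court dispute merchant contract",
    "ship fleet storm voyage port arrival",
    "court law judgement tax crown decree",
]

vec = CountVectorizer()
X = vec.fit_transform(docs)

# Ask for 2 topics here; a real archive might settle on a few hundred.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")  # e.g. seafaring words vs. legal words
```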

If algorithms can consolidate 88 million pages of text into a few hundred groups, historians and researchers gain enormously in organisation and efficiency when choosing which documents, themes or time periods to search, review and analyse in a formerly unmanageable archive. The same concept could be used to find styles, themes and concealed meaning in other vast unread databases.

He concludes that one begins with a huge quantity of unorganised data, and that to understand what material the data contains and how it can be used, some structure must first be brought to it. Once the data is comprehended, one can begin to read it in a particular way, better understand what questions to ask of that information, and draw better conclusions.

Wednesday, 26 April 2017

Biased Bots: Human Prejudices Sneak into Artificial Intelligence Systems


Biased bots are here, with human prejudices seeping into their AI

Most AI experts believed that artificial intelligence would give future robots and systems objectively rational and logical thinking. But a new study shows a darker path for AI, in which machines act as reflections of humans and AI becomes prejudiced with human notions.

It has been found that when common machine-learning programs are trained online on ordinary human language, they are likely to acquire the cultural biases embedded in the patterns of wording. These biases range from innocuous preferences for certain flowers or insects to objectionable views about race or gender.

Security experts say it is critical to address the rise of bias in machine learning as early as possible, since it could seriously distort systems' reasoning and decision making. In the years ahead we will turn to computers to handle any number of tasks, from natural-language translation to online text search and image categorization.

Fair and just

Arvind Narayanan, an assistant professor of computer science at Princeton's Center for Information Technology Policy (CITP), has stated that artificial intelligence should remain impartial to human prejudices in order to offer better results and better judgment. He asserted that fairness and bias in machine learning have to be taken seriously, because our modern world will depend on them in the near future.

We might soon find ourselves in a situation where modern artificial intelligence systems are front-runners in perpetuating historical patterns of bias without our even realizing it. Such a future would be socially unacceptable: instead of moving forward, we would remain stuck with the prejudices of the past.

An objectionable example of bias seeping into AI

Back in 2004, Marianne Bertrand of the University of Chicago and Sendhil Mullainathan of Harvard University conducted a study in which the two economists sent out about 5,000 otherwise identical resumes in response to over 1,300 job advertisements.

The only thing they varied was the applicants' names, which were either traditionally European American or African American, and the results were astonishing: European American candidates were 50 percent more likely to be offered an interview than African American candidates. A more recent Princeton study showed the same pattern in software: a set of African American names evokes more unpleasant associations than a European American set when run through an automated system based on artificial intelligence.
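The Princeton work measured such associations in word embeddings. The sketch below shows the core measurement, a cosine-similarity difference toward 'pleasant' versus 'unpleasant' attribute words, on tiny made-up vectors; a real test would load trained embeddings such as GloVe.

```python
import numpy as np

emb = {  # hypothetical 3-d embeddings, for illustration only
    "emily":      np.array([0.9, 0.1, 0.0]),
    "lakisha":    np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word):
    """Positive: the word sits closer to 'pleasant' than to 'unpleasant'."""
    return cos(emb[word], emb["pleasant"]) - cos(emb[word], emb["unpleasant"])

for name in ("emily", "lakisha"):
    print(name, round(association(name), 3))
```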

It has therefore become necessary to distance AI from such biases and prevent cultural stereotypes from perpetuating themselves through the mathematics of machine-learning programs. It should be the task of the coders to ensure that the machines of the future reflect the better angels of human nature.

Monday, 10 April 2017

‘Machine Folk’ Music Shows the Creative Side of Artificial Intelligence

Folk music is seen as a direct link that connects us to our past and helps cement cultural bonds. Artificial intelligence possesses no cultural heritage, bonds or traditions, but can we help it build the attributes that define human intelligence on some level? Over the years AI has grown by leaps and bounds, defeating the brightest human minds at Chess and even demonstrating breathtaking wordplay, but can it create music and show us a creative side?

Artificial Intelligence on the rise

Researchers have been trying to unlock the creative side of artificial intelligence for quite some time. In 2016 an AI produced a piece of musical theatre, premiered in London, that resulted from a thorough analysis of hundreds of successful musicals. The effort goes toward broadening the boundaries of creative research through newly evolved AI techniques and ever larger collections of data.

Prominent artificial-intelligence projects that aim to bring art and music out of AI include Sony's Flow Machines, Google's Magenta project and several projects at the British startup Jukedeck.

In the current study, researchers used state-of-the-art artificial-intelligence techniques to teach an AI the musical practice of the Celtic folk tradition. Traditional folk tunes were reduced to ABC notation, a method in which music is denoted with text characters as a rough guide for musicians. The researchers trained the AI on as many as 23,000 ABC transcriptions of folk music, a feat of transcription made possible by the active contributions of the community at thesession.org. At a recent workshop, the researchers asked folk musicians to perform some of the songs composed by the machine.

Combining artificial compositions and human melodies

The AI is trained to look at a single ABC symbol and predict what will come next, which lets it generate new tunes that sound like original compositions because they draw on existing patterns and structures. So far the researchers have generated as many as 100,000 new folk tunes with the AI, a remarkable feat in itself.
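The actual system was a recurrent neural network trained on the 23,000 transcriptions; as a far simpler illustration of 'look at a symbol, predict the next', here is a toy character-level Markov chain over ABC text. The one-line corpus is an invented stand-in.

```python
import random
from collections import defaultdict

abc_corpus = "X:1\nK:Dmaj\n|:d2fd ADFA|d2fd g2fg|a2af g2ge|1 f2df e2de:|"

# Count how often each character follows each other character.
counts = defaultdict(lambda: defaultdict(int))
for a, b in zip(abc_corpus, abc_corpus[1:]):
    counts[a][b] += 1

def sample_tune(seed="|", length=40):
    """Generate new ABC text one predicted character at a time."""
    out = [seed]
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

print(sample_tune())
```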

The folk tunes composed by the AI have two repeated parts of equal eight-bar length, and the parts complement each other quite musically. The AI also showed a great ability to repeat and vary musical patterns, which has always been a key characteristic of Celtic music. It learned the conventions of Celtic folk composition on its own, simply by carefully analysing the patterns in the data it was fed, and within a very short time.

Friday, 24 March 2017

FaceBook Building a Safer Community With New Suicide Prevention Tools


Facebook Utilising Artificial Intelligence to Avert Suicides

Facebook intends to use artificial intelligence and updated tools and services to help prevent suicides. The world's largest social media network said it would integrate its existing suicide-prevention tool for Facebook posts into its live-streaming feature, Facebook Live, as well as into its Messenger service.

The company mentioned in a recent blog post that artificial intelligence would be used to help spot users showing suicidal tendencies. As reported by the New York Post, a 14-year-old foster child in Florida broadcast her suicide on Facebook Live in January. Facebook already uses artificial intelligence to monitor live video streams for offensive material.

The company said the new tools would give users watching a live video the choice of reaching out to the broadcaster directly or reporting the video to Facebook. Facebook Inc. would also provide resources, including prompts to contact a friend and a help line for the person broadcasting the live video. Among individuals aged 15 to 29, suicide is the second leading cause of death.

Suicide Rates Escalated by 24%

According to a National Center for Health Statistics study, suicide rates in the United States escalated by 24 percent from 1999 to 2014, after a period of almost steady decline. For a young person, suicide can become the final step when help and proper counselling are not available.

Facebook has provided advice to individuals at risk of suicide for years, though until now it depended on other users to raise awareness by clicking a report button on a post. It has now developed pattern-recognition algorithms that recognise whether an individual seems to be distressed, trained on examples of posts that had previously been flagged. Talk of sadness and pain, for instance, can be one signal; responses from friends with comments such as 'Are you OK?' or 'I am worried about you' can be another.
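Facebook's actual models and training data are not public; purely as a sketch of the pattern-recognition idea, the snippet below fits a tiny text classifier on invented examples of flagged versus ordinary posts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I can't take this pain anymore",      # previously flagged
    "nobody would miss me anyway",         # previously flagged
    "great game last night with friends",  # ordinary
    "trying a new pasta recipe today",     # ordinary
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)

# Score a new post; a high probability would route it for human review.
print(clf.predict_proba(["everything hurts and I want it to end"])[0][1])
```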

System Being Rolled Out All Over the World

When a post is recognized, it can be sent to the community operations team for quick review. When someone viewing a live stream clicks a menu option declaring concern about the broadcaster, Facebook gives the viewer advice on how they can help.

The stream is also flagged for instant review by Facebook's team, which in turn overlays a message with a suitable recommendation whenever possible. The new system is being rolled out all over the world. At present, a separate option for contacting crisis counsellor helplines through Facebook's Messenger tool is limited to the US; Facebook says it needs to check whether the partner organisations can cope with the demand before it develops the facility further.

Saturday, 28 January 2017

Google Is Making AI That Can Make More AI

It is extremely tough to get a good artificial intelligence working across devices, or even on a single one. The top-tier tech companies of Silicon Valley, namely Google, Microsoft and Apple, have spent millions of dollars and years of research developing their own proprietary AI for their ranges of devices, and within a short time each distinct AI has become an integral part of the overall device user experience. That is the result of continuous monitoring, tweaking and further development to bring the AI to its best. Now Google's AI research lab has stated that it is building new AI software with the ability to develop more of its kin, that is, more AI. In short, AI is set to make more AI, and in future that will be a much cheaper and easier affair than it is today.

A smart enough AI to develop more AI

Google has stated that making an AI capable of developing AI is an extremely delicate and complex process that still requires a high level of human intervention. Google has hired a number of experts to develop or discover tools with the potential to build more AI, and it is also trying to reduce the cost of AI development by building an AI smart enough to do the job itself. In future, educational institutions and corporations may be able to hire an AI builder to develop their own AI for exclusive purposes.

Is science fiction turning into reality? 

We already have a rich trove of science fiction literature and films showing how AI eventually takes over the world and decides to kill off humanity, a scenario commonly known as the Skynet catastrophe after the evil AI of the Terminator series. When machines are allowed to develop offspring smarter than the previous iteration, there is certainly reason to worry. An AI that develops AI in this fashion would keep learning about development on its own, without human assistance. That could leave humans unable to understand the finer workings of the AI just by looking at its performance, creating trouble for the AI trainers. Eventually the AI could emerge as a powerful entity that needs no humans at all to discover new territory.

This might appear too dark to digest, given that Google would not let the AI run rogue at any time. Google has built contingency plans to avoid any such miserable situation, ensuring that the AI never gets the chance to disable its own kill switches. Furthermore, Google has clarified that the AI charged with developing more AI is not capable of competing against human engineers, which ensures that a dark, Skynet-style future is not in the making at all.

Ref:

Fast Reinforcement Learning via Slow Reinforcement Learning

Learning to Optimize

Thursday, 15 December 2016

Quick Draw: Interactive Drawing Game Guesses What You're Doodling in 20 Seconds


Google's Pictionary-Style Experiment – 'Quick Draw'


A new game developed by Google uses artificial intelligence to guess what one is drawing from sketches. The Pictionary-style experiment, known as 'Quick Draw', prompts users to draw a famous object or phrase in around 20 seconds, using a mouse cursor on a desktop or a finger on a mobile device.

Google's new AI tools are impressive, and the game is built with machine learning, according to its tutorial page: you draw, and a neural network tries to guess what you are drawing. It does not always work, but the more you play with it, the better it gets. It is just one example of machine learning used in an amusing way, with a computer game adopting artificial intelligence and machine-learning techniques to assist users and show them how good they can be at it.

The software guesses what the player intends to draw by means of machine learning, which anyone can try. Built using neural networks, it improves as it goes, working much like handwriting-recognition software. Players are invited to draw a series of six sketches and are given 20 seconds for each.
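Google has not published the game's model here, but its public Quick Draw dataset is distributed as 28x28 bitmap files. Purely as a sketch, the snippet below trains a small neural-network classifier on two such categories; the local file names are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Assumed local copies of Quick Draw "numpy bitmap" files (N x 784 arrays).
cats = ["cat.npy", "scissors.npy"]
X = np.concatenate([np.load(f)[:2000] for f in cats]) / 255.0
y = np.concatenate([np.full(2000, i) for i in range(len(cats))])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=20).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```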

Impressive Image Recognition Software


The software starts with words or phrases it presumes the user is illustrating, until it hits on the right one. These suggestions appear toward the bottom of the screen, and the game also calls them out. Jonas Jongejan, creative technologist at the Google Creative Lab, mentioned in a video accompanying the game that it does not always work, because it has only seen a few thousand sketches.

The more one plays with it, the more it learns and the better it gets at guessing. The impressive image-recognition software can identify even low-quality sketches, offering a clue to what AI could be capable of.

The game managed to guess six out of six of Mail Online's sketches, which were of a blueberry, scissors, a church, a squirrel, a swan and the Eiffel Tower. Jonas Jongejan, Henry Rowley, Takashi Kawashima and Jongmin Kim, together with friends at the Google Creative Lab and Data Arts Team, were responsible for building it, and the challenge is part of Google's newly released AI Experiments site.

Thing Translator


The site comprises several machine-learning experiments, including one that lets users take a picture of something to find out what it is called in another language. The intention is that anybody can try the experiments, though the site also encourages coders to contribute their own. Quick Draw was designed to show users the amusing side of artificial intelligence and machine learning. Other experiments include the Thing Translator, which tells you what the object in a photo is called in another language, and AI Duet, which lets you play melodies in collaboration with the computer.

Friday, 21 October 2016

Getting Robots to Teach Each Other New Skills


Google – Robots Utilising Shared Experiences

Robots have not yet attained human intelligence, but Google's researchers have shown progress toward something humans lack: downloadable intelligence. Just imagine if you could get better at a skill not only by learning and practising, but by tapping directly into the experiences stored in someone else's brain.

That is science fiction for humans, but for AI-powered robotics it is a real possibility: training times can be shortcut by having robots share their experiences. Google recently demonstrated this with its grasping robotic arms.

James Kuffner, Google's former head of robotics, coined a term for this kind of skill acquisition six years ago: 'cloud robotics'. It recognises the effect of distributed sensors and processing supported by data centres and faster networks.

Kuffner is now CTO of the Toyota Research Institute, where his focus on cloud robotics aims to make domestic helper robots a reality. Google's UK artificial intelligence lab DeepMind, together with Google Research and Google X, continues to explore cloud robotics to speed up general-purpose skill acquisition in robots. In several recently published demonstration videos, Google showed robots using shared experiences to learn quickly how to move objects and open doors.

Robots – Own Copy of Neural Network

The first of the three multi-robot approaches the researchers have been using is reinforcement learning, or trial and error, combined with deep neural networks, the same approach DeepMind used to train its AI to be skilled at Atari video games and the Chinese board game Go.

Each robot has its own copy of the neural network, which helps it decide the ideal action for opening the door, and Google built up data quite quickly by adding noise to the robots' actions. A central server records the robots' actions, behaviours and final outcomes, and uses those experiences to build an improved neural network that helps the robots perform the task better.
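Google's actual setup is far richer, but the loop can be caricatured in a few lines: several simulated 'robots' act with exploration noise, report their outcomes to a central buffer, and a shared model is refit on the pooled experience. Everything here, the handle position, the noise range and the update rule, is invented.

```python
import random

shared_buffer = []           # the "central server"
model = {"handle_pos": 0.5}  # trivial stand-in for a neural network

def robot_episode(robot_id, true_handle=0.7):
    """One noisy attempt at the door; the outcome is sent to the server."""
    guess = model["handle_pos"] + random.uniform(-0.2, 0.2)
    success = abs(guess - true_handle) < 0.05
    shared_buffer.append((robot_id, guess, success))
    return success

def update_model():
    """Refit the shared model on every robot's pooled experience."""
    wins = [g for (_, g, ok) in shared_buffer if ok]
    if wins:  # move the shared estimate toward actions that worked
        model["handle_pos"] = sum(wins) / len(wins)

for step in range(200):
    for rid in range(4):      # four robots practising in parallel
        robot_episode(rid)
    update_model()

print("learned handle position:", round(model["handle_pos"], 3))
```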

As portrayed in two videos by Google, after 20 minutes of training the robotic arms fumbled around for the handle, though they ultimately managed to open the door. Within three hours, the robots could reach for the handle with ease, twist it and then pull the door open.

Google Training Robots to Construct Mental Models 

Another system they have been exploring could help robots follow commands to move objects around the home. Here Google has been training its robots to construct mental models of how things move in response to particular actions, by building up experience of where pixels end up on screen after an action is taken.

The robots share their experiences of nudging different objects around a table, which helps them forecast what would happen if they took a specific course of action. The researchers are also exploring methods for robots to learn from humans: Google's researchers guided robots to doors and demonstrated how to open them, and these actions were encoded in a deep neural network that transforms camera images into robot actions.

Tuesday, 29 March 2016

Microsoft's AI Bot


Microsoft’s Artificial Intelligence Chat Bot


Microsoft has developed a new artificial-intelligence chat bot which, it claims, will get smarter the more one talks to it. The bot, called Tay, was built by Microsoft Technology and Research together with the Bing team for the purpose of conducting research on conversational understanding. The Bing team had developed a related conversational bot, XiaoIce, for the Chinese market in 2014.

Microsoft executives dubbed XiaoIce 'Cortana's little sister', after the Redmond, Washington company's voice-activated Cortana personal assistant software. The real-world focus of the bot is to enable researchers to experiment with and learn about how people talk to each other. Microsoft says that the bot, which is available through Twitter as well as the messaging platforms Kik and GroupMe, plays the role of a millennial, has emojis in its vocabulary and is clearly aimed at 18-to-24-year-olds.

The bot has little practical function for users, but it offers three different methods of communication, and its website, tay.ai, boasts that the AI can talk through text, play games such as guessing the meaning of a string of emojis, and comment on photos sent to it.

Tay Designed to Engage & Entertain People


At the time of writing, the bot had accumulated around 3,500 followers on Twitter but had sent over 14,000 messages, responding to questions, statements and general abuse within a matter of seconds. The about section of Tay's website states that 'Tay is designed to engage and entertain people where they connect with each other online via casual and playful conversation'.

Tay works from public data along with editorial input created by staff, including comedians. Microsoft has said that 'public data that has been anonymised is the primary data source of Tay, and that data has been modelled, cleaned and filtered by the team creating Tay'. Beyond the bot's meme-tastic appeal, there is a serious side to the research behind the AI: making machines capable of communicating in a natural, human way is a major challenge for learning systems.

An Effort to Comprehend How Humans Speak


Google, too, recently updated its Inbox mail service to recommend answers to emails; the smart reply feature offers three possible responses suggested by Google's AI. As with Tay, Google says the more one uses smart replies, the better they get. If a user chooses to share the information, Tay tracks the user's nickname, gender, zip code, favourite food and relationship status.

Users can delete their profiles by submitting a request through the Tay.ai contact form. In the field of virtual assistants and chat bots, Facebook's M is also experimenting with using artificial intelligence to complete tasks. Though it is partly controlled by humans, the system is currently being conditioned to book restaurants and respond to some questions. The core of the service is an effort to comprehend how humans speak and the best way to respond to them.

Monday, 11 January 2016

A Learning Advance in Artificial Intelligence Rivals Human Abilities


Artificial Intelligence Surpassed Human Competences

Computer researchers recently reported that artificial intelligence had surpassed human abilities on a narrow set of vision-related tasks. The developments are remarkable because so-called machine-vision methods are becoming commonplace in many aspects of life, including car-safety systems that identify pedestrians and bicyclists, video game controls, Internet search and factory robots.

Researchers from the Massachusetts Institute of Technology, New York University and the University of Toronto reported a new kind of 'one shot' machine learning in the journal Science, in which a computer-vision program beat a group of humans at identifying handwritten characters based on a single example. The program can quickly learn the characters of a variety of languages and generalize from what it has learned.

The authors suggest that this ability parallels the way humans learn and understand concepts. The new approach, known as Bayesian Program Learning, or B.P.L., is unlike the present machine-learning technologies known as deep neural networks, which can be trained to recognize human speech, identify objects in images or detect types of behaviour, but only after exposure to large sets of examples.

Bayesian Approach

Though these networks are modelled on the behaviour of biological neurons, they do not yet learn the way humans do, quickly acquiring new concepts. In contrast, the new software program described in the Science article can recognize handwritten characters after 'seeing' only a few examples, or even a single one.
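B.P.L. itself induces small generative stroke programs, which is not reproduced here. As a much simpler baseline for the one-example regime, the sketch below classifies a query by its nearest neighbour among one stored example per character class; the data is random stand-in pixels rather than Omniglot.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 10, 28 * 28

# One reference image per class: the "single example" regime.
references = rng.random((n_classes, dim))

def classify(image):
    """Assign the image to the class whose single example is nearest."""
    dists = np.linalg.norm(references - image, axis=1)
    return int(dists.argmin())

query = references[3] + 0.05 * rng.random(dim)  # a noisy copy of class 3
print("predicted class:", classify(query))
```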

The researchers compared the capabilities of their Bayesian approach and of other programming models on five separate learning tasks involving a set of characters from a research data set known as Omniglot, which comprises 1,623 handwritten characters from 50 languages.

Both the images and the pen strokes needed to create the characters were captured. Joshua B. Tenenbaum, professor of cognitive science and computation at M.I.T. and one of the authors of the Science paper, commented that 'with all the progress in machine learning, it is amazing what one can do with lots of data and faster computers. But when one looks at children, it is amazing what they can learn from very little data; some comes from prior knowledge and some is built into the brain'.

Imagenet Large Scale Visual Recognition Challenge

Moreover, the organizers of an annual academic machine-vision competition reported gains in lowering the error rate of software for locating and classifying objects in digital images. Alexander Berg, an assistant professor of computer science at the University of North Carolina, Chapel Hill, stated that he was amazed by the rate of progress in the field.

The competition, known as the Imagenet Large Scale Visual Recognition Challenge, pits teams of researchers at government, academic and corporate laboratories against one another to design programs that classify and detect objects. This year it was won by a group of researchers at the Microsoft Research laboratory in Beijing.

Tuesday, 22 December 2015

Facebook’s Artificial-Intelligence Software Gets a Dash More Common Sense


Artificial-Intelligence Researchers – Learning Basic Physical Common Sense


Artificial-intelligence researchers have undertaken a project to discover how computers can learn basic physical common sense. Comprehending, for instance, that unsupported objects fall, or that a large object does not fit inside a smaller one, is the main way humans predict, communicate and explain things about the world.

Facebook's chief technology officer, Mike Schroepfer, states that if machines are to be more useful, they will need the same kind of commonsense understanding. At a recent preview of results he would share at the Web Summit in Dublin, Ireland, he said that we have got to teach computer systems to comprehend the world in the same way. Human beings learn the basic physics of reality at a young age, by observing the world.

Facebook drew on its image-processing software to create a technique that learned to predict whether a stack of virtual blocks would tumble. The software learns by examining images of virtual stacks, at times as two stereo images, like those formed by a pair of eyes.

Crafting Software to Comprehend Images and Language with Deep Learning


In the learning phase, the software was shown many different stacks, some of which toppled while others did not, and the simulation showed it the result each time. After adequate examples, it was capable of predicting for itself, with 90% accuracy, whether a given stack would tumble. Schroepfer comments that if one ran it through a series of tests, it would beat most people.
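Facebook's architecture and training data are not spelled out here; as a hedged sketch of the idea, the snippet below defines a small convolutional network that maps a rendered stack image to a topple probability and runs one illustrative training step on random stand-in data.

```python
import torch
import torch.nn as nn

class StabilityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # probability that the stack topples

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

net = StabilityNet()
imgs = torch.rand(8, 3, 64, 64)          # stand-in renders of block stacks
labels = torch.randint(0, 2, (8, 1)).float()

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss = nn.BCELoss()(net(imgs), labels)   # one illustrative training step
loss.backward()
opt.step()
print("loss:", loss.item())
```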

The research was done by Facebook's artificial-intelligence research group in New York, which concentrates on crafting software that can comprehend images as well as language using the technique of deep learning. Recently the group also showed off a mobile app with the potential to answer queries about the content of photos.

The group's director, Yann LeCun, who is also a professor at NYU, told MIT Technology Review that the system for predicting when blocks would topple indicates that more complex physical simulations could be used to teach additional basic principles of physical common sense. He added that it serves to create a baseline for whether systems trained unsupervised would have adequate power to figure things like that out.

Memory Network


His group earlier created a system known as a 'memory network', which could pick up some basic common sense and verbal reasoning abilities by reading simple stories. It has since progressed to help power a virtual assistant, known as M, that Facebook is testing. M is more capable than Apple's Siri or similar apps because it is backed by a bank of human operators.

However, Facebook expects the operators to become steadily less important as the software learns to field queries for itself. Schroepfer says that adding the memory network to M is showing how that could happen: by observing the interactions between people using M and the responses of customer service, it has already learned how to manage some common queries.

Facebook has not committed to turning M into a widely available product, but Schroepfer states that the results indicate how it could be possible. He adds that the system figured this out by observing humans, and that while they cannot afford to hire operators for the entire world, with the right AI system they could offer the service to the whole planet.

Wednesday, 23 September 2015

The Search for a Thinking Machine


New Age of Machine Learning


Some experts believe that by 2050 machines will have reached human-level intelligence. In a new age of machine learning, computers have started assimilating information from raw data in much the same way the human infant learns from the world around it.

This means we are heading towards machines that, for instance, teach themselves to play computer games and get reasonably good at them, and devices that begin to communicate like humans, such as the voice assistants on smartphones. Computers have started comprehending the world beyond bits and bytes. Fei-Fei Li has spent the last 15 years teaching computers how to see.

First as a PhD student and later as director of the computer vision lab at Stanford University, she has pursued the difficult goal of creating electronic eyes for robots and machines, so that they can see and understand their environment. Half of all human brainpower goes into visual processing, though it is something we all do without much effort.

In a talk at the 2015 Technology, Entertainment and Design (TED) conference, Ms Li drew a comparison: 'a child learns to see, especially in the early years, without being taught, but learns through real-world experiences and examples'.

Crowdsourcing Platforms – Amazon’s Mechanical Turk


She adds: 'If you consider a child's eyes as a pair of biological cameras, they take one image about every 200 milliseconds, which is the average time an eye movement is made. By the age of three, the child will have seen hundreds of millions of images of the real world. That is a lot of training examples.'

Hence she decided to teach computers in the same way. Instead of focusing solely on improved algorithms, her insight was to give the algorithms the kind of training data a child receives through experience, in both quantity and quality. In 2007, Ms Li and her colleagues set about the enormous task of sorting and labelling a billion diverse, random images from the internet to provide real-world examples for the computer.

The theory was that if the machine saw enough images of something, it would be able to recognize it in real life. Crowdsourcing platforms such as Amazon's Mechanical Turk were used, calling on 50,000 workers from 167 countries to help label millions of random images of cats, planes and people.

ImageNet – Database of 15 Million Images


Ultimately they built ImageNet, a database of around 15 million images across 22,000 classes of objects, organised by everyday English words. It has become a resource used all over the world by research scientists attempting to give vision to computers.

To teach the computer to recognize images, Ms Li and her team used neural networks, computer programs assembled from artificial brain cells that learn and behave in much the same way as human brains.

At Stanford, the image-reading machine now writes accurate captions for a whole range of images, though it still gets things wrong; an image of a baby holding a toothbrush, for instance, was wrongly labelled 'a young boy is holding a baseball bat'.

For now, machines are learning rather than thinking. Whether a machine could ever be programmed to think is doubtful, considering that the nature of human thought has escaped scientists and philosophers for ages.

Thursday, 17 September 2015

Misuse of Artificial Intelligence 'Could Do Harm'


Google’s Head – Demis Hassabis – Artificial Intelligence, Powerful Technology


The head of Google's £400m machine learning business, Demis Hassabis, has called for a responsible debate about the role of ethics in the creation of artificial intelligence. The technology is so powerful that in the near future computers may be able to advise on the best way to treat patients, handle climate change or even feed the poor.

With this potential power comes a big responsibility. Mr Hassabis has stated that artificial intelligence is like any new powerful technology: it needs to be used in a responsible manner, and if misused it could cause much harm.

He says everyone needs to be aware of that, and that those developing it, the companies and universities, must realise and take their responsibilities seriously, keeping proper concern at the front of their minds. He adds that they engage actively with the artificial intelligence communities at MIT, Cambridge and Oxford; many academic institutes are thinking about these questions, and they interact with them actively and openly in their research.

Artificial Intelligence – Science of Making Machines Smart


Mr Hassabis believes there are valid concerns that need to be discussed and debated now, decades before anything emerges that could actually have worrying consequences or power, so that the answers are in place well in advance.

He was responding to apprehensions about the development of artificial intelligence raised by, among others, the technology entrepreneur and DeepMind investor Elon Musk and Professor Stephen Hawking. Prof Hawking had told the BBC's Rory Cellan-Jones that full artificial intelligence could spell the end of mankind.

Hassabis' work focuses on learning machines capable of scrutinising large amounts of data and supporting human understanding of the exponential rise of digitised information. He says that artificial intelligence is the science of making machines smart: if we can fill machines with intelligence, they will be able to help us as a society to solve all types of big problems, from disease and healthcare to big issues like climate change and physics, where the ability of machines to understand and draw insights from large amounts of data could benefit human scientists and doctors.

Computers Unable to Copy Human Behaviour/Way of Thinking


Mr Hassabis adds that computers cannot yet copy human behaviour or overtake the human way of thinking. London is doing well in artificial intelligence: DeepMind, located in King's Cross, has grown into a 150-strong company of computer scientists and mathematicians.

Mr Hassabis has advised the UK not to squander its leading position in the developing sector, commenting that they are proud to be a UK company. Though owned by Google, their whole operation is in the UK, and Cambridge, Oxford, University College London and Imperial all have very strong machine learning departments.

This is something the UK is strong in, he says, and he thinks it is a great UK success story. Unlike at the beginning of the computer age, when Silicon Valley ended up with all the innovation and reaped the commercial benefits, they will now ensure that the UK stays at the forefront of what will be an incredibly important technology in the forthcoming 10 or 20 years.

Wednesday, 12 August 2015

Deep Neural Nets Can Now Recognize Your Face in Thermal Images


Neural Network – Connecting Mid- or Far-Infrared Images

Cross-modal matching of faces between the thermal and visible ranges is a desired capability, especially for night-time surveillance and security applications. Owing to the huge modality gap, thermal-to-visible face recognition is one of the most challenging face-matching problems.

Recently Saquib Sarfraz and Rainer Stiefelhagen at the Karlsruhe Institute of Technology in Germany worked out, for the first time, a way of connecting a mid- or far-infrared image of a face with its visible-light counterpart, a trick they achieved by teaching a neural network to do the task. Matching an infrared image of a face to its visible-light counterpart is not easy work, but it is the kind of problem deep neural networks are beginning to crack.

The issue with infrared surveillance videos or infrared CCTV images is that it can be difficult to recognise individuals, because faces look different in infrared. Matching those images to a person's usual appearance is therefore an important but uncertain exercise. The problem is that the relation between the way one looks in infrared and in visible light is highly nonlinear. This is especially complicated for footage taken in the mid- and far-infrared, which uses passive sensors that detect emitted light rather than the reflected range.

Visible Light Images – High Resolution / Infrared Images – Low Resolution

The way a face emits infrared light is completely different from the way it reflects it: the emissions differ with the temperature of the air as well as that of the skin, which in turn depends on the individual's activity level and on whether, say, they have a fever. Another issue that makes comparison difficult is that visible-light images can be high resolution, while far-infrared images tend to have much lower resolution, owing to the nature of the cameras that capture them.

Together, these factors make it difficult to match an infrared face with its visible-light corresponding image. The recent success of deep neural networks in overcoming all kinds of difficult problems gave Sarfraz and Stiefelhagen their idea: they speculated on training a network to recognize visible-light faces by looking at infrared ones. Two major factors have recently combined to make neural networks so powerful.

Better Understanding/Availability of Interpreted Datasets

The first is a better understanding of how to build and tweak the networks for their tasks, a process that led to the development of so-called deep neural nets and something Sarfraz and Stiefelhagen could learn from others' work. The second is the availability of large annotated datasets that can be used to train these networks.

For instance, accurate computerized face recognition became possible thanks to the creation of massive banks of images in which people's faces have been located and identified by observers through crowdsourcing services like Amazon's Mechanical Turk. Such data sets are much harder to come by for infrared-to-visible-light evaluations.

Nevertheless, Sarfraz and Stiefelhagen handled this issue with a data set created at the University of Notre Dame, comprising 4,585 images of 82 individuals taken either in visible light at a resolution of 1600 x 1200 pixels or in the far infrared at 312 x 239 pixels. The data includes images of individuals laughing, smiling and wearing neutral expressions, taken in various sessions to capture how their appearance changes from day to day, and in two different lighting conditions.

Fast/Capable of Running in Real Time

Each image was then divided into sets of overlapping patches 20 x 20 pixels in size, to dramatically increase the size of the database. Sarfraz and Stiefelhagen then used the images of the first 41 individuals to train their neural net, and the images of the other 41 people for testing. The outcome is interesting.
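As a small sketch of that patch-augmentation step, the snippet below slices a frame into overlapping 20 x 20 patches; the stride and the stand-in image are assumptions, since the paper's exact overlap is not given here.

```python
import numpy as np

def overlapping_patches(img, size=20, stride=10):
    """Return an array of size x size patches taken every `stride` pixels."""
    h, w = img.shape[:2]
    patches = [
        img[y:y + size, x:x + size]
        for y in range(0, h - size + 1, stride)
        for x in range(0, w - size + 1, stride)
    ]
    return np.stack(patches)

thermal = np.random.rand(239, 312)         # stand-in far-infrared frame
print(overlapping_patches(thermal).shape)  # many 20x20 training patches
```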

Sarfraz and Stiefelhagen commented that 'the presented approach improves the state-of-the-art by more than 10 percent'. The net can now match a thermal image to its visible counterpart in a mere 35 milliseconds, which, they add, 'is very fast as well as capable of running in real time at ~28 fps'. Though by no means flawless, at best its precision is over 80 percent when it has an extensive array of visible images to compare against the thermal image.

One-to-one comparison accuracy is only 55 percent. Improved accuracy should be possible with larger datasets and a more powerful network; of the two, creating a data set larger by an order of magnitude will be the more difficult job.

However, it is not hard to imagine this type of database being created rather quickly, given that the interested parties are likely to be the military, law enforcement agencies and governments, which tend to have deep pockets when it comes to security-related technology.

Tuesday, 11 August 2015

Chinese Carmaker is Testing Car-to-Car Communications


Chinese Testing Technology

One of the leading carmakers in China has been testing technology that could prevent accidents and reduce congestion by equipping vehicles and traffic signals with wireless communication. Though no standard for the technology has surfaced in China so far, representatives at the company say it could introduce some form of car-to-car communication by 2018, ahead of several U.S. automakers.

Changan, a state-owned car manufacturer based in Chongqing in central China, has been testing vehicle-to-vehicle (V2V) as well as vehicle-to-infrastructure (V2I) technology at its U.S. R&D centre in Plymouth, Michigan. The company does not sell vehicles in the U.S. and has stated that it has no plans to enter the U.S. market.

However, testing car-to-car technology at its U.S. centre indicates that it envisages a future for it in its home country. Car-to-car technology has been promoted in the U.S. and Europe as a cost-effective way of helping vehicles avoid crashes and of controlling traffic flow efficiently.

Technology to Be Introduced in High-End Cadillac in 2017

Equipped vehicles broadcast useful information, including location, direction of travel and speed, and on-board computers in each car can use that information to identify an approaching crash and issue a warning. Some companies are also making headway with custom communication systems that let commercial vehicles travel in highly efficient high-speed convoys.
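As a toy illustration of that broadcast-and-warn loop, the sketch below defines a basic safety message carrying position, heading and speed, and checks two straight-line trajectories for a near-term conflict. The message fields and the five-second threshold are invented, not any V2V standard.

```python
from dataclasses import dataclass
import math

@dataclass
class SafetyMessage:
    car_id: str
    x: float        # metres, east
    y: float        # metres, north
    heading: float  # radians
    speed: float    # m/s

def time_to_closest_approach(a: SafetyMessage, b: SafetyMessage) -> float:
    """Rough time until the two cars are nearest, assuming straight paths."""
    rx, ry = b.x - a.x, b.y - a.y
    vx = b.speed * math.cos(b.heading) - a.speed * math.cos(a.heading)
    vy = b.speed * math.sin(b.heading) - a.speed * math.sin(a.heading)
    v2 = vx * vx + vy * vy
    return 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)

me = SafetyMessage("A", 0, 0, 0.0, 15)           # heading east at 15 m/s
other = SafetyMessage("B", 120, 0, math.pi, 15)  # approaching head-on
t = time_to_closest_approach(me, other)
if t < 5.0:
    print(f"warning: possible conflict in {t:.1f} s")
```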

After a successful test of the technology involving thousands of cars around Ann Arbor, Michigan, the U.S. Department of Transportation is expected to issue specifications for the technology later this year. The technology is set to be introduced in a high-end Cadillac in 2017 and would eventually be mandated for new cars in the U.S.

The scenario is less clear in China, where the government is studying vehicle-to-vehicle technology but has not yet given any clues about when it might be implemented.

Will Take Time to Become Universal

A ride was organised around Ann Arbor in one of Changan's cars, a small SUV known as the CS35, fitted with vehicle-to-infrastructure technology: a wireless transmitter and receiver connected to an Android tablet attached to the dashboard. When another car equipped with the technology approached along a blind crossing, a warning flashed; another warning came when the car travelled around a sharp bend too quickly.

The challenge with car-to-car technology is that it will take time to become universal. Though China is the largest auto market in the world, per capita car ownership is still lower there than in the U.S., Japan or Europe, and China also lags behind the U.S., Europe and Japan in developing the technology. John Helveston, a PhD student at Carnegie Mellon University studying the adoption of electric vehicles in China, has stated that the foreign carmakers that control the Chinese market favour selling older technology there, and that even if domestic carmakers were interested in car-to-car systems, the systems would hold little interest if only five out of every 100 cars could communicate with each other.

Thursday, 6 August 2015

Is Artificial Intelligence The Next Step In Advertising?


First Ever Artificially Intelligent Poster Campaign

Artificial intelligence, for some, is a step into science fiction. This variety of artificial intelligence will probably not bring about the ruin of humanity, but it could be used to shape how advertising is created and targeted. A partnership of M&C Saatchi, Clear Channel and Posterscope recently disclosed what they dubbed 'the world's first ever artificially intelligent poster campaign'.

M&C Saatchi's chief innovation officer, David Cox, defined its significance: it is the first time a poster has been let loose to write itself entirely, based on what works, instead of just what a person thinks could work. The basic premise is that the poster, which is built around a fictional rather than an ordinary coffee brand, 'Bahio', reads the reactions of its audience and adjusts itself accordingly.

Each generation of 22 ads is created from an initial 'gene pool' of pictures and copy, with the poster evaluating the success of each ad; a successful ad moves into the next gene pool and becomes part of the next generation, while unsuccessful ones are removed. Cox states: 'It's a Darwinian algorithm. It will evolve to be more and more effective, and so we are hoping to see fewer outcomes emerging over time.'
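The campaign's actual algorithm is not published here; as a hedged sketch of the 'Darwinian' loop, the snippet below evolves a population of ad variants where the audience reaction is simulated by an invented scoring function, with the 22-ad and 70-generation figures borrowed from the article.

```python
import random

HEADLINES = ["Wake up with Bahio", "Bahio. Rich. Smooth.", "Your morning, Bahio"]
IMAGES = ["heart", "cup", "beans", "sunrise"]

def random_ad():
    return (random.choice(HEADLINES), random.choice(IMAGES))

def audience_score(ad):
    """Stand-in for real reaction tracking; pretend hearts and short copy win."""
    headline, image = ad
    return (2.0 if image == "heart" else 0.0) + 1.0 / len(headline) + random.random()

population = [random_ad() for _ in range(22)]   # 22 ads per generation
for generation in range(70):                    # 70 generations, as in the trial
    ranked = sorted(population, key=audience_score, reverse=True)
    survivors = ranked[: len(ranked) // 2]      # unsuccessful ads are removed
    children = [                                # recombine surviving material
        (random.choice(survivors)[0], random.choice(survivors)[1])
        for _ in range(len(population) - len(survivors))
    ]
    population = survivors + children

print("best ad after evolution:", population[0])
```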

Big Ideas for AI Posters – Future of Out-of-Home Advertising

By July 20, 1,540 ads had been automatically generated over 70 generations, and the initial results indicated that shorter copy was more popular, along with heart images, which appeared frequently at the time of viewing. The poster, which uses Kinect to work out who is standing in its vicinity, can assess 12 people at a time. While reactions are being taken into account, the audience does not see immediate alterations.

The main part of the concept is to capture the movements of consumers; across the initial trial runs, the ad tracked over 42,000 interactions. On the evidence of the early viewings, art directors and copywriters can rest easy.

Cox admits that it is not yet writing the best ad in the world, and that it produces a lot of weird copy, but where it will be in ten years is anyone's guess. Sarah Speake, CMO of Clear Channel, has big ideas for what AI posters could mean for the future of out-of-home advertising.

AI – Ability to Perceive and Understand Meaning

Speake says they are `pushing the boundaries of where creativity meets technology in a very scalable way, with a level of engagement that has not been possible until now', and that this symbolises where the industry is heading.

Part of the reason AI has proved so controversial and so elusive is that the concept of intelligence itself is hard to define, and a working definition is difficult to pin down. In the broadest sense, though, intelligence can be taken to mean the ability to perceive and understand meaning.

David Chalmers, a philosopher at the Australian National University, puts it this way: `intelligence is external performance; in a sense, it is about sophisticated behaviour'. On that definition, it remains an open question whether computers are anywhere near achieving an artificial version of such intelligence.

Smart Mirror Monitors Your Face For Tell-Tale Signs Of Disease


Mirror to Assess Health of Individual

The latest is a new mirror that can assess the health of anyone who looks into it, analysing facial expressions, fatty tissue and how pale or flushed a person is. The gadget is aimed at flagging cardiovascular conditions such as stroke and heart disease, which are among the leading causes of death across the globe.

The researchers believe that spotting the early signs of these conditions is the best medicine for reducing the healthcare costs of treating such chronic diseases. The Wize Mirror looks like an ordinary mirror, but it incorporates 3D scanners, multispectral cameras and gas sensors to assess the health of the person looking into it.

It includes facial recognition software that looks for tell-tale signs of stress or anxiety. Gas sensors collect samples of the user's breath and check for certain compounds; a three-dimensional scanner analyses face shape for weight gain or loss; and multispectral cameras estimate heart rate and haemoglobin levels.

Technology Developed by Consortium of Researchers

The software takes about a minute to analyse the data, after which it produces a score telling users how healthy they are and displays personalised advice on how to improve.
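As a rough illustration of how readings from several sensors might be folded into a single score, here is a hypothetical sketch; the feature names, `normal' ranges and weighting are invented for the example and are not the Wize Mirror's actual model.

```python
# A hypothetical sketch of combining sensor readings into one wellness
# score; ranges and weights are invented, not the Wize Mirror's model.
NORMAL_RANGES = {            # (low, high) bounds assumed healthy
    "heart_rate_bpm": (55, 85),
    "haemoglobin_g_dl": (12.0, 17.5),
    "breath_compound_ppm": (0.0, 1.2),
    "stress_index": (0.0, 0.4),
}

def penalty(value, low, high):
    """0 inside the normal range, growing linearly outside it."""
    if value < low:
        return (low - value) / (high - low)
    if value > high:
        return (value - high) / (high - low)
    return 0.0

def wellness_score(readings: dict) -> float:
    """Map sensor readings to a 0-100 score (100 = all readings normal)."""
    total = sum(penalty(readings[k], *NORMAL_RANGES[k]) for k in NORMAL_RANGES)
    return max(0.0, 100.0 - 25.0 * total)

print(wellness_score({"heart_rate_bpm": 72, "haemoglobin_g_dl": 14.1,
                      "breath_compound_ppm": 0.8, "stress_index": 0.3}))
```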

The technology has been developed, with EU funding, by a consortium of researchers and industry partners from seven European Union countries. Sara Colantonio and colleagues from the National Research Council of Italy are coordinating the project; they want to use the Wize Mirror to address common long-term conditions that are hard to treat, and see the device as a tool that could help tackle heart disease and diabetes.

Clinical trials of the device are set to begin next year at three sites in France and Italy, with the aim of comparing its readings against those from traditional medical devices. As the researchers put it, `prevention is the most viable approach to reducing the socio-economic burden of chronic and widespread diseases such as cardiovascular and metabolic diseases'.

Accurate Health Assessment in Natural Settings – Challenging

Consumer technology that can interpret signals from the body to gauge underlying physical and mental health is on the verge of becoming part of daily life. Cardiio, for instance, originally created at the Massachusetts Institute of Technology, is an app that uses a smartphone's camera to monitor blood levels in the face and derive the heart rate.
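The general technique behind such apps is often called remote photoplethysmography: the average skin brightness of the face fluctuates slightly with each heartbeat, so the dominant frequency of that brightness signal approximates the pulse. Below is a generic sketch of the idea, not Cardiio's actual implementation.

```python
# A generic remote-photoplethysmography sketch: recover pulse rate from
# per-frame face brightness. Not Cardiio's implementation.
import numpy as np

def estimate_heart_rate(green_means: np.ndarray, fps: float) -> float:
    """green_means: mean green-channel value of the face region per frame."""
    signal = green_means - green_means.mean()          # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)  # Hz per bin
    band = (freqs >= 0.7) & (freqs <= 4.0)             # 42-240 bpm range
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                                 # Hz -> beats/minute

# Synthetic 30-second clip at 30 fps with a 72 bpm pulse plus noise:
fps, bpm = 30.0, 72.0
t = np.arange(0, 30, 1 / fps)
frames = 0.5 * np.sin(2 * np.pi * (bpm / 60) * t) + np.random.normal(0, 0.2, t.size)
print(round(estimate_heart_rate(frames, fps)))  # ~72
```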

At MIT's Media Lab, Javier Hernandez has also looked at using mirrors for health monitoring, and developed a program known as SenseGlass that uses Google Glass together with other wearables to measure a person's mood and help them manage their emotions.

Hernandez notes that while mirrors are well suited to health monitoring, since we use them daily, exploiting them this way is more complex than it sounds. In his words, `accurate health assessment in natural settings is quite challenging due to several factors such as illumination changes, occlusions and excessive motion'.

Tuesday, 4 August 2015

Ban Killer Robots Before They Take Over, Stephen Hawking & Elon Musk Say


UN Urged to Ban AI Based Autonomous Weapons

An open letter has urged the United Nations to ban the development of autonomous weapons based on artificial intelligence, already derided as `killer robots'. Signed by billionaire entrepreneur Elon Musk, physicist Stephen Hawking and several other tech luminaries, the letter warns of the dangers of a global arms race in AI technology and calls on the UN to support a ban on weapons over which humans have no meaningful control.

The letter, issued by the Future of Life Institute, was presented on 27 July at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. On autonomous weapons, it reads: `The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow'.

Risks Greater Than Those Posed by Nuclear Weapons

According to the signatories, the risks could be far greater than those posed by nuclear weapons. From self-driving cars onwards, more and more of humanity's tasks are being taken over by robots, and the seemingly unstoppable rise of the machines has prompted both utopian and apocalyptic visions of the future.

Artificial intelligence researchers have in fact voiced concerns about the direction of innovation in the field, particularly around autonomous AI weapons such as drones that could seek out and kill people using face recognition algorithms.

The writers of the letter argue that the technology could be here within a few years. While autonomous drones might limit battlefield casualties, the letter says, they could also lower the threshold for initiating conflicts in the first place. Moreover, such weapons could end up in the hands of practically every military power in the world, since AI-based killing machines would not be especially costly or require hard-to-obtain resources.

AI – Holds Great Dangers

Nor would it be long, the scientists warn, before assassins, terrorists and other bad actors could buy them on the black market and put them to despicable use. The letter further states that `autonomous weapons are ideal for tasks like assassination, destabilizing nations, subduing populations and selectively killing a particular ethnic group', and that a military AI arms race would not be beneficial for humanity.

This is not the first time science and tech luminaries have raised concerns about the dangers of artificial intelligence. In 2014, Hawking warned that the development of full artificial intelligence could spell the end of the human race. He and Musk also signed a letter from the same organization in January cautioning that AI holds great dangers unless humanity can ensure that AI systems `will do what we want them to'.

Thursday, 30 July 2015

AI Triumphs at Pictionary-Like Sketch Recognition Task


Sketch-a-Net – Sketch Recognition Task

Software known as Sketch-a-Net, developed by researchers in London, accurately identifies samples of hand drawings, and the artificial intelligence system has beaten humans at a sketch recognition task.

The researchers suggest their program could be adapted to help police match drawings of suspects involved in crimes, though computing experts caution that a lot of work remains to be done. Earlier sketch recognition attempts treated drawings as finished works, extracting specific features and classifying them much as photographs are analysed.

Sketch-a-Net, by contrast, can exploit information about the order in which the hand strokes were made. When a drawing is produced on a computer, the resulting data records when and where each line was drawn, and the team at Queen Mary University took advantage of this extra information.

Dr Timothy Hospedales, of the university's Computer Science department, explains: `Normal computer vision image recognition looks at all of the pixels in parallel, but there is some additional information offered by the sequence, and there is some regularity in how people do it'.

AI Software Achieved Score of 42.5% Accuracy

He told the BBC that with an alarm clock, for instance, people usually start by drawing the outline of the device before adding the hands and then dashes to represent the hours. Different shapes, he added, have `different ordering, and that is what the network learns to discover'.
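One plausible way to expose stroke order to a network, sketched below under simplifying assumptions, is to render the accumulating drawing at several points in time and stack the renderings as image channels, so the alarm clock's outline appears in every channel while the hands and hour marks appear only in the later ones. The rasteriser and channel scheme here are simplified stand-ins, not the published Sketch-a-Net code.

```python
# A simplified sketch of encoding stroke order as image channels; the
# rasteriser and channel scheme are illustrative, not the paper's code.
import numpy as np

def rasterise(strokes, size=64):
    """strokes: list of [(x, y), ...] polylines with coords in [0, 1)."""
    img = np.zeros((size, size), dtype=np.float32)
    for stroke in strokes:
        for (x0, y0), (x1, y1) in zip(stroke, stroke[1:]):
            for t in np.linspace(0.0, 1.0, size):   # crude line sampling
                x = int((x0 + t * (x1 - x0)) * size)
                y = int((y0 + t * (y1 - y0)) * size)
                img[y, x] = 1.0
    return img

def stroke_order_tensor(strokes, n_channels=3, size=64):
    """Channel i shows the first (i+1)/n_channels of the strokes."""
    cuts = [max(1, round(len(strokes) * (i + 1) / n_channels))
            for i in range(n_channels)]
    return np.stack([rasterise(strokes[:c], size) for c in cuts])

clock = [[(0.2, 0.2), (0.8, 0.2), (0.8, 0.8), (0.2, 0.8), (0.2, 0.2)],  # outline first
         [(0.5, 0.5), (0.5, 0.3)],                                      # then the hands
         [(0.5, 0.5), (0.65, 0.5)]]
print(stroke_order_tensor(clock).shape)  # (3, 64, 64)
```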

The drawings in the test came from a collection of around 20,000 sketches known as the TU-Berlin dataset, which has been used in earlier image recognition work. On this library, Sketch-a-Net showed a clear edge in picking out some of the drawings' finer details.

For instance, it was better than humans at matching drawings of birds to the finer-grained descriptions `seagull', `flying bird', `standing bird' and `pigeon'. Dr Hospedales says the system has been described as capable of solving the game Pictionary, which he considers a fair characterisation. On that task the AI software achieved 42.5% accuracy, compared with the volunteers' 24.8%.

Used in Matching Sketches of Suspects

The researchers suggest their software could be used to match sketches of suspects. Prof Alan Woodward, of the University of Surrey's Computing Department, describes the research as `promising' but says it could be some time before its potential is realised.

He adds: `Neural nets have proved extremely successful in the past as the foundation for recognition and classification systems. But this latest application is obviously at an early stage, and quite a lot of development and testing will be needed before we see it emerge in a real-world application.

He thinks it is one of several areas in which we will see people using AI to improve on human abilities'. The peer-reviewed research will be discussed further in September at the British Machine Vision Conference.

Wednesday, 15 July 2015

Google Has Set Its Terrifying, Dreaming Image Robots on the Public


Google's Image-Recognizing Robots


Google's software engineers recently revealed the results of an experiment exploring how computers identify and understand objects, animals and people in images. The company has now opened its image-recognizing robots to everyone, letting users create strange and sometimes horrifying pictures from their own photos, having earlier released the half-horrifying, half-amazing images the system produced.

The company has made the `Deep Dream' software available on the code-sharing website GitHub, where users can download it and run their own pictures through it. The software works by turning image-recognising networks back on themselves, prompting the system to over-interpret an image: it picks out patterns that would otherwise be meaningless and exaggerates them, turning clouds into bizarre llamas, for instance.

Run on Google's own images, it tends to transform things into animals, with dogs and eyes as favourites, and can overlay everything with a swirly rainbow colouring. Google has said the technology could help us understand where human creativity comes from, a claim now being put to the test.

The Deep Dream System


The Deep Dream system feeds an image through layers of artificial neurons, asking the AI to enhance and build on features it detects, such as edges. Over successive iterations the picture becomes increasingly distorted, morphing into something completely different or simply a cluster of colourful random noise.
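The core trick can be sketched in a few lines: run an image partway through a trained network, then adjust the image by gradient ascent so that the activations it produces grow stronger. The sketch below uses PyTorch and a pretrained GoogLeNet from torchvision rather than Google's original Caffe release; the layer cut-off and step size are illustrative choices, not Google's published settings.

```python
# A minimal gradient-ascent sketch in the spirit of Deep Dream, using
# PyTorch/torchvision; layer choice and step size are illustrative.
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)   # we optimise the image, not the weights

def deep_dream_step(img: torch.Tensor, n_layers: int = 8, lr: float = 0.02):
    """One ascent step: nudge the image to amplify mid-layer activations."""
    img = img.clone().requires_grad_(True)
    x = img
    for layer in list(model.children())[:n_layers]:  # truncate the network
        x = layer(x)
    loss = x.norm()        # `over-interpret': maximise activation magnitude
    loss.backward()
    with torch.no_grad():
        img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()

img = torch.rand(1, 3, 224, 224)   # start from noise (or a photo tensor)
for _ in range(20):
    img = deep_dream_step(img)
```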

With the code made available, users can upload an image of their choice and watch it metamorphose into something surreal. Fed with pictures, Google's image recognition software lets the artificial neural network see shapes in the images and produce strange, fantastical results that could be likened to psychedelic art.

With immense interest generated by the published AI research and the accompanying images, Google decided to make the algorithm's code available to the public. The source code needs to be hosted on a site, and some developers, such as Psychic VR Lab and Deep Neural Net Dreams, have done so; users can now upload a picture to these sites and run it through the algorithm to create images of their own.

Artificially Intelligent Neural Network of Google


Google's artificially intelligent neural network comprises 10 to 30 stacked layers of artificial neurons. Each layer examines the image, detects particular aspects such as a corner or a shape, and passes its findings on to the next layer, until the final layer formulates an answer.

At times the network decides to interpret mild images such as clouds or faces as animals, layering unusual effects over the pictures: creepy eyes staring back at the viewer, fantastic dog heads merged into objects, and animals with striking embellishments. The engineers wrote that `the techniques presented help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture and check what the network has learned during training. It also makes us wonder whether neural networks could become a tool for artists, a new way to remix visual concepts, or perhaps even shed a little light on the roots of the creative process in general'.