
Saturday 7 May 2022

Smart Screws That Transmit Data Wirelessly

Screws are found in many places: on cranes, scaffolding, high-rise buildings, bridges, wind turbines, machines, and more. Wear and tear, temperature fluctuations, and vibrations can loosen one or more of them, with potentially fatal consequences. That is why safety-critical structures need regular inspection.

A research team from the Fraunhofer Cluster of Excellence Cognitive Internet Technologies CCIT has developed a solution called the "Smart Screw Connection." Fitted with sensors and radio technology, it lets you monitor screw connections remotely.

What are Smart Screws?

A smart screw is a screw fitted with sensors and radio technology so that its connection can be monitored remotely. It is intended for safety-critical structures such as wind turbines, machines, and bridges.

Features That Make Smart Screws Special: 

 

DiaForce Thin Film:

The smart screw connection is fitted with a washer carrying a piezoresistive DiaForce® thin film. Pressure-sensitive sensors register the preload force at three points while the screw is tightened. Any change in the preload force alters the electrical resistance of the DiaForce® thin film.

What Does Dr. Peter Spies Say?

Dr. Peter Spies is Group Manager of Integrated Energy Supplies and a project manager at the Fraunhofer Institute for Integrated Circuits IIS. He explains that if a screw comes loose, the resistance changes and a radio module registers it. The radio module, mounted on the screw head, then sends the data to a base station, which gathers the readings from all the screws on the monitored object.
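To make the monitoring idea concrete, here is a minimal, hypothetical Python sketch of how a base station might flag loosened screws from preload-force readings. The screw IDs, reference values, threshold, and reading format are illustrative assumptions, not details of the Fraunhofer system.

```python
# Hypothetical sketch: flag screws whose reported preload force has dropped
# below a fraction of the value recorded at installation time.
# Data format and threshold are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ScrewReading:
    screw_id: str            # ID assigned in the shielded programming box
    preload_force_kn: float  # preload force reported by the sensor washer


# Reference preload recorded for each screw when it was tightened (assumed values).
REFERENCE_PRELOAD_KN = {"bridge-girder-017": 120.0, "bridge-girder-018": 120.0}

# Alert if the preload drops below 90 % of its reference value (assumed policy).
ALERT_RATIO = 0.90


def check_readings(readings):
    """Return the IDs of screws whose preload suggests they are coming loose."""
    loose = []
    for r in readings:
        reference = REFERENCE_PRELOAD_KN.get(r.screw_id)
        if reference is not None and r.preload_force_kn < ALERT_RATIO * reference:
            loose.append(r.screw_id)
    return loose


if __name__ == "__main__":
    incoming = [
        ScrewReading("bridge-girder-017", 119.2),  # still tight
        ScrewReading("bridge-girder-018", 101.5),  # preload has dropped
    ]
    print("Needs inspection:", check_readings(incoming))  # ['bridge-girder-018']
```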

Reliable Data Transmission:

The DiaForce® thin film was developed by the Fraunhofer Institute for Surface Engineering and Thin Films IST, while Fraunhofer IIS contributes the mioty® low-power wide-area network (LPWAN) radio technology. mioty® transfers small data packets over long distances using minimal energy, and a single base station can collect data from over 100,000 sensors.

In some installations the base station sits at the edge of a wind farm, several hundred meters or even a few kilometers away. A software program then displays the status of each screw connection separately in a graphical overview. Depending on the configuration and application, the status data can be transmitted continuously.

Why Is It Important?

According to Spies, the remote monitoring system makes it possible for the first time to track the stability of safety-critical infrastructure continuously. It allows each screw connection to be checked individually, making it a valuable safety asset. He adds that engineers no longer need to be on-site to check every screw during a bridge or wind turbine inspection; the service station receives all the data via radio.

The technology can be used in many applications: flange connections in industry, the bolts in steel girders of high-rise buildings, the load-bearing parts of bridges, or the attachment of rotors to wind turbines.

The researchers have also addressed the system's energy demand in a resource-efficient way. The system follows the energy-harvesting principle, using heat or light to generate electricity. Here, a thermoelectric generator produces electricity from the small temperature difference between the screw head and its surroundings; alternatively, solar cells can be used. Thanks to energy harvesting, the system is self-powered.
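As a rough illustration of the energy-harvesting idea, the sketch below estimates the electrical power a small thermoelectric generator could deliver from a temperature difference, using the standard matched-load formula P ≈ (S·ΔT)²/(4R). The Seebeck coefficient, internal resistance, and temperature difference are made-up example values, not specifications of the Fraunhofer hardware.

```python
# Back-of-the-envelope estimate of thermoelectric harvesting power.
# P_max = (S * dT)^2 / (4 * R_internal) for a matched load.
# All numbers below are illustrative assumptions.

def teg_power_watts(seebeck_v_per_k: float, delta_t_k: float, r_internal_ohm: float) -> float:
    """Maximum power delivered to a matched load by a thermoelectric generator."""
    open_circuit_voltage = seebeck_v_per_k * delta_t_k
    return open_circuit_voltage ** 2 / (4.0 * r_internal_ohm)


if __name__ == "__main__":
    # Example: a small TEG module with S = 40 mV/K, R = 2 ohm, and a 2 K difference
    # between the screw head and its surroundings.
    p = teg_power_watts(seebeck_v_per_k=0.040, delta_t_k=2.0, r_internal_ohm=2.0)
    print(f"Harvested power: {p * 1e3:.2f} mW")  # ~0.80 mW, enough for a duty-cycled radio
```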

Protection from Hackers:

The Fraunhofer researchers also place great importance on security. During installation, each screw, together with its sensor unit and radio module, is placed in a shielded programming box. There it receives an ID via short-range RFID, along with its requirement profile and an individual encryption key. Data is then transmitted to the base station over an encrypted radio link.

According to Spies, this makes it possible to stop criminals or hackers from sabotaging the system, since technical staff depend on the integrity of the information. The project involves Fraunhofer IST, Fraunhofer IIS, the Fraunhofer Institute for Structural Durability and System Reliability LBF, and the Fraunhofer Institute for Applied and Integrated Security AISEC, and is run by the Fraunhofer Cluster of Excellence Cognitive Internet Technologies CCIT.

Advantages of Smart Screws:

The technology reduces the need for frequent manual inspections of large structures. Amusement parks, for example, employ teams of experts to inspect towering roller coasters and make sure nothing has come loose. For bridges the risk is far greater, and that is where this technology is needed most.

The Bottom Line:

Smart devices need power to operate, and no one wants to build a bridge with battery-powered screws. That is why the researchers turned to energy harvesting, chiefly the thermoelectric effect. Smart screw technology helps keep these vital structures safe.

Tuesday 2 July 2019

Artificial Intelligence: See What you Touch and Touch What you See

From infancy onward, touch is a kind of language for us. The ability to touch something and understand what we are touching is something we take for granted. A robot, by contrast, can be programmed to see or to touch, but not to connect the two. To bridge that gap, researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed an AI system that can learn to see by touching and to feel by seeing.

GelSight to Help with Seeing by Touching and Touching by Seeing: 


The CSAIL system works by creating tactile signals from visual inputs and predicting which object, and which part of it, is being touched. The researchers did this using a tactile sensor called GelSight, courtesy of another group at MIT.

How GelSight Works: 


The MIT team used a webcam to record nearly 200 objects being touched, not once or twice but nearly 12,000 times. The 12,000 videos were then broken down into static frames, which became part of a dataset known as "VisGel". VisGel includes more than 3 million paired visual and tactile images.
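The dataset construction described above, breaking touch videos into still frames, can be sketched roughly as below. This is not the actual VisGel pipeline; the file paths, sampling rate, and use of OpenCV are assumptions for illustration.

```python
# Rough sketch of turning recorded touch videos into still frames for a dataset.
# Paths and the sampling rate are illustrative assumptions.

import os
import cv2  # pip install opencv-python


def extract_frames(video_path: str, out_dir: str, every_n_frames: int = 30) -> int:
    """Save every n-th frame of a video as a PNG and return how many were written."""
    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    saved = index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{index:06d}.png"), frame)
            saved += 1
        index += 1
    capture.release()
    return saved


if __name__ == "__main__":
    # Hypothetical file name; any local video will do.
    n = extract_frames("touch_recordings/object_001_press_0001.mp4", "frames/object_001")
    print(f"wrote {n} frames")
```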

Using AI, the robot learns what it means to touch various objects and different parts of those objects. When allowed to touch things "blindly", it uses the dataset to work out what it is touching and identify the object. According to the researchers, this will greatly reduce the amount of data needed for grasping and manipulating objects.

The Work to Equip Robots with More Human-Like Attributes: 


MIT's 2016 project used deep learning to visually indicate sounds and to let a robot predict its responses to physical forces. Both projects, however, were based on datasets that do not help in guiding seeing by touching and vice versa.

So, as mentioned earlier, the team came up with the VisGel dataset plus one more ingredient: Generative Adversarial Networks, or GANs for short.

GANs use visual or tactile images to generate other possible images of the same object. A GAN consists of two parts, a "generator" and a "discriminator", that compete with each other: the generator produces images meant to pass as real-life objects, while the discriminator has to call the bluff. Whenever the discriminator catches a fake, the generator learns from it and raises its game.
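The generator/discriminator game described above can be illustrated with a toy GAN. The sketch below trains a tiny GAN in PyTorch on a synthetic one-dimensional Gaussian, which is far simpler than the image-to-touch networks used in the MIT work, but the adversarial training loop has the same shape.

```python
# Toy GAN: a generator learns to imitate samples from N(4, 1.25)
# while a discriminator tries to tell real samples from fakes.
# This is a minimal illustration of the adversarial setup, not the CSAIL model.

import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM = 8

generator = nn.Sequential(nn.Linear(LATENT_DIM, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()


def real_batch(n=64):
    return 4.0 + 1.25 * torch.randn(n, 1)  # the "real data" distribution


for step in range(3000):
    # --- train the discriminator to call the generator's bluff ---
    real = real_batch()
    fake = generator(torch.randn(64, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- train the generator to fool the discriminator ---
    fake = generator(torch.randn(64, LATENT_DIM))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

samples = generator(torch.randn(1000, LATENT_DIM))
print(f"generated mean ~ {samples.mean():.2f}, std ~ {samples.std():.2f}")  # should drift toward 4 and 1.25
```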

Learning to See by Touching: 


As humans we can look at things and know roughly how they would feel if we touched them. To get machines to do the same, the system first has to locate the likely point of contact and then work out how that exact location would feel when touched.

Reference images were used for this, allowing the system to see objects and their surroundings. The GelSight-equipped robot arm then recorded how various areas felt when touched, adding those readings to the dataset. Thanks to VisGel, the robot knew exactly what it was touching and where.

Sunday 1 July 2018

Google Duplex: Google's Eerie Human Phone Bot is Ready for the Real World

Google Duplex gets a Redo

These days everyone is on the AI bandwagon; from apps to gaming, AI is on every platform. So why not have AI book appointments and restaurant reservations? That is what Google did back in May when it unveiled Google Duplex, a bot that makes reservations and handles similar chores. On first hearing its voice, many thought it was weird and just didn't sound right.

At Google's developer conference in May, Duplex was heard calling a salon and a restaurant to make bookings on behalf of a human. That first demonstration raised a number of issues: the person on the other end did not know they were talking to a chat bot, and they did not know they were being recorded. Both significantly invade a person's privacy without their knowledge.

Google Duplex to get a redo:

Google has since taken these issues seriously, and a few weeks before Duplex becomes available to a small number of small businesses and users, it has been given a big redo.

Google is addressing these issues at a time when it has released a set of principles that bar Googlers from applying AI in technologies that could affect or violate human rights.

Google Duplex in a Demonstration: 


To show off the new and improved Duplex, Google used a hummus shop down the road from its headquarters in Mountain View, California. Google had Duplex call the restaurant to make a reservation and played the call through the restaurant's speakers so the assembled reporters could hear Duplex in action.

As soon as Duplex spoke, a few differences were immediately apparent: it informed the person on the other end that it was Google's automated booking service and that the call would be recorded.

Google representatives were on hand to answer any questions the reporters had, and the reporters were even given a chance to take turns answering the phone. Duplex placed its calls through Google Assistant from a laptop just a few feet away in the restaurant.

Many reporters tried to trip up the chat bot, and when it could not complete a call satisfactorily, a human came on the line so smoothly that one would barely notice the handover.

 

No regrets: 


Google had no regrets about showing Duplex at its developer conference back in May. The company said it was only demonstrating the technology, and that the feedback it received was very helpful in making changes going forward.

But another question arises: will people still feel obliged to keep an appointment when a bot made the booking on their behalf?

Wednesday 2 May 2018

Using Evolutionary AutoML to Discover Neural Network Architectures


Human Brain – Wide Range of Activities

The human brain can perform a wide range of activities, many of which require little effort, for instance recognising whether a scene contains buildings or animals. Getting artificial neural networks to do the same requires careful design by experts with years of research experience, tailored to each particular task, whether identifying what lies in an image, naming a genetic variant, or helping diagnose a disease. Ideally, an automated system would generate the right architecture for any given task. One way to generate such architectures is through evolutionary processes.

Earlier research on the neuro-evolution of topologies laid the foundation that now allows these processes to be applied at scale. Several groups have been working on the subject, including OpenAI, Uber Labs, Sentient Labs, and DeepMind. The Google Brain team has, of course, also been thinking about AutoML.

Evolution at Scale for Constructing Architectures

Besides learning-based approaches, the team also explored using computational resources to programmatically evolve image classifiers at exceptional scale. The questions addressed were: could we achieve good solutions with minimal expert participation, and how good can artificially evolved neural networks be today?

The purpose was to let evolution at scale do the work of constructing the architecture. Starting from simple networks, the procedure found classifiers comparable to the hand-made models of the time. This was encouraging, because many applications call for minimal participation: some users want a better model but do not have the time to become machine learning professionals.

The next question was whether a combination of hand design and evolution could do better than either approach alone. In the recent paper "Regularized Evolution for Image Classifier Architecture Search" (2018), the researchers participated by providing sophisticated building blocks together with good initial conditions.

Scaling of Computation – Google's New TPUv2 Chips

Computation was scaled up using Google's new TPUv2 chips. The combination of up-to-date hardware, expert knowledge, and evolution produced state-of-the-art models on the well-known image classification benchmarks CIFAR-10 and ImageNet. In the paper, the population trains its networks while evolving the architecture and, at the same time, exploring the search space of initial conditions and learning-rate schedules.

As a result, the evolved models with improved hyperparameters came out fully trained; once the experiments begin, no expert input is needed. The second paper, "Regularized Evolution for Image Classifier Architecture Search", reports the effect of applying evolutionary algorithms to this search space. Mutations modify a cell by randomly reconnecting its inputs or switching its operations.

Though the mutations are simple, the initial conditions are not. The population is seeded with models that conform to the expert-designed outer stack of cells, while the cells inside each seed model are randomised; the search therefore does not begin from a trivially simple model, which ultimately makes it easier to reach excellent models.
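For readers curious what regularized evolution looks like in code, here is a highly simplified sketch of its aging tournament selection: sample a few candidates, mutate the best one, add the child, and retire the oldest member of the population. The toy "architecture" is just a vector scored by a dummy fitness function; the real search mutates neural network cells and trains each child, which this sketch does not attempt.

```python
# Minimal sketch of regularized (aging) evolution for architecture search.
# A real implementation would mutate network cells and train each child model;
# here an "architecture" is just a list of integers and fitness is a toy function.

import collections
import random

random.seed(0)

NUM_OPS = 5          # pretend there are 5 possible operations per slot
ARCH_LEN = 8         # pretend architectures have 8 slots
POPULATION_SIZE = 50
SAMPLE_SIZE = 10
CYCLES = 2000


def random_architecture():
    return [random.randrange(NUM_OPS) for _ in range(ARCH_LEN)]


def mutate(arch):
    child = list(arch)
    child[random.randrange(ARCH_LEN)] = random.randrange(NUM_OPS)  # switch one operation
    return child


def fitness(arch):
    # Stand-in for "train the model and measure validation accuracy".
    return sum(1 for op in arch if op == 3) / ARCH_LEN


population = collections.deque()
for _ in range(POPULATION_SIZE):
    arch = random_architecture()
    population.append((arch, fitness(arch)))

best_seen = 0.0
for _ in range(CYCLES):
    sample = random.sample(list(population), SAMPLE_SIZE)
    parent = max(sample, key=lambda pair: pair[1])  # tournament winner
    child = mutate(parent[0])
    population.append((child, fitness(child)))
    population.popleft()                            # age out the oldest model
    best_seen = max(best_seen, population[-1][1])

print("best fitness found:", best_seen)
```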

The paper shows that evolution can find state-of-the-art models that match or outperform hand-designed ones.

Saturday 10 March 2018

IBM CIMON: IBM Ships Robotic Head to the International Space Station


IBM joins those sending equipment to the International Space Station

The International Space Station needs a consistent supply of essential equipment and food from Earth. Until now, Elon Musk's SpaceX has played a vital role in delivering supplies to the ISS, but IBM is now entering the fray in its own way: it is sending a very unusual eleven-pound robot head to the station, a device called IBM CIMON, the Crew Interactive Mobile Companion.


All you need to know about IBM Cimon

 

IBM CIMON is powered by the company's well-known Watson supercomputer, which IBM is using here in a different way, sending it on a unique mission to the International Space Station. IBM has previously used Watson technology to run a number of crystal experiments and to solve a Rubik's Cube. IBM is working closely with German astronaut Alexander Gerst, who will use CIMON to perform a variety of tasks, such as conducting a complex medical experiment using only the on-board camera.

CIMON is not just another piece of tech; the company wants to establish it as a reliable companion for astronauts in space, able to handle a number of tasks on its own or with little guidance. With CIMON, astronauts will be able to complete prescribed tasks with effortless coordination on the space station. CIMON will also act as a safety-improvement device, giving timely warnings of any impending failure; it is meant to be smart and intuitive enough to spot a fault even before it appears on the astronauts' control boards.

IBM Cimon learns and improves on its own


Because CIMON is backed by AI technology, it can act as the first space assistant, a real benefit for astronauts working on the International Space Station. The technology gives CIMON speech, text, and image-processing capabilities. It is also smart enough to retrieve specific information and findings when needed, offering valuable insight at the right moment. Astronauts will also be able to teach CIMON new skills individually, extending its applications.

IBM is essentially using Watson's speech and vision technologies so that CIMON can recognise and understand different voice samples accurately. Watson's Visual Recognition is used to learn and understand the construction plans of the Columbus module of the International Space Station.

Since CIMON will be deployed on the International Space Station, it needs to understand the station's layout so it can move around with ease. Astronauts have already taught it a number of procedures it is expected to carry out in connection with on-board experiments. CIMON shows that science fiction is no longer fiction as we move toward intelligent robots that assist astronauts in space.

Thursday 1 February 2018

Talking Artificial Intelligence to the Next Generation


Bringing artificial intelligence as a subject of study to the younger generation

The world is moving toward artificial intelligence, and it will become an integral part of everything from hardware and software to almost anything imaginable. Before that happens, we need to introduce the next generation to its intricate concepts and the science governing it.

Taking artificial intelligence to a younger audience is no simple task, given its complexity. However, a professor from the Queensland University of Technology has found a remarkable way to teach the intricate workings of artificial intelligence with a simple book titled "The Complete Guide to Artificial Intelligence for Kids".

Introducing Artificial intelligence to kids

Michael Milford, a professor of robotics at QUT, is keen to make the young generation better at understanding artificial intelligence than any generation before them. He is introducing AI to primary-school children through a specially crafted book and a number of accompanying tactics that make AI's complex themes and workings simple enough for a young mind to grasp. The best part of this teaching methodology is that it can also be used to teach AI to parents and grandparents, helping create a world where AI is no longer an enigma.

Michael created this teaching program by drawing on his extensive experience as a teacher and researcher in robotics. He is also well versed in autonomous vehicles and even brain-related technology.

In simple words, he is the right person to unlock the mysteries of AI for the younger generation in a way that is both interactive and educational. It is worth noting that Michael is a regular speaker on artificial intelligence at conferences, schools, workshops, and popular events such as the World Science Festival Brisbane.



Michael notes that the world is currently buzzing with talk about AI, and demand for deriving insights and information using AI capabilities is at its peak. We are bringing AI into every basic sector of society, from transport, finance, and government to public institutions. It is therefore necessary to convey knowledge of artificial intelligence in small pieces that help children build up the bigger picture of AI, even if we cannot explain it to them exactly the way we understand it ourselves.

Educating kids in their own way about artificial intelligence

The inspiration for taking AI to kids came from the constant questions his son asked about his work. To make him understand, Michael first had to explain some robotics, then some AI. When it came to AI, he had to explain a number of basic underlying concepts: how AI actually works, how it learns, how it interprets data, and what good or bad effects it can have on society.

Artificial intelligence is not as simple as it sounds, because it brings together varied fields such as machine learning, science, computing, and mathematics, but this book helps explain all of it to kids.

Thursday 16 November 2017

AI Image Recognition Fooled By Single Pixel Change


Adversarial Model Images


According to new research, computers can be misled into thinking that an image of a taxi is a dog by altering just one pixel. These limits emerged from the methods Japanese researchers used to trick widely used AI-based image recognition systems.

Several other scientists are now developing "adversarial" example images to reveal the fragility of certain kinds of recognition software. Experts have cautioned that there is no quick and easy fix for image recognition systems that would stop them from being duped in this way.

In their research, Su Jiawei and colleagues at Kyushu University made small alterations to many images, which were then analysed by widely used AI-based image recognition systems. All of the systems tested were based on a type of AI known as deep neural networks.

These systems typically learn by being trained on many different examples, which gives them a sense of how objects such as dogs and taxis differ. The researchers found that altering a single pixel caused the neural networks to mislabel what they saw in about 74% of the test images.
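The one-pixel attack can be sketched as a simple search problem: find a single pixel position and colour that flips the classifier's label. The paper used differential evolution; the toy below uses plain random search against a stand-in "model", since the point is only to show the shape of the attack, not to reproduce the study.

```python
# Toy illustration of a one-pixel attack via random search.
# `toy_model` is a stand-in classifier (NOT a real network); the actual work
# attacked deep neural networks using differential evolution.

import numpy as np

rng = np.random.default_rng(0)
H, W, C = 32, 32, 3  # small CIFAR-like image


def toy_model(image: np.ndarray) -> int:
    """Pretend classifier: the label depends on the mean intensity of the image."""
    return int(image.mean() > 0.5)


def one_pixel_attack(image: np.ndarray, true_label: int, tries: int = 5000):
    """Search for a single-pixel change that makes the model mislabel the image."""
    for _ in range(tries):
        y, x = rng.integers(H), rng.integers(W)
        colour = rng.random(C)
        candidate = image.copy()
        candidate[y, x] = colour
        if toy_model(candidate) != true_label:
            return (int(y), int(x), colour)  # successful adversarial pixel
    return None


if __name__ == "__main__":
    img = np.full((H, W, C), 0.4999)  # sits just below the toy decision boundary
    label = toy_model(img)
    result = one_pixel_attack(img, label)
    print("attack succeeded:" if result else "no pixel found:", result)
```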

Designed – Pixel Based Attacks


The Japanese researchers designed a variety of pixel-based attacks that caught out every state-of-the-art image recognition system they investigated. Mr Su of Kyushu, who led the research, commented that as far as they were aware, no dataset or network was notably more robust than the others.

Several other research groups around the world are now developing "adversarial examples" that reveal the flaws of these systems, according to Anish Athalye of the Massachusetts Institute of Technology (MIT), who has been working on the issue. One specimen made by Mr Athalye and his team is a 3D-printed turtle that one image classification system insists on labelling a rifle.

He told the BBC that more and more real-world systems are beginning to incorporate neural networks, and that it is a big concern that these systems could be destabilised or attacked using adversarial examples. He said that although there have been no instances of malicious attacks in real life, the fact that these apparently smart systems can be deceived so easily is a matter of concern.

Methods of Resisting Adversarial Exploitation


Web giants including Facebook, Amazon, and Google are known to be investigating methods of resisting adversarial exploitation. He said this is not some strange "corner case": their work showed that a single object can consistently fool a network across viewpoints, even in the physical world.

He added that the machine learning community does not yet fully understand what is going on with adversarial examples or why they exist. Learning systems based on neural networks involve creating links between large numbers of nodes, rather like nerve cells in a brain.

Analysis involves the network making many decisions about what it sees, and each decision should lead the network closer to the correct answer.

Friday 20 October 2017

Take Two for Samsung's Troubled Bixby Assistant


The next phase of Samsung Bixby in a competitive AI market

Artificial intelligence is seen as the new frontier that every major tech firm must explore to boost its products and services. Apple is already in the game, as are Microsoft, Google, and Amazon, and they had almost aced it when Samsung came to the party and failed miserably. The Bixby assistant made its debut on Samsung's elite Galaxy S line-up with a dedicated button. The major issue it has to overcome is the presence of Google Assistant on Android, which removes the need for another budding AI on the same smartphone.

 

Samsung's failure in the first phase

 
The Bixby assistant had a rough and troubled beginning with the launch of the Samsung Galaxy S8, where it was touted as the next best feature on the flagship device. Sadly, the AI was not ready for the challenges of the English-speaking world, so the Korean giant had to disable it altogether. Bixby thus departed from the premium Galaxy device, and a physical button was left in its wake to remind users of the debacle.
 

Bixby came back with new energy and promise

 
Last year's recall of the Note 7 due to battery failures was nothing less than a nightmare for Samsung, and this year's Bixby failure was another nail in the coffin. However, Samsung's engineers took on the challenge of fixing the assistant's shortcomings in record time and unveiled Bixby 2.0 at the Samsung Developer Conference in San Francisco. It is worth noting that Samsung has rebuilt Bixby from the ground up this time, and it is hopeful of making genuine progress with its AI technology.
 

Samsung aims to conquer the AI market

 
Samsung has brought on Dag Kittlaus, who created the Viv assistant that Samsung acquired just a year ago and who, before that, was a core member of the team that developed Siri. Samsung is a dominant player in the smartphone market with over 23 percent market share, which can help it push and popularise its personal assistant without much hassle.

Beyond smartphones, Samsung plans to push the Bixby assistant to refrigerators and a number of other home appliances, which can benefit immensely from voice-based controls rather than on-screen interfaces. Samsung is going all out to make Bixby a stronger competitor in the crowded personal-assistant market by bringing it to more devices and by opening it up to third-party developers to boost its functionality and features.

As stated earlier, Bixby will compete mainly against Google Assistant, which comes bundled with Android. It is very unlikely that users will keep two different assistants on the same device, but Samsung is keeping its fingers crossed.

Friday 11 August 2017

The Computer That Knows What Humans Will Do Next


Computer Code – Comprehend Body Poses/Movement

New computer code gives robots the potential for a better understanding of the humans around them, paving the way for more perceptive machines, from self-driving cars to investigative applications. The new capability lets a computer understand the body poses and movements of multiple people, even tracking parts as small as individual fingers.

Humans communicate naturally using body language, but computers are more or less blind to these interactions. By tracking the 2D human form and motion, the new code should greatly improve robots' abilities in social situations.

The code was designed by researchers at Carnegie Mellon University's Robotics Institute using the Panoptic Studio, a two-story dome fitted with 500 video cameras that capture hundreds of views of an individual action in a single shot. Recordings show how the system views human movement using a 2D model of the human form.

Panoptic Studio – Extraordinary View of Hand Movement

This lets it track motion from video recordings in real time, capturing everything from hand gestures to the movement of the mouth. It can also track several people at once.

Associate professor of robotics Yaser Sheikh noted that we communicate almost as much with the movement of our bodies as we do with our voices, yet computers are more or less blind to it. Multi-person tracking poses various challenges for computers, and hand detection is an even bigger obstacle.

To overcome this, the researchers used a bottom-up approach, first localising individual body parts in a scene and then associating those parts with particular individuals. Although image datasets of human hands are far more limited than those of faces or bodies, the Panoptic Studio provided an extraordinary view of hand movement.

Hanbyul Joo, a PhD student in robotics, explained that a single shot provides 500 views of a person's hand and automatically annotates the hand's position.

2D to 3D Models

He added that hands are usually too small to be interpreted by most of the cameras; for this research they used only 32 high-definition cameras, yet were still able to build a huge dataset. The method could ultimately be used in many applications, for instance helping self-driving cars predict pedestrian movements.

It could also be used in behavioural diagnosis or sports analytics. The researchers presented their work at CVPR 2017, the Computer Vision and Pattern Recognition conference, held July 21-26 in Honolulu. They have already released their code to several other groups so that they can build on its capabilities.

Finally, the team hopes to move from 2D to 3D models by using the Panoptic Studio to refine the body, face, and hand detectors. Sheikh noted that the Panoptic Studio has boosted their research and that they are now able to break through various technical barriers, largely thanks to an NSF grant made ten years ago.

Friday 30 June 2017

Can Artificial Intelligence Help Us Make More Human Decisions?


About 88 million pages of original handwritten documents from the past three and a half centuries line the tiled halls of a plain 16th-century trading house in the middle of Seville, Spain. They are stored there, only partially transliterated, and some are almost indecipherable. A few were carried back on armadas from the Americas, and some have been scanned and digitised.

These documents contain answers and context for countless questions about the conquistadors, European history, New World contact and colonialism, politics, law, economics, and ancestry. Unfortunately, only a fraction of these carefully kept pages has been read or interpreted since they were written and brought to Seville centuries ago, and most of them likely never will be.

All hope is not lost: a researcher at the Stevens Institute of Technology is trying to get computers to read these documents before time runs out, while they are still legible. Fernando Perez-Cruz, a Stevens computer science professor, asks, "What if there was a machine, or a software, that could transcribe all of the documents?"

Perez-Cruz, whose expertise lies in machine learning, continues: "What if there was a way to teach another machine to group those 88 million pages and convert them into searchable text organised by topic? Then we can start understanding the themes in those documents and will know where to look in this storehouse of documents for our answers." Perez-Cruz is working on both halves of this two-fold approach which, if it works, could also be applied to many other modern data analysis problems such as autonomous transport and the analysis of medical data.

Pricing on Amazon, medical study, text reading machines


Perez-Cruz, a veteran of Amazon, Bell Labs, Princeton University, and University Carlos III of Madrid, has had a career full of interesting scientific challenges. He joined Stevens in 2016, adding to the growing strength of the university's computer science department, which aims to become a strong research department and is drawing more talent and resources as a result, something Perez-Cruz is using to his advantage. At Stevens he is currently working on what he calls "interpretable machine learning": systematised intelligence that humans can still reason about.

For the historical-document problem, Perez-Cruz hopes to develop improved character-recognition engines. Using short excerpts of documents written in varied hands, previously transliterated by experts, he aims to teach software to recognise both the shapes of characters and the frequently correlated associations between letters and words, building a recognition engine that becomes more accurate over time. The open question, he says, is how much data, how much transcribed handwriting, is enough to do this well. Work on this part is still in progress.

Perez-Cruz believes that, although it is a technical challenge, it should be achievable. He is even more fascinated by the next part: organising large quantities of transcribed material into topics that can be surveyed at a glance. The machine, he says, should be able to surface information from these three and a half centuries of data once it is transcribed, learning on its own from the placement of words and sentences. This is what he calls topic modelling.
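Topic modelling of the kind Perez-Cruz describes can be sketched with off-the-shelf tools. The example below fits Latent Dirichlet Allocation to a handful of toy documents with scikit-learn; the documents, the number of topics, and the choice of LDA are illustrative assumptions, not details of the Seville project.

```python
# Minimal topic-modelling sketch with scikit-learn's LDA.
# The toy "documents" stand in for transcribed archive pages.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "shipment of silver arrived from the americas on the fleet",
    "the fleet carried silver and gold across the ocean",
    "a dispute over land and inheritance was brought before the court",
    "the court ruled on the inheritance of the estate and its land",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

words = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top = [words[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {topic_idx}: {', '.join(top)}")
# With luck, one topic leans toward fleet/silver words and the other toward court/inheritance words.
```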

A key link: Systematically grouping large data into easily accessible topics


Once enough data has been fed to the algorithm, it begins to spot the most important identifying and organising patterns in the data. Often, cues from human researchers are vital and are actively sought. Perez-Cruz notes that we might eventually discover that a few hundred topics or descriptions run through the whole archive, and suddenly an 88-million-document problem has been scaled down to 200 or 300 ideas.

If algorithms can consolidate 88 million pages of text into a few hundred groups, historians and researchers will gain enormously in organisation and efficiency when deciding which documents, themes, or time periods to search, review, and analyse in a formerly unmanageable archive. The same approach could be used to find styles, themes, and hidden meaning in other vast unread databases.

He concludes that one begins with a huge quantity of unorganised data, and to understand what that data contains and how it can be used, some structure has to be brought to it. Once the data is understood, one can read it in a particular way, better understand which questions to ask of it, and draw better conclusions.

Wednesday 26 April 2017

Biased Bots: Human Prejudices Sneak into Artificial Intelligence Systems


Biased bots are here, with human prejudices seeping into their AI

Most AI experts believed that artificial intelligence would give robots and systems objectively rational and logical thinking. But a new study points down a darker path, in which machines mirror humans and AI absorbs human prejudices.

The study found that when common machine learning programs are trained on ordinary human language from the Internet, they are likely to acquire the cultural biases embedded in its patterns of wording. These biases range widely, from innocuous preferences for certain things to objectionable views about race or gender.

Security experts say it is extremely important to address the rise of bias in machine learning as early as possible, since it can seriously affect systems' reasoning and decision-making. In the coming years we will turn to computers for many things, from natural-language translation to online text search and image categorisation.
Fair and just

Arvind Narayanan, an assistant professor of computer science affiliated with the Center for Information Technology Policy (CITP) at Princeton, has said that artificial intelligence should remain impartial to human prejudices in order to deliver better results and judgment. He asserted that fairness and bias in machine learning have to be taken seriously, because our modern world will depend on it in the near future.

We might soon find ourselves in a situation where modern artificial intelligence systems are at the forefront of perpetuating historical patterns of bias without our even realising it. If that happens, it will be socially unacceptable, and we will be stuck in the old days rather than moving forward.

An objectionable example of bias seeping into AI

Back in 2004, a study was conducted by Marianne Bertrand of the University of Chicago and Sendhil Mullainathan of Harvard University. The economists ran a test in which they sent out about 5,000 nearly identical resumes in response to over 1,300 job advertisements.

The only thing they varied was the applicants' names, which were either traditionally European American or African American, and the results were astonishing: European American candidates were 50 percent more likely to be called for an interview than African American candidates. Another Princeton study showed that a set of African American names draws more unpleasant associations than a set of European American names when run through automated, AI-based systems.
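The Princeton result rests on measuring associations in word embeddings with cosine similarity, in the spirit of the Word-Embedding Association Test. The sketch below shows that measurement with tiny made-up vectors; real studies use pretrained embeddings such as GloVe and carefully chosen word lists.

```python
# Sketch of measuring association bias in word embeddings via cosine similarity.
# The 3-dimensional vectors below are made up purely for illustration;
# real analyses (e.g. the WEAT) use pretrained embeddings and validated word lists.

import numpy as np


def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def association(word_vec, pleasant, unpleasant):
    """Mean similarity to 'pleasant' words minus mean similarity to 'unpleasant' words."""
    return (np.mean([cosine(word_vec, p) for p in pleasant])
            - np.mean([cosine(word_vec, u) for u in unpleasant]))


# Toy embedding space (illustrative values only).
embeddings = {
    "flower":   np.array([0.9, 0.1, 0.0]),
    "insect":   np.array([0.1, 0.9, 0.0]),
    "pleasant": np.array([0.8, 0.2, 0.1]),
    "horrible": np.array([0.2, 0.8, 0.1]),
}

pleasant_set = [embeddings["pleasant"]]
unpleasant_set = [embeddings["horrible"]]

for word in ("flower", "insect"):
    score = association(embeddings[word], pleasant_set, unpleasant_set)
    print(f"{word}: association score {score:+.3f}")
# A positive score means the word sits closer to the pleasant words in this toy space.
```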

It has therefore become necessary to distance AI from these biases and stop cultural stereotypes from being perpetuated further in the mathematics underlying machine learning programs. It should be the coders' task to ensure that future machines reflect the better angels of human nature.

Monday 10 April 2017

‘Machine Folk’ Music Shows the Creative Side of Artificial Intelligence

Folk music is seen as a direct link to our past, helping cement cultural bonds. Artificial intelligence has no cultural heritage, bonds, or traditions, but can we help it build the attributes that, at some level, define human intelligence? Over the years AI has grown by leaps and bounds: it has defeated the brightest human minds at chess and demonstrated breathtaking wordplay skills, but can it create music and show us a creative side?

Artificial Intelligence on the rise

Researchers have been trying to unlock the creative side of artificial intelligence for quite some time. In 2016, an AI produced a piece of musical theatre that premiered in London. The effort aims to broaden the boundaries of creative research through newly evolved AI techniques and larger collections of data. The AI-written musical was the result of a thorough analysis of hundreds of other successful musicals.

Prominent AI projects aiming to draw art and music out of AI include Sony's Flow Machines, Google's Magenta project, and work at the British startup Jukedeck.

In the current study, the researchers used state-of-the-art AI techniques to teach an AI about musical practice in the Celtic folk tradition. Traditional folk tunes were reduced to ABC notation, a method in which music is written with text characters as a rough guide for musicians. The researchers trained the AI on as many as 23,000 ABC transcriptions of folk music, a feat of transcription made possible by the contributions of people on thesession.org. At a recent workshop, the researchers asked folk musicians to perform some of the tunes composed by the machine.

Bringing together artificial compositions and human melodies

The AI is trained to look at a single ABC symbol and predict what comes next, which lets it generate new tunes that sound like original compositions because they reuse existing patterns and structures. So far the researchers have generated as many as 100,000 new folk tunes with the AI, a remarkable feat in itself.
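As a crude stand-in for the recurrent networks used in the study, the sketch below builds a character-level Markov model from a couple of ABC snippets and samples a new "tune". The ABC fragments are made up, and a bigram Markov chain is far weaker than the neural models the researchers trained, but it shows the "predict the next symbol" idea.

```python
# Character-level Markov sketch of "look at a symbol, predict the next one".
# The training snippets are made-up ABC fragments; the actual study trained
# recurrent neural networks on ~23,000 real transcriptions.

import random
from collections import defaultdict

random.seed(1)

training_abc = [
    "|:GABc dedB|dedB dedB|c2ec B2dB|c2A2 A2BA:|",
    "|:A2FA ABde|f2df e2ce|d2cd BddB|ADFA D4:|",
]

# Count which character follows which.
transitions = defaultdict(list)
for tune in training_abc:
    for current, nxt in zip(tune, tune[1:]):
        transitions[current].append(nxt)


def generate(start: str = "|", length: int = 40) -> str:
    out = [start]
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))  # sample the next symbol
    return "".join(out)


print(generate())  # a new, loosely ABC-shaped string
```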

The folk tunes composed by the AI have two repeated parts of the same eight-bar length, and the parts complement each other musically. The AI has also shown a strong ability to repeat and vary musical patterns, which has always been a key characteristic of Celtic music. By carefully analysing the patterns in the data it was fed, the AI learned the conventions of Celtic folk composition on its own within a very short time.

Friday 24 March 2017

FaceBook Building a Safer Community With New Suicide Prevention Tools


Facebook Utilising Artificial Intelligence to Avert Suicides

Facebook intends to use artificial intelligence and updated tools and services to help prevent suicides. The world's largest social network said it will integrate its existing suicide prevention tools for Facebook posts into its live streaming feature, Facebook Live, and its Messenger service.

The company said in a recent blog post that artificial intelligence would be used to help spot users showing signs of suicidal intent. According to the New York Post, a 14-year-old foster child in Florida broadcast her suicide on Facebook Live in January. Facebook already uses artificial intelligence to monitor live video streams for offensive material.

The company recently said the new tools will give users watching a live video the option to reach out to the broadcaster directly and to report the video to Facebook. Facebook Inc. will also provide resources, including suggestions to contact a friend and a helpline, to the person broadcasting the live video. Among people aged 15 to 29, suicide is the second leading cause of death.

Suicide Rates Escalated by 24%

According to a National Center for Health Statistics study, suicide rates in the United States rose by 24 percent between 1999 and 2014, after a period of nearly steady decline. For a young person, suicide can become the final step when help and proper counselling are not available.

Facebook has offered advice to individuals at risk of suicide for years, but until now it relied on other users to raise the alarm by clicking a report button on a post. It has now developed pattern-recognition algorithms that recognise when someone may be in distress, trained on examples of posts previously identified as such. Talk of sadness and pain, for instance, can be one signal, and replies from friends such as "Are you OK?" or "I am worried about you" can be another.

System Being Rolled All Over the World

When a post is flagged, it can be sent to Facebook's community operations team for rapid review. When someone watching a live stream clicks a menu option saying they are concerned about the broadcaster, Facebook gives the viewer advice on how they can help.

The stream is also flagged for immediate review by Facebook's team, who can overlay a message with suitable recommendations whenever possible. This new system is being rolled out worldwide. For now, a separate option for contacting crisis counsellor helplines through Facebook's Messenger tool is limited to the US; Facebook says it needs to check whether the partner organisations can cope with the demand before it expands the facility.

Saturday 28 January 2017

Google Is Making AI That Can Make More AI

It is extremely tough to get a good artificial intelligence running across devices, or even on a single device. The top-tier tech companies of Silicon Valley, namely Google, Microsoft, and Apple, have spent millions of dollars and years of research developing proprietary AI for their devices, and within a short time their AIs have become integral to the overall user experience. That has taken continuous monitoring, tweaking, and further development to get the AI working at its best. Now Google's AI research lab says it is building AI software that can itself produce more AI. In short, AI is set to make more AI, and in the future that will be a much cheaper and easier affair than it is today.

A smart enough AI to develop more AI

Google has said that making AI capable of developing AI is an extremely delicate and complex process that currently requires a great deal of human intervention. Google has hired a number of experts to develop or discover tools with the potential to build more AI. It is also trying to cut the cost of AI development by building an AI smart enough to do the job itself. In the future, educational institutions and corporations may be able to hire an AI builder to develop their own AI for specific purposes.

Is science fiction turning into reality? 

We already have a rich library of science fiction books and films showing how AI eventually takes over the world and decides to eliminate humanity, a scenario commonly known as the Skynet catastrophe, after the evil AI of the Terminator series. When machines are allowed to create offspring smarter than the previous iteration, that is certainly a reason to worry. In a similar fashion, an AI could work on its own to develop AI, learning about development without any human assistance. That would create a situation in which humans could no longer understand the fine details of how the AI works just by looking at its performance, which would be a problem for the AI trainers. Eventually, such an AI could become a powerful entity that no longer needs humans to explore new territory.

This might seem too dark to digest, given that Google would not let its AI run rogue at any time. Google has built contingency plans to avoid such scenarios by ensuring that the AI never gets the chance to disable its own kill switches. Furthermore, Google has clarified that the AI tasked with developing more AI is not capable of competing with human engineers, which should ensure that a dark, Skynet-style future is not in the making.

Ref:

Fast Reinforcement Learning via Slow Reinforcement Learning

Learning to Optimize

Thursday 15 December 2016

Quick Draw: Interactive Drawing Game Guess What You're Doodling in 20 Seconds


Google's Pictionary-Style Experiment – 'Quick Draw'


A game developed by Google uses artificial intelligence to guess what you are drawing from your sketches. The Pictionary-style experiment, known as 'Quick Draw', prompts users to draw a well-known object or phrase in about 20 seconds, using a mouse cursor on a desktop or a finger on a mobile device.

Google's new AI tools are impressive, and according to the accompanying tutorial site the game is built with machine learning: you draw, and a neural network tries to guess what you are drawing, though it does not always work. The more you play with it, however, the better it gets. It is an example of how machine learning can be used in an amusing way: a computer game that adopts AI and machine-learning techniques to assist the user and show how good the system can be.

The software guesses what the player intends to draw using machine learning, and anyone can try it. Built with neural networks, the software improves as it goes, working much like handwriting recognition software. Players are invited to draw a series of six sketches and are given 20 seconds for each.

Impressive Image Recognition Software


The software starts with the words or phrases it presumes the user might be illustrating and narrows them down until it finds the right one. The suggestions are displayed towards the bottom of the screen and are also called out aloud. Jonas Jongejan, a creative technologist at the Google Creative Lab, says in a video accompanying the game that it does not always work, because the network has only seen a few thousand sketches.

The more people play with it, the more it learns and the better it gets at guessing. The impressive image-recognition software can even identify rough, low-quality sketches, offering a clue to what AI may be capable of.

The game managed to guess six out of six of MailOnline's sketches, which were of a blueberry, scissors, a church, a squirrel, a swan, and the Eiffel Tower. Jonas Jongejan, Henry Rowley, Takashi Kawashima, and Jongmin Kim, together with friends at the Google Creative Lab and Data Arts Team, built the game, and the challenge is part of Google's newly released AI Experiments site.

Thing Translator


The site features several machine learning experiments, including one that lets users take a picture of something to find out what it is called in another language. The idea is that anybody can try the experiments, though the site also encourages coders to contribute their own. Google's AI Experiments and Quick Draw were designed to show users the fun side of artificial intelligence and machine learning. Other experiments include the Thing Translator, which tells you what the object in a photo is called in another language, and AI Duet, which lets you play melodies together with the computer.

Friday 21 October 2016

Getting Robots to Teach Each Other New Skills


Google – Robots Utilising Shared Experiences

Robots have not yet attained human intelligence, but Google's researchers have shown how they are making progress with downloadable intelligence. Imagine if you could get better at a skill not just by learning and practising, but by tapping directly into someone else's brain and their experiences.

For humans that is science fiction, but for AI-powered robots it offers a way to shortcut training times by having robots share their experiences. Google recently demonstrated this with its grasping robotic arms.

James Kuffner, Google's former head of robotics, coined a term for this kind of skill acquisition six years ago: "cloud robotics". It recognises the impact of distributed sensing and processing supported by data centres and faster networks.

Kuffner is now CTO of the Toyota Research Institute, where he focuses on cloud robotics as a path to practical domestic helper robots. Meanwhile Google Research, the UK artificial intelligence lab DeepMind, and Google X continue to explore cloud robotics as a way of speeding up general-purpose skill acquisition in robots. In several recently published demonstration videos, Google shows robots using shared experiences to learn quickly how to move objects and open doors.

Robots – Own Copy of Neural Network

One of the three multi-robot approaches the researchers have been using is reinforcement learning, trial and error combined with deep neural networks, the same approach DeepMind uses to train its AI to play Atari video games and the Chinese board game Go.

Each robot has its own copy of the neural network, which helps it decide the best action for opening the door, and Google builds up data quickly by injecting noise into the robots' actions. A central server records the robots' actions, behaviours, and final outcomes, and uses those experiences to build an improved neural network that helps the robots perform the task better.
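The "shared experience" idea, each robot contributing its trials to a central server that produces one improved policy, can be sketched as a shared experience buffer. The code below is a schematic outline with a dummy task and a placeholder update rule, not Google's actual system.

```python
# Schematic sketch of cloud-robotics style experience sharing:
# several simulated "robots" push (robot_id, action, reward) tuples to one shared
# buffer, and a central update step consumes the pooled data.
# The task, policy, and update rule are placeholders for illustration.

import random

random.seed(0)

shared_buffer = []   # experiences pooled from all robots
policy_value = 0.0   # a single-number stand-in for the shared neural network


def robot_trial(robot_id: int, policy_value: float):
    """One noisy door-opening attempt; returns an experience tuple."""
    action = policy_value + random.gauss(0.0, 0.5)  # exploration noise
    reward = -abs(action - 1.0)                      # pretend 1.0 opens the door
    return (robot_id, action, reward)


def central_update(buffer, policy_value: float, lr: float = 0.05) -> float:
    """Move the shared 'policy' toward the best-rewarded recent actions."""
    best = sorted(buffer, key=lambda e: e[2], reverse=True)[: max(1, len(buffer) // 10)]
    target = sum(e[1] for e in best) / len(best)
    return policy_value + lr * (target - policy_value)


for round_num in range(200):
    for robot_id in range(4):  # four robots collect experience in parallel
        shared_buffer.append(robot_trial(robot_id, policy_value))
    policy_value = central_update(shared_buffer[-200:], policy_value)

print(f"learned action value ~ {policy_value:.2f} (target 1.0)")
```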

As shown in two videos from Google, after 20 minutes of training the robotic arms fumbled around for the handle but eventually managed to open the door. After three hours, the robots could reach for the handle with ease, twist it, and pull the door open.

Google Training Robots to Construct Mental Models 

Another system they have been exploring could help robots follow commands to move objects around the home. Here Google has been training its robots to build mental models of how things move in response to specific actions, accumulating experience of where pixels end up on screen after a given action.

The robots share their experiences of nudging different objects around a table, which helps them predict what will happen if they take a particular course of action. The researchers are also exploring ways for robots to learn from humans: Google's researchers guided robots to doors and showed them how to open them, and these actions were encoded in a deep neural network that transforms camera images into robot actions.

Tuesday 29 March 2016

Microsoft's AI Bot


Microsoft’s Artificial Intelligence Chat Bot


Microsoft has developed a new artificial intelligence chat bot that, it claims, gets smarter the more you talk to it. The bot, called Tay, was built by Microsoft Technology and Research together with the Bing team to conduct research on conversational understanding. The Bing team had developed a related conversational bot, XiaoIce, for the Chinese market in 2014.

Microsoft executives dubbed XiaoIce "Cortana's little sister", after the Redmond, Washington company's voice-activated Cortana personal assistant software. The real-world focus of the bot is to let researchers experiment with and learn how people talk to each other. Microsoft says the bot, available through Twitter and the messaging platforms Kik and GroupMe, plays the role of a millennial, has emojis in its vocabulary, and is clearly aimed at 18- to 24-year-olds.

The bot has little practical function for users, though it offers three different methods of communication: its website, tay.ai, boasts that the AI can talk through text, play games such as guessing the meaning of a string of emojis, and comment on photos sent to it.

Tay Designed to Engage & Entertain People


At the time of writing, the bot had accumulated around 3,500 followers on Twitter but had sent over 14,000 messages, responding to questions, statements and general abuse within a matter of seconds. The about section of Tay's website states that `Tay is designed to engage and entertain people where they connect with each other online via casual and playful conversation'.

Tay works from public data and from editorial input created by staff and comedians. Microsoft says that `public data which has been anonymised is the primary data source of Tay and that data has been modelled, cleaned and filtered by the team creating Tay'. Beyond the meme-friendly appeal of the bot, there is a serious side to the research behind the AI: making machines capable of communicating in a natural, human way is a central challenge for machine learning.

The Service's Effort to Comprehend How Humans Speak


Google too recently updated its Inbox mail service to recommend answers to emails; the smart reply feature offers three likely responses suggested by Google's AI. As with Tay, Google says the more you use smart replies, the better they get. If a user chooses to share the information, Tay will track the user's nickname, gender, zip code, favourite food and relationship status.

Users can delete their profiles by submitting a request through the Tay.ai contact form. In the field of virtual assistants and chat bots, Facebook's M is also experimenting with using artificial intelligence to complete tasks. Though it is partly controlled by humans, the system is currently being trained to book restaurants and answer certain questions. At the core of the service is an effort to understand how humans speak and the best way to respond to them.

Monday 11 January 2016

A Learning Advance in Artificial Intelligence Rivals Human Abilities


Artificial Intelligence Surpasses Human Abilities

Computer researchers recently reported that artificial intelligence had surpassed human abilities on a narrow set of vision-related tasks. These developments are notable because so-called machine vision methods are becoming commonplace in many aspects of life, including car-safety systems that detect pedestrians and bicyclists, video game controls, Internet search and factory robots.

Researchers from the Massachusetts Institute of Technology, New York University and the University of Toronto recently reported a new kind of `one-shot' machine learning in the journal Science, in which a computer vision program beat a group of humans at identifying handwritten characters based on a single example. The program can quickly learn characters in a variety of languages and generalise from what it has learned.

The authors suggest that this ability parallels the way humans learn and understand concepts. The new approach, known as Bayesian Program Learning, or B.P.L., is unlike today's dominant machine learning technologies, deep neural networks. Neural networks can be trained to recognise human speech, identify objects in images or detect types of behaviour only after being exposed to large sets of examples.

Bayesian Approach

Though these networks are modelled on the behaviour of biological neurons, they do not yet learn the way humans do, quickly acquiring new concepts. In contrast, the new software program described in the Science article can recognise handwritten characters after `seeing' only one or a few examples.

The researchers compared the capabilities of their Bayesian approach and other programming models on five separate learning tasks involving characters from a research data set known as Omniglot, which comprises 1,623 handwritten characters from 50 languages.

Both the images and the pen strokes needed to create the characters were captured. Joshua B. Tenenbaum, professor of cognitive science and computation at M.I.T. and one of the authors of the Science paper, commented that `with all the progress in machine learning, it is amazing what one can do with lots of data and faster computers. But when one looks at children, it is amazing what they can learn from very little data; some comes from prior knowledge and some is built into the brain'.
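To illustrate what "one-shot" classification means in this setting, here is a deliberately simplified sketch: each character class is defined by a single example image, and a query is assigned to the class of the example it most resembles. This nearest-neighbour stand-in only shows the task setup; B.P.L. itself goes much further by modelling the pen strokes that generate each character.

```python
import numpy as np

def one_shot_classify(query, support):
    """support maps each class name to a single example image (2-D array)."""
    best_label, best_dist = None, float("inf")
    for label, example in support.items():
        dist = np.linalg.norm(query - example)   # plain pixel-space distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

rng = np.random.default_rng(0)
# One example per character class, standing in for handwritten glyph images.
support = {name: rng.random((28, 28)) for name in ["alpha", "beth", "gimel"]}
query = support["beth"] + rng.normal(scale=0.05, size=(28, 28))  # a noisy new drawing
print(one_shot_classify(query, support))  # expected: "beth"
```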

ImageNet Large Scale Visual Recognition Challenge

Moreover, the organisers of an annual academic machine vision competition also reported gains in lowering the error rate of software for locating and classifying objects in digital images. Alexander Berg, an assistant professor of computer science at the University of North Carolina, Chapel Hill, said he was amazed by the rate of progress in the field.

The competition, known as the ImageNet Large Scale Visual Recognition Challenge, pits teams of researchers at government, academic and corporate laboratories against one another to design programs that classify and detect objects. It was won this year by a group of researchers at the Microsoft Research laboratory in Beijing.

Tuesday 22 December 2015

Facebook’s Artificial-Intelligence Software Gets a Dash More Common Sense


Artificial Intelligence Researchers – Learning Basic Physical Common Sense


Artificial intelligence researchers have undertaken a project to discover how computers could learn some basic physical common sense. Understanding, for instance, that unsupported objects fall or that a large object does not fit inside a smaller one is central to how humans predict, communicate and explain the world.

Facebook's chief technology officer, Mike Schroepfer, says that if machines are to be more useful, they will need the same kind of common-sense understanding. At a recent preview of results he would share at the Web Summit in Dublin, Ireland, he said that we have to teach computer systems to comprehend the world the same way human beings do, who learn the basic physics of reality at a young age by observing the world.

Facebook drew on its image-processing software to create a technique that learned to predict whether a stack of virtual blocks would tumble. The software learns from images of virtual stacks, at times from two stereo images like those that would reach a pair of eyes.

Crafting Software to Comprehend Images and Language Using Deep Learning


In the learning phase, the software was shown many different stacks, some of which toppled while others did not, and the simulation then showed it the result. After enough examples, it could predict for itself with 90% accuracy whether a given stack would tumble. Schroepfer comments that if you run it through a series of tests, it would beat most people.
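A rough sketch of that supervised setup, under invented assumptions: simulate many stacks, label each one as toppled or stable, and fit a classifier that predicts the outcome from features of the stack. The toy "physics" rule and the logistic-regression model below are illustrative stand-ins; Facebook's system learned directly from rendered images with deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated labelled stacks: a stack topples (label 1) when the blocks'
# horizontal offsets add up to too much overhang (invented stability rule).
n = 2000
offsets = rng.uniform(0.0, 0.6, size=(n, 3))      # offset of each block above the one below
toppled = (offsets.sum(axis=1) > 0.9).astype(float)

# Logistic regression trained by gradient descent, standing in for the deep net.
X = np.hstack([offsets, np.ones((n, 1))])          # features plus a bias term
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))               # predicted probability of toppling
    w -= 0.5 * X.T @ (p - toppled) / n             # cross-entropy gradient step

accuracy = ((p > 0.5).astype(float) == toppled).mean()
print(f"training accuracy: {accuracy:.2f}")
```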

The research was done by Facebook's artificial intelligence research group in New York, which concentrates on crafting software that can understand images and language using the technique of deep learning. Recently the group also showed off a mobile app capable of answering questions about the content of photos.

The group's director, Yann LeCun, who is also a professor at NYU, told MIT Technology Review that the system for predicting when blocks would topple indicates that more complex physical simulations could be used to teach further basic principles of physical common sense. He added that it serves to create a baseline for whether systems trained without supervision would have enough power to figure things like that out.

Memory Network


His group earlier created a system known as a `memory network', which could pick up some basic common sense and verbal reasoning abilities by reading simple stories; it has now progressed to helping power a virtual assistant that Facebook is testing, known as M. M is more capable than Apple's Siri or similar apps because it is backed by a bank of human operators.

However, Facebook expects the operators to become steadily less important as the software learns to field queries for itself. Schroepfer says that adding the memory network to M is showing how that could happen: by observing interactions between people using M and the customer-service responses, it has already learned how to handle some common queries.
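As a loose illustration of how observed exchanges could let the software field repeat questions on its own, the sketch below stores past human-answered queries and reuses the stored answer whose question best overlaps the new one. The word-overlap matching and the example data are placeholders of my own; a memory network learns this matching end to end rather than by counting shared words.

```python
# Hypothetical memory of previously observed (question, human answer) pairs.
memory = [
    ("what are your opening hours", "We're open 9am to 6pm, Monday to Saturday."),
    ("how do i reset my password", "Use the 'Forgot password' link on the sign-in page."),
    ("can i change my delivery address", "Yes, update it under Account > Addresses before dispatch."),
]

def answer(query):
    """Reuse the stored answer whose question shares the most words with the query."""
    words = set(query.lower().replace("?", "").split())
    best_question, best_answer = max(memory, key=lambda qa: len(words & set(qa[0].split())))
    return best_answer

print(answer("When are you open?"))  # falls back on the closest remembered exchange
```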

Facebook has not committed to turning M into a widely available product, but Schroepfer says the results indicate how that could become possible. He adds that the system figured this out by observing humans, and that while Facebook cannot afford to hire operators for the entire world, with the right AI system it could offer the service to the whole planet.

Wednesday 23 September 2015

The Search for a Thinking Machine


New Age of Machine Learning


Some experts believe that by 2050 machines will have reached human-level intelligence. In a new age of machine learning, computers have started assimilating information from raw data in much the same way as a human infant learns from the world around it.

This means we are heading towards machines that, for instance, teach themselves how to play computer games and get reasonably good at them, and devices that begin to communicate like humans, such as voice assistants on smartphones. Computers have started comprehending the world beyond bits and bytes. Fei-Fei Li has spent the last 15 years teaching computers how to see.

First as a PhD student and later as director of the computer vision lab at Stanford University, she has pursued the difficult goal of creating electronic eyes for robots and machines so they can see and understand their environment. Half of all human brainpower goes into visual processing, though it is something we all do without much effort.

In a talk at the 2015 Technology, Entertainment and Design (TED) conference, Ms Li made a comparison: `a child learns to see, especially in the early years, without being taught, but learns through real-world experiences and examples'.

Crowdsourcing Platforms – Amazon’s Mechanical Turk


She adds: `if you consider a child's eyes as a pair of biological cameras, they take one image about every 200 milliseconds, the average time an eye movement is made. By the age of three, a child will have seen hundreds of millions of images of the real world, and that is a lot of training examples.'

Hence she decided to teach computers in the same way. Instead of focusing solely on improved algorithms, her insight was to give the algorithms the kind of training data a child gets through experience, in quantity as well as quality. In 2007, Ms Li and her colleagues set about the enormous task of sorting and labelling a billion diverse, random images from the internet to provide real-world examples for the computer. The theory was that if the machine saw enough images of something, it would be able to recognise it in real life. Crowdsourcing platforms such as Amazon's Mechanical Turk were used, calling on 50,000 workers from 167 countries to help label millions of random images of cats, planes and people.
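One common way such crowdsourced labels are consolidated, sketched below with assumed data: show each image to several workers and keep the majority label, sending disagreements back for more labelling. The three-worker vote and file names are hypothetical; the article does not detail ImageNet's exact quality-control pipeline.

```python
from collections import Counter

# Hypothetical raw labels: each image was shown to three crowd workers.
worker_labels = {
    "img_001.jpg": ["cat", "cat", "dog"],
    "img_002.jpg": ["plane", "plane", "plane"],
    "img_003.jpg": ["person", "cat", "dog"],
}

# Keep the majority label per image; flag images where workers disagree.
consolidated = {}
for image, labels in worker_labels.items():
    label, votes = Counter(labels).most_common(1)[0]
    consolidated[image] = label if votes >= 2 else None  # None: send back for more labelling

print(consolidated)  # {'img_001.jpg': 'cat', 'img_002.jpg': 'plane', 'img_003.jpg': None}
```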

ImageNet – Database of 15 Million Images


Ultimately they built ImageNet, a database of around 15 million images across 22,000 classes of objects organised by everyday English words. It has become a resource used by research scientists all over the world in the attempt to give computers vision.

To teach the computer to recognise images, Ms Li and her team used neural networks: computer programs assembled from artificial brain cells that learn and behave in ways loosely analogous to human brains.
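The supervised recipe this describes can be sketched, very roughly, as follows: a small two-layer network in plain NumPy adjusts its weights whenever its guess disagrees with the human-provided label of a tiny synthetic "image". The data, network size and training details here are placeholders; ImageNet-scale vision uses deep convolutional networks trained on millions of real photographs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic "labelled images": 8x8 grids, class 1 images are noticeably brighter.
n, d, hidden, classes = 400, 64, 16, 2
labels = rng.integers(0, 2, size=n)
images = rng.random((n, d)) + labels[:, None] * 0.5

# Two-layer network trained with gradient descent on a cross-entropy loss.
W1 = rng.normal(scale=0.1, size=(d, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, classes))
for _ in range(300):
    h = np.maximum(images @ W1, 0.0)                    # hidden layer with ReLU
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                   # softmax class probabilities
    grad = p.copy()
    grad[np.arange(n), labels] -= 1.0                   # gradient of the cross-entropy loss
    dW2 = h.T @ grad / n
    dW1 = images.T @ ((grad @ W2.T) * (h > 0)) / n
    W2 -= 0.5 * dW2
    W1 -= 0.5 * dW1

print("training accuracy:", (p.argmax(axis=1) == labels).mean())
```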

At Stanford, the image-reading machine now writes accurate captions for a whole range of images, though it still gets things wrong: an image of a baby holding a toothbrush, for instance, was wrongly labelled as `a young boy is holding a baseball bat'.

For now, machines are learning rather than thinking, and whether a machine could ever be programmed to think remains doubtful, given that the nature of human thought has eluded scientists and philosophers for ages.