
Sunday 30 July 2017

Neural Networks Model Audience Reactions to Movies


Deep learning software models audience reactions to movies

Blockbusters and tentpole movies have become mega events, not just for fans but also for the studios. A huge amount of money is at stake when a movie is released, yet for some time now movies have been failing to deliver the results studio executives expect.

Engineers at Disney Research have developed new deep learning software that uses neural networks to map and assess viewers' facial expressions during movies. The software is the result of a collaboration between Disney Research and researchers from Caltech and Simon Fraser University.

This software aims to arm studios with knowledge of how movies are going to perform at the box office, using a newly developed algorithm called factorized variational autoencoders (FVAEs).

How does it work?

The software uses deep learning to translate images of highly complex objects automatically. These objects can be anything from human faces to forests, trees, and moving objects; the software essentially turns their images into sets of numbers through a process called encoding, producing what is known as a latent representation.
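
As a rough illustration of this encoding step, here is a toy autoencoder in Python (PyTorch) that compresses a small face crop into a short latent vector. The architecture, image size, and latent dimension are invented for illustration and are not taken from the Disney Research paper.

    # Toy autoencoder: compress a 64x64 grayscale face crop into a
    # 16-number latent vector, then reconstruct it. Sizes are illustrative.
    import torch
    import torch.nn as nn

    class ToyAutoencoder(nn.Module):
        def __init__(self, latent_dim=16):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 8, 4, stride=2, padding=1),   # 64x64 -> 32x32
                nn.ReLU(),
                nn.Conv2d(8, 16, 4, stride=2, padding=1),  # 32x32 -> 16x16
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(16 * 16 * 16, latent_dim),       # the latent representation
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 16 * 16 * 16),
                nn.Unflatten(1, (16, 16, 16)),
                nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1),
                nn.ReLU(),
                nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            z = self.encoder(x)            # numerical "summary" of the face
            return self.decoder(z), z

    face_batch = torch.rand(32, 1, 64, 64)  # a batch of 32 face crops
    reconstruction, latent = ToyAutoencoder()(face_batch)

The key point is the bottleneck: once each face image is reduced to a handful of numbers, comparing millions of frames across an audience becomes tractable.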

In this way, the researchers were able to understand how humans react to movies, for example by gauging how much viewers were smiling or how worried they looked in a particular scene. In the next stage, the neural networks are fed metadata that helps build a better understanding of audience responses.

The researchers are set to showcase their findings at the upcoming IEEE Conference on Computer Vision and Pattern Recognition in July.

Future prospects and applications of the new software

The research team has tested the software extensively to see how well neural networks can unlock how humans perceive movies in real life. The software was applied across more than 150 showings of nine blockbusters, ranging from The Jungle Book and Big Hero 6 to Star Wars: The Force Awakens.

In a 400-seat cinema, the researchers used four infrared cameras to capture the audience's facial reactions in the dark. These tests produced striking findings, drawn from some 16 million individual facial images captured by the cameras.


The lead researcher stressed that the amount of data collected is far too much for a person to comprehend alone. The FVAEs distilled this flood of data into patterns and surfaced some of the researchers' most valuable findings: they helped show how audiences reacted to certain scenes and how filmmaking can be refined to strike a chord with the audience.
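
The "factorized" part of the name hints at how this distillation works: roughly, the reaction of a given viewer at a given moment is explained by combining a per-viewer factor (how expressive that person is) with a per-moment factor (what the scene tends to evoke). The toy numpy sketch below shows only that factorization idea; all names and dimensions are made up and the real FVAE model is considerably more involved.

    # Toy sketch of the factorization idea: the latent face code of viewer i
    # at time t is modeled as a combination of per-viewer and per-time
    # factors, so a few reaction patterns can explain millions of frames.
    import numpy as np

    rng = np.random.default_rng(0)
    n_viewers, n_times, k, latent_dim = 400, 1000, 5, 16

    U = rng.normal(size=(n_viewers, k))            # each viewer's weight on each pattern
    V = rng.normal(size=(n_times, k, latent_dim))  # each pattern's latent code per frame

    # Predicted latent code for every (viewer, time) pair:
    # z[i, t] = U[i] @ V[t]  -> shape (n_viewers, n_times, latent_dim)
    z = np.einsum('ik,tkd->itd', U, V)

A useful side effect of such a model is prediction: once a viewer's per-viewer factor is estimated from a few minutes of footage, their likely reactions to the rest of the movie can be filled in.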

The software will not be limited to studying audience reactions to movies; it could also be applied to analyzing very different subjects, such as forests, where it could reveal how trees respond to changing climatic and environmental conditions. Those findings could later be used to create precise animated simulations of the flora around us.

Thursday 22 June 2017

Deep Learning With Coherent Nanophotonic Circuits

Light processor recognizes vowels

Nanophotonic modules form the basis for artificial neural networks with extremely high computing power and low energy requirements

Supercomputers are approaching enormous computing power of up to 200 petaflops, i.e., 200 million billion operations per second. Nevertheless, they lag far behind the efficiency of the human brain, mainly because of their high energy requirements.

A processor based on nanophotonic modules now provides the basis for extremely fast and economical artificial neural networks. As the American developers report in the journal Nature Photonics, their prototype was able to carry out computing operations at a rate of more than 100 gigahertz using light pulses alone.

"We have created the essential building block for an optical neural network, but not yet a complete system," says Yichen Shen, from the Massachusetts Institute of Technology, Cambridge. The nanophotonic processor developed by Shen, together with his colleagues, consists of 56 interferometers, in which light waves interact and form interfering patterns after mutual interference.

These modules can measure the phase of a light wave, between wave peak and wave trough, but can also be used to change this phase in a targeted way. In the prototype processor, these interferometers, each of which corresponds in principle to a neuron in a neural network, were arranged in a cascade.
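
To see why a phase-shifting interferometer can act like a "neuron", consider the standard textbook model of a Mach-Zehnder interferometer: two 50:50 beam splitters and two programmable phase shifters together apply a 2x2 unitary matrix to a pair of optical amplitudes, and cascading many of them builds up larger matrix multiplications. The numpy sketch below uses one common convention; it is an idealization, not the paper's exact device model.

    # Idealized Mach-Zehnder interferometer: a programmable 2x2 unitary
    # acting on two optical amplitudes.
    import numpy as np

    def mzi(theta, phi):
        """2x2 transfer matrix: beam splitter, phase shift, beam splitter, phase shift."""
        bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter
        inner = np.diag([np.exp(1j * theta), 1.0])       # internal phase shifter
        outer = np.diag([np.exp(1j * phi), 1.0])         # external phase shifter
        return outer @ bs @ inner @ bs

    amplitudes = np.array([1.0 + 0j, 0.0 + 0j])          # light enters one arm
    out = mzi(theta=0.7, phi=1.2) @ amplitudes
    print(np.abs(out) ** 2)                              # detected powers sum to 1

Because the transfer matrix is unitary, no optical power is lost in the ideal case; tuning theta and phi redistributes it between the two outputs, which is exactly the kind of weighted mixing a neural network layer needs.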

Having first simulated their concept with elaborate models, the researchers also tested it in practice with an algorithm for recognizing vowels. The principle of the photonic processor: a spoken vowel unknown to the system is encoded as a laser light signal with a specific wavelength and amplitude. When fed into the interferometer cascade, this signal interacts with additional laser pulses fed in alongside it, and a different interference pattern is produced in each interferometer.
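
A hedged sketch of that recognition pass, in Python: the encoded vowel (a vector of optical amplitudes) propagates through the interferometer cascade, here idealized as a single fixed unitary matrix; photodetectors then read out the output powers, and an analysis step picks the most likely vowel. The random matrix and the label mapping are placeholders, standing in for a trained mesh.

    # Simulated forward pass of an optical classifier on 5 modes.
    import numpy as np

    rng = np.random.default_rng(1)
    vowels = ['a', 'e', 'i', 'o', 'u']

    # Stand-in for a trained interferometer mesh: a fixed 5x5 unitary
    mesh, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))

    def classify(input_amplitudes):
        output = mesh @ input_amplitudes        # light propagates through the cascade
        powers = np.abs(output) ** 2            # photodetectors see intensities only
        return vowels[int(np.argmax(powers))]   # analysis program maps power to a vowel

    print(classify(rng.normal(size=5) + 0j))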

At the end of this extremely fast process, the resulting light signal is detected with a sensitive photodetector and mapped back to a vowel by an analysis program. In these tests, the purely optical system correctly identified the sound in 138 of 180 test runs, a hit rate of about 77 percent. For comparison, the researchers also ran the recognition task on a conventional electronic computer, which achieved a slightly higher hit rate.

This system is still a long way from a photonic computer capable of extremely fast speech recognition or of solving even more complex problems. But Shen and his colleagues believe it is possible to build artificial neural networks of about 1,000 neurons from their nanophotonic building blocks.

Compared with the electronic circuits of conventional computers, the energy requirement could be reduced by up to two orders of magnitude. This makes the approach one of the most promising candidates for eventually rivaling the efficiency of living brains.