Showing posts with label technology. Show all posts

Friday 28 July 2017

Google Blocks Lets You Make Gorgeous Low-Poly VR Art

Google Blocks Latest Virtual Reality App


Google's latest virtual reality app, Google Blocks, lets users make colourful 3D models in virtual reality (VR) and has recently been made available for free on the Oculus Rift and HTC Vive. It is meant to be intuitive enough for beginners, yet full-featured enough to produce artistic models like the ones Google has gathered in an online gallery. Users can export objects and view them online, or place them in 3D scenes both inside and outside virtual reality. On Google's site, visitors can also spin objects around to generate animated GIFs that can be downloaded.

Blocks is the latest of many design tools that work inside VR. It is along the same lines as Oculus' sculpting app Medium and complements Google's popular Tilt Brush 3D painting tool. You can even use the two together, since art can be exported from both Tilt Brush and Blocks. The art styles, however, look very different: Tilt Brush gives the illusion of sculpting with paint and light, whereas Blocks lets you create low-poly art in the colourful style Google uses in its Daydream VR interface.

Sadly for Google Daydream users, Blocks is at present restricted to high-end headsets, which have sophisticated hand controllers and let you move around your creations. This may not always be the case, as Google is now bringing out all-in-one Daydream headsets that could compete with the Rift's and Vive's feature set.




Virtual reality creators envisage a future in which users can build fantastic environments filled with beautiful objects. But those objects have to be created by somebody, and that has required a working knowledge of 3D modelling software. With its design tool Blocks, Google seems to have found a solution to this problem.

With Blocks, Google aims to give users the freedom to create, share and modify 3D objects in VR promptly and without hassle. At present, building a 3D object is so time-consuming that for most people it is close to impossible.

What the Google Blocks creators have done is set aside textures and lighting, relying on just the most fundamental colours and shapes to see how far one can get. The Blocks user interface depends on the motion-sensing controllers of the HTC Vive or Oculus Rift, so it is unfortunately not available to mobile users at present.

Blocks may appear less capable than Oculus' own powerful VR sculpting app, Medium, but its minimalism makes it more user-friendly. Even intricately detailed objects keep the low-poly aesthetic, which maintains both visual consistency and performance.

A lower polygon count lets 3D objects created in Blocks run not only on powerful VR headsets such as the Vive and Rift but also on low-cost, phone-based setups like Google's Daydream View or Samsung's Gear VR. Google Blocks is therefore a fun tool to use and, like Tilt Brush, a brilliant introduction to VR.

Tuesday 25 July 2017

Teachers 'Google' Tech Solutions

Google teaches teachers
Image Credit: The Valdosta Daily Times

Google starts teaching tech to teachers

Digital education is the key to success! Google wants to build a solid foundation for its worldwide education initiative and has started by teaching teachers, with plans to provide permanent "Future Workshops" in many countries, including several in Europe. As a first step, it trained 250 teachers throughout Lowndes County and Valdosta. They received three days of training covering Google Apps, and the tech giant awarded certificates to 60 teachers who performed well.

A Google partner named AppsEvents brought certified trainers from across the United States to teach the teachers to make use of Google's free software suite, whose speciality is offering a more interactive classroom experience for students. The teachers were trained to use Google Classroom, YouTube playlists and Google Forms.

Allison Mollica, USA Director of AppsEvents, said that nearly 60 million students and teachers have been trained to use Google Apps for Education. She added that students with expertise in coding will have no trouble finding a job. The students who volunteered at the event (Brandon Booker, Cameron Jackson, Carlos Torres, Benny Zhang and Samuel Sandwell) were rewarded with a free impromptu training session in coding smartphone apps.

Google wants to further digital education in many countries, and news has spread around the net that it has opened its first permanent training centre in Munich. Together with its partners, the company is now offering free training courses on numerous digital topics. Digital education is the key to making everyone fit for change and keeping people internationally competitive. AppsEvents Director Allison Mollica said, "We also see ourselves responsible and want to be part of the solution."

 

Google wants to reach 2 billion people


In the future workshops, important digital skills will be taught to professionals and non-professionals alike. The programme includes both learning content for occupational benefit and a range of courses for schools. Similar long-term future workshops are planned in many countries, with training scheduled to take place in all the federal states. By 2020, Google plans to reach a total of two billion people.

For further vocational training, Google is working with volunteer partners, who also integrate the programme into their own initiatives. In addition to workshops on online marketing and web analysis, special courses are planned for non-profit organisations as well as for journalists. The future workshop is an ideal complement to meet the enormous need for know-how in companies.

The Calliope mini, a microcontroller board developed especially to teach programming from the third grade up, is also part of the offer for students. The hand-held device, shaped like a six-pointed star, grew out of an initiative sponsored by the Ministry of Economic Affairs. Equipped with a number of sensors, the Calliope mini can be programmed on a PC or via an app. According to its own figures, Google has already supported the project with 1.1 million euros.

Friday 21 July 2017

Google Antes Up Its Own Cloud Migration Appliance

Google Cloud

Google is bringing its own data transfer appliance for cloud migration


When we talk about cloud migration, the toughest challenge to overcome is ensuring a reliable and consistent migration methodology. Moving databases and data centres is not an easy game even for proficient administrators. Almost all major companies and new start-ups face this when trying to build a new application or make use of data residing in the cloud. Migrating data between two points is always seen as a tough nut to crack because of high costs and huge time consumption, and this is where cloud vendors come into play.

Remedy for data to cloud migration problem


The problem faced in cloud migration is considerable even with modern technologies at our disposal. With a 10 Gbps connection, transferring a petabyte of data from a data centre to the cloud can consume as many as 12 days. In the golden old days, companies used a "sneakernet in the sky": a pile of data was loaded onto a secure disk and shipped off to the cloud vendor. Microsoft Azure made use of this system for quite some time before a new offering called Snowball came into being, a method also used by Amazon AWS. This doesn't mean that shipping secure disks to the cloud vendor has become obsolete.
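As a rough sanity check on the 12-day figure, the ideal transfer time over a fully saturated 10 Gbps link can be computed directly. This is a back-of-the-envelope sketch: real links rarely sustain full line rate, and protocol overhead is what stretches the result towards the 12 days quoted above.

```python
# Back-of-the-envelope transfer-time estimate.
# Assumes 1 PB = 10**15 bytes and a perfectly saturated link;
# real-world overhead pushes the figure higher.

def transfer_days(data_bytes: float, link_gbps: float) -> float:
    """Ideal transfer time in days for `data_bytes` over a `link_gbps` link."""
    bits = data_bytes * 8
    seconds = bits / (link_gbps * 1e9)
    return seconds / 86_400  # seconds per day

days = transfer_days(1e15, 10)
print(f"{days:.1f} days")  # roughly 9.3 days at 100% link efficiency
```

Even under ideal assumptions the transfer takes over nine days, which makes the appeal of shipping physical appliances obvious.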

Google is moving into the enterprise cloud migration business


The tech giant is getting into the enterprise cloud business in earnest, and quite specifically into the migration appliance space. In this end of the business, the customer experience is very similar to an online shopping portal: the user goes online, orders the device, and the device is made available for a definite period of time, after which it has to be sent back to the provider.

The cloud migration market is buzzing with orders in the one-petabyte category, so Google is introducing two models sized at 100 and 480 TBytes. Its competitor Amazon, on the other hand, tends to work at the lower end of the spectrum with three models of 50, 80 and 100 TBytes. Amazon is way ahead in the field of data transfer and has developed a number of solutions for high-end data migration, including Amazon Snowmobile, in which it brings a 12-wheeler, 45-foot container for transferring 100 PBytes of data.

Pricing will be a key to grab a piece of the cloud migration market


Google is mainly focused on cornering the market for petabyte-scale cloud migration. It has therefore priced its smaller 100 TByte unit on par with Snowball, and to make the migration appliance more appealing to users it is keeping pricing about 35% below rival offerings. When it comes to design, Google is going for a plug-in based form factor, while Amazon ships self-standing units.

Google has revealed many details of its devices, service offerings, capabilities and benefits, and it seems to be eagerly looking forward to giving stiff competition to the market leaders, namely Amazon and Microsoft Azure.

Thursday 13 July 2017

Someone Made a Working Six Speed Gearbox Out of Lego

Lego and the Gear Box

Lego never fails to surprise people with its constant building creativity, and it is one of the most renowned names for excellence. Whenever we are reminded of those marvellous cars built from amazing plastic bricks, we are filled with exuberance. Lego never lets us down, and now it is back with a creation that takes our expectations to a different level.

Dgustafsson 13 is the newest development in the arena of Lego building. Among its features are a six-speed gearbox, a positive caster angle on the front axle, Scania parts and Scania cabs, bendy suspension parts on the rear axle, motorised parts, simply usable Technic parts and elevator parts.

Together, these parts make this machine a remarkable bus. There are several Lego buses around, but this one turns out to be the best of its time. Lego building has again been taken up a notch with the introduction of this exceptional machine, the most technologically advanced bus of all. This city bus is equipped with the best possible equipment to give its passengers safety and an unhindered journey.

The six-speed gearbox is what makes this city bus extraordinary; it makes the bus work almost like the real thing. This newly introduced transmission makes Dgustafsson 13 the most synchronized model of all time, with not even an inch of compromise in terms of gears, and the best version of a city bus available.

The motor associated with it is also among the best: a much-developed version of the motors available, which helps in stabilising the synchronization. Lego always has something new to offer its people.

The Dgustafsson 13 kit isn't an official one, but when you go into its detailing you become aware of Lego's technicalities and how far they have developed. The more you get to know Lego, the more updated the technology you are acquainted with. The technology Lego offers is not that difficult to build; all you need is dedication and the interest to take technology to a different level.

By looking at the Lego kit, you understand how the level of gearboxes can be revolutionized. It is not that difficult: a proper mindset with enhanced knowledge can make things happen. Once you see the Lego gearbox, you understand what it is. Lego is such a well-known name because it never fails to surprise people with updated versions.

Lego knows the art of taking machines to a different level, and the instructions it posts online help people develop such techniques to feed into their own machines. Lego is here to revolutionize the era of machines.



Wednesday 12 July 2017

iPhone 8 to ditch fingerprint sensor for face scanner, reports say

iPhone 8

iPhone 8 – Refurbished Security System

Apple's upcoming iPhone 8 is expected to feature a revamped security system in which users can unlock the device with their face instead of their fingerprint. The 10th-anniversary iPhone is expected to have a radical redesign comprising a security system that scans users' faces in order to check who is using the device.

As per Bloomberg, the 3D scanning system would replace Touch ID as the means of verifying payments, logging in to apps and unlocking the phone. It could function at various angles, so the iPhone could be unlocked by merely looking at it, whether it is lying flat on a table or held upright. The scanning system has reportedly been designed for speed and precision and can scan the user's face and unlock the device within a few hundred milliseconds.

Since it analyses 3D rather than 2D images, it is likely to be capable of differentiating between a person's face and a photograph of that person. Apple could also utilise eye-scanning technology, presently available in Samsung's Galaxy S8, to further strengthen the security of the device.

Face Scanning Technology

Bloomberg reported that the face-scanning technology could be more secure than Touch ID, first released in 2013 on the iPhone 5S, since it draws on more identifiers. Apple has claimed that its fingerprint scanner has only a 1 in 50,000 chance of being unlocked by a stranger's fingerprint. According to Ming-Chi Kuo, an analyst with a reliable track record, the iPhone 8 will feature an edge-to-edge OLED screen with a higher screen-to-body ratio than any smartphone available at the moment.
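To put the 1-in-50,000 figure in perspective, a hypothetical back-of-the-envelope calculation (assuming each stranger's attempt is independent, which is a simplification, not Apple's own model) shows how the odds accumulate over many attempts:

```python
# Illustrative arithmetic on the stated 1-in-50,000 false-accept rate.
# Treats each stranger's attempt as independent (an assumption), so the
# chance that at least one of N strangers unlocks the phone is
# 1 - (1 - 1/50000)**N.

def false_accept_prob(attempts: int, rate: float = 1 / 50_000) -> float:
    """Probability of at least one false accept over `attempts` tries."""
    return 1 - (1 - rate) ** attempts

print(f"{false_accept_prob(1):.6f}")     # 0.000020 for a single try
print(f"{false_accept_prob(1000):.4f}")  # about 0.0198 after 1,000 tries
```

A rate that sounds negligible per attempt grows to roughly 2% across a thousand attempts, which is why drawing on more identifiers matters.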

Apple would probably remove the Home button as well as the Touch ID scanner in order to make room for the display. Kuo has also predicted that Apple will release three new phones in September, namely the iPhone 8, iPhone 7S and iPhone 7S Plus. The iPhone 8 would feature the most vivid redesign of the three, with a 5.2-inch screen packed into a device the same size as the iPhone 7. It would also come in fewer colour options and will be available with a glass front, with steel edges towards the back.

New Chip Dedicated to Processing Artificial Intelligence

John Gruber, a well-connected Apple blogger, has mentioned that the top iPhone could be named the 'iPhone Pro', suggesting that the cost could be $1,500 or higher. The remaining two devices would feature LCD screens and come in 4.7-inch and 5.5-inch sizes. Like the present iPhone 7, these devices would probably retain a Home button together with Touch ID.

If Kuo's predictions are accurate, all three phones would have a Lightning port together with an embedded USB-C, and come with 64GB or 256GB of storage. Moreover, they would ship with a new chip dedicated to processing artificial intelligence, which is presently being tested.

Tuesday 11 July 2017

Can You Hear Me Now?

 

The Lombard Effect: a Split-Second Act from Ear to Brain

Humans and animals alike raise their voices when trying to be heard over noise, a split-second act from the ear to the brain. Researchers from Johns Hopkins University are the first to measure how fast it occurs in bats: 30 milliseconds, 10 times quicker than the blink of an eye and a record for audio-vocal response.
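The "10 times quicker than a blink" comparison can be checked against the figures quoted later in the article, which put a blink at roughly a third of a second:

```python
# Sanity-checking the comparison: 30 ms bat response vs. a ~333 ms blink.
# Figures taken from the article; the blink duration is the rough
# "one third of a second" stated by the researchers.

bat_response_ms = 30
blink_ms = 1000 / 3  # about 333 ms

ratio = blink_ms / bat_response_ms
print(f"{ratio:.1f}x")  # about 11.1x, i.e. roughly 10 times faster
```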

Because this deed, known as the Lombard effect, takes place so fast, the researchers were able to solve a long-standing mystery concerning the neural mechanism behind it. In a paper recently published in the journal Proceedings of the National Academy of Sciences, they concluded that it is likely a fundamental reflex rather than the deeper, thinking behaviour presumed earlier, which would take longer to process.

The discoveries, shedding light on the foundations of human speech control, also disclose how species as diverse as fish, frogs, birds and humans share the ability to be heard over the fray. Co-author Ninad Kothari, a graduate student in Psychological and Brain Sciences at Johns Hopkins, stated that scientists have been speculating for a century whether a common auditory process could explain how this phenomenon occurs in species with wildly different hearing systems, from fish to frogs to birds to humans, and that they have now resolved this question.

Lombard Effect

The new findings could lead to improved treatment for diseases in which the Lombard effect is intensified, such as Parkinson's disease, and support the building of assistive medical devices. The researchers carried out their studies on bats, animals that depend on sonar-like echolocation, emitting sounds and listening for the echoes in order to sense, track and catch prey.

In contrast to humans, whose vocalizations are reasonably long and slow, bats are perfect for such a sensorimotor study: their high-frequency chirps, unnoticeable to the human ear, are quick and accurate, enabling the researchers to test the limits of a mammalian brain.

The team trained big brown bats to stay balanced on a platform while tracking an insect moving towards them on a tether. While a bat hunted the insect, the researchers recorded its vocalizations with an array of 14 microphones. Sometimes they let the bat hunt in silence; at other times they played bursts of interfering white noise at different intensities from a speaker placed in front of the bat.

Brain Monitors Background Noise Continuously

The white noise interfered with the bat's echolocation and caused it to emit louder and louder chirps, not unlike two neighbours attempting to hold a conversation, first over a loud radio, then over the clamour of a lawn mower, and then over the blast of a passing fire engine.

When the noise stopped, the bat would stop shouting, so to speak, and vocalize at a more usual level. The researchers, who were able to create a computational model for the Lombard effect applicable to all vertebrates, reached the conclusion that the brain of a bat, a person or a fish continuously monitors background noise and adjusts vocal levels whenever the need arises.

First, the auditory system notices the background noise. It then measures the sound pressure level and adjusts the vocalization amplitude to compensate; when the background noise stops, the sound pressure level dissipates and the vocalization level falls with it.
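The feedback loop described above can be sketched as a toy model. The gain constant and amplitude units here are illustrative assumptions, not values from the study:

```python
# Minimal sketch of the Lombard feedback loop: monitor background noise,
# raise vocal amplitude in proportion, and relax back to baseline when
# the noise stops. Gain and levels are illustrative assumptions only.

BASELINE = 1.0  # arbitrary resting vocalization amplitude
GAIN = 0.5      # how strongly noise drives compensation (assumed)

def vocal_amplitude(noise_level: float) -> float:
    """Amplitude after the reflex compensates for `noise_level`."""
    return BASELINE + GAIN * noise_level

quiet = vocal_amplitude(0.0)  # no noise: back to baseline
noisy = vocal_amplitude(4.0)  # loud noise: louder chirps
print(quiet, noisy)           # 1.0 3.0
```

The point of the model is the shape of the response, not the numbers: output tracks input noise with a fixed gain, which is what makes the behaviour look like a reflex rather than deliberation.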

Connection Between Hearing & Vocalizations

The authors observed that this entire intricate process takes place in only 30 milliseconds; in terms of near-instantaneous brain reactions, they call the reflex 'remarkably short'. Lead author Jinhong Luo, a Johns Hopkins postdoctoral fellow, stated: 'Typically, we breathe every three to five seconds, our heart beats once per second and eye blinking takes one third of a second.

If we believe that eye blinking is fast, the speed at which an echolocating bat responds to ambient noise is truly shocking: 10 times quicker than we blink our eyes.' Scientists had believed that the Lombard effect was much slower, around 150 to 175 milliseconds in humans.

Cynthia Moss, a Johns Hopkins professor of Psychological and Brain Sciences and Neuroscience and a co-author, commented that their study establishes echolocating bats as valuable animal models for understanding connections between hearing and vocalizations, including speech control in humans.

The research has been supported by the National Science Foundation (IOS-1010193 and IOS-1460149), the Human Frontiers Science Program (RGP0040 and LT000279/2016-L), the Office of Naval Research (N00014-12-1-0339) and the Air Force Office of Scientific Research (FA9550-14-1-0398).

Monday 10 July 2017

Watching Cities Grow



High-Resolution Civilian Radar Satellite

Major cities around the world have been growing, and as per United Nations estimates, half of the world's population presently lives in cities. By 2050, the figure is expected to climb to two thirds of the world's population.

Xiaoxiang Zhu, Professor for Signal Processing in Earth Observation at TUM, has explained that this growth places high demands on building and infrastructure safety, since destruction events could threaten thousands of human lives at once. Zhu and her team have established a method for early detection of probable dangers; subterranean subsidence, for instance, could cause the collapse of buildings, bridges, tunnels or even dams.

The new system makes it possible to detect and visualize changes as small as one millimetre per year. Data for the latest urban images comes from the German TerraSAR-X satellite, one of the highest-resolution civilian radar satellites in the world. Since 2007, the satellite, circling the earth at an altitude of approximately 500 kilometres, has been sending microwave pulses to the earth and collecting their echoes. Zhu explained that at first these measurements yielded only a two-dimensional image with a resolution of one metre.

Generate Highly Accurate Four-Dimensional City Model

The TUM professor worked in partnership with the German Aerospace Center (DLR), where she was also in charge of her own working team. The DLR is responsible for the operation and use of the satellite for scientific purposes.

The resolution of the images is restricted by the fact that reflections from various objects at an equivalent distance from the satellite lay over each other, an effect that reduces the three-dimensional world to a two-dimensional image. Zhu not only created her own algorithm that makes it possible to reconstruct the third and even a fourth dimension, but set a world record at the same time.

Four-dimensional point clouds with a density of three million points per square kilometre have been reconstructed. This rich recovered information has made it possible to generate highly accurate four-dimensional city models.
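As a quick illustration of what that density means on the ground (assuming the points were spread roughly uniformly, which is a simplification), three million points per square kilometre puts neighbouring points well under a metre apart:

```python
# What three million points per square kilometre implies for average
# point spacing, assuming a roughly uniform grid (a simplification).
import math

points_per_km2 = 3_000_000
area_m2 = 1_000_000  # one square kilometre in square metres

spacing_m = math.sqrt(area_m2 / points_per_km2)
print(f"{spacing_m:.2f} m")  # about 0.58 m between neighbouring points
```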

Radar Measurements to Reconstruct Urban Infrastructure

The trick was that the scientists utilised images taken from slightly different viewpoints. Every eleven days, the satellite flies over the region of interest, but its orbit position is not always precisely the same. The researchers exploit these orbital variations of up to 250 metres in radar tomography to localize each point in three-dimensional space.

This system uses the same principle as computed tomography, which develops a three-dimensional view of the interior of the human body: various radar images taken from different viewpoints are combined to create a three-dimensional image. Zhu states that since this yields only poor resolution in the third dimension, an additional compressive sensing method is applied, which improves the resolution by a factor of 15.

Using the radar measurements from TerraSAR-X, the scientists can reconstruct urban infrastructure on the surface of the earth with great accuracy, for instance the 3D shape of individual buildings. This system has already been used to generate highly precise 3D models of Berlin, Paris, Las Vegas and Washington DC.

The Secret to a Perfect Selfie

PA

Trailblazing Self-Portrait – Over £6 Million

Though selfies are a staple of our technology-obsessed generation, they do not always seem creative. Andy Warhol took what may have been some of the most famous selfies in the world, showing that the artist was far ahead of his time.

In an article for The Conversation, Tom van Laer, a Senior Lecturer in Marketing at City, University of London, and Stefania Farace, a PhD candidate in Marketing at Maastricht University, studied Warhol's famous photos and revealed the three simple rules of the perfect selfie for social media. In 1963, Andy Warhol walked into a New York photobooth and took what may have been the most famous selfies in the world.

One of these trailblazing self-portraits has been sold for just over £6 million. The selfies fit effortlessly into Warhol's vision of the pop art era of the late 1950s and 1960s: typically all-American, mechanical and democratic. Although photobooth images did not go viral the way social media images do now, using a photobooth to make art was fiercely innovative in 1963, and it added to the aura of technical invention that surrounded Warhol, much as it surrounds selfies and social media today.

Selfies – Holy Grail of Social Media


Selfies are the holy grail of social media: a kind of self-portrait posted on social networking sites with details designed to engage a large audience. The latest study reveals three things that can help a user take images worth, if not millions of pounds, then at least a thousand words, without having to risk their life for them.

The team conducted three studies: online experiments with workers from Amazon Mechanical Turk, which crowdsources expertise in a range of fields; one with students on computers in the university laboratory; and one corpus analysis, a method of examining a body of evidence jointly with independent coders. To define precisely what people engage with when they view images online, the participants were shown various images.

These images were rated on various photographic elements: point of view, content, artsiness and the like. Participants also specified how likely they were to comment on the images if they saw them on social media. These studies made it possible to isolate the things that stop people caring about an online image, and to locate the images that would engage them.

Advice for Enthusiastic Selfie Artists

The studies also helped determine the types of images people are likely to comment on. There are three things every enthusiastic selfie artist should be aware of:

1. People favour you before the camera

Point of view (POV) in photography is a question of who people 'see' taking the image. The basic difference is one of 'person', of which there are two principal types: third person (Warhol taking an image of Marilyn Monroe, for instance) and first person (Warhol's selfie).

In Warhol's time, most photographs were taken from a third-person point of view. This has changed, and the research finds little interest in third-person images in the social media age. Point of view contributes greatly to how individuals feel and think as they view images; depending on whether the point of view is from within or outside the image, people pick up different feelings and thoughts.

Warhol features far more in the pictured story of his selfie than in his famous image of Marilyn Monroe, and just as he is more involved in the story he conveys with his selfie, so viewers are statistically more likely to get involved with the content of selfies.

2. People get bored of just you

Ever since the portrait was invented, painters and photographers have had to prioritise either the person or the action. Many selfies are about nothing but their subject, though the research suggests this is a poor strategy for drawing attention: people are 15.14% more likely to comment on selfies of individuals doing something meaningful than on plain selfies. Selfie-takers have agency beyond merely being the subject of their own images and can do things like eating, drinking or waving their free hand. Warhol did something else: he appeared to be adjusting his tie.

3. Realistic images put people off 

Warhol's selfie was designed not to portray or depict the truth but to embrace the artifice and deception built into any kind of representation. If the creative flexibility between reality and image was wide in Warhol's photograph, it has become vast since photography arrived on social media, and this is essentially the point. Photographers who complain that selfies are a poor representation of reality overlook the fact that selfies were never meant to be a faithful representation of anything in the first place.

Research shows that leaving images unedited can end in failure; a variation can be silly or serious, unprofessional or professional, and so on. Modern photographers should marshal the full arsenal of techniques, from emoji, lenses and filters to tools such as selfie sticks, to turn the original into something artful. Edited selfies are superior in terms of engagement: people are 11.86% more likely to comment on adapted selfies.

As users become more sophisticated in their choice of images, it pays to be more people-centric and to think harder about the value an image provides the audience rather than just yourself. The outcome is a renovated selfie of someone doing something, an image that is worth a thousand words. In 1968, Warhol wrote that 'in the future everyone will be world-famous for 15 minutes', and that future is now.

Friday 7 July 2017

Hot Electrons Move Faster Than Expected

 Hot Electrons

Ultrafast Motion of Electrons


New research has pointed the way towards solid-state devices that utilise excited electrons. Engineers and scientists at Caltech have, for the first time, been able to directly observe the ultrafast motion of electrons immediately after they are excited by a laser. They observed that these electrons diffuse through their surroundings quickly and further than earlier anticipated.

This behaviour, called 'super-diffusion', had been hypothesized but never seen before. A team headed by Marco Bernardi of Caltech and the late Ahmed Zewail documented the motion of the electrons using a microscope that captured images with a shutter speed of a trillionth of a second at nanometre-scale spatial resolution. Their discoveries appeared in a study published on May 11 in Nature Communications.

The excited electrons displayed a diffusion rate 1,000 times higher than before excitation. Though the phenomenon lasted only a few hundred trillionths of a second, it raises the possibility of exploiting hot electrons in this fast regime to transport energy and charge in novel devices.

Bernardi, assistant professor of applied physics and materials science in Caltech's Division of Engineering and Applied Science, said their work revealed a fast transient, lasting a few hundred picoseconds, during which electrons move much faster than at their room-temperature speed, meaning they can cover longer distances in a given period of time when driven with lasers.
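The article does not give the formal criterion, but super-diffusion is conventionally defined by how the mean squared displacement of the carriers scales with time:

```latex
\langle x^2(t) \rangle \propto t^{\alpha},
\qquad
\begin{cases}
\alpha = 1, & \text{normal (Brownian) diffusion} \\
1 < \alpha < 2, & \text{super-diffusion} \\
\alpha = 2, & \text{ballistic transport}
\end{cases}
```

In this picture, the laser-excited hot electrons transiently spread with an exponent above one before relaxing back to ordinary diffusion.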

Ultrafast Imaging Technology


He further added that this non-equilibrium behaviour could be employed in novel electronic, optoelectronic and renewable-energy devices, as well as used to uncover new fundamental physics. Bernardi's colleague, Nobel laureate Ahmed Zewail, the Linus Pauling Professor of Chemistry, professor of physics, and director of the Physical Biology Centre for Ultrafast Science and Technology at Caltech, passed away on 2 August 2016.

The research was made possible by scanning ultrafast electron microscopy, an ultrafast imaging technique pioneered by Zewail that can create images with picosecond time resolution and nanometre spatial resolution. Bernardi developed the theory and computer models that explained the experimental results as a signature of super-diffusion.

Bernardi plans to continue the research by tackling fundamental questions about excited electrons, such as how they equilibrate among themselves and with atomic vibrations in materials, together with applied ones, such as how hot electrons could increase the efficiency of energy-conversion devices like solar cells and LEDs.

Super Diffusion of Excited Carriers in Semiconductors


The paper is entitled "Super Diffusion of Excited Carriers in Semiconductors". Co-authors include former Caltech postdoc Ebrahim Najafi, the paper's lead author, and former graduate student Vsevolod Ivanov. The research was supported by the National Science Foundation, the Air Force Office of Scientific Research, the Gordon and Betty Moore Foundation, and the Caltech-Gwangju Institute of Science and Technology (GIST) program.

Wednesday 5 July 2017

L2 vs. L3 cache: What’s the Difference?


The cache is a special buffer memory located between main memory and the processor.

Rather than fetching every program instruction individually from slow main memory, the processor loads a whole block of instructions or data into the cache. The probability that the subsequent instructions are already in the cache is then relatively high. Only when all cached instructions have been executed, or a jump leads to an address outside the cache, must the processor access main memory again. The cache should therefore be as large as possible, so that the processor can run instructions one after the other without waiting.

Typically, processors work with multi-level caches that differ in size and speed. The closer a cache level is to the computing core, the faster it works.

Inclusive cache and exclusive cache

The terms inclusive and exclusive cache arose with multicore processors. An inclusive cache means that data in the L1 cache is also present in the L2 and L3 caches. This makes data consistency between the cores easier to maintain. Compared to an exclusive cache, some storage capacity is given away, because the data is held redundantly in the caches of several CPU cores.

An exclusive cache is available to one processor core exclusively; the core does not have to share it with any other. The disadvantage is that several processor cores can then exchange data with one another only via a detour.

L1 cache / first-level cache

As a rule, the L1 cache is not particularly large; for reasons of chip area it is on the order of 16 to 64 KB. Usually, the memory areas for instructions and data are kept separate. The importance of the L1 cache grows with CPU speed.

The L1 cache buffers the most frequently used instructions and data, so that as few accesses as possible to slow main memory are required. It avoids delays in data transfer and helps keep the CPU optimally utilized.

L2 cache / second-level cache

The L2 cache buffers data from working memory (RAM).

Processor manufacturers serve different market segments with specially adapted processors by varying the size of the L2 cache. The choice between a processor with a higher clock speed and one with a larger L2 cache can be answered, in simplified form, as follows: with a higher clock, individual programs, especially computationally intensive ones, run faster; as soon as several programs run at the same time, a larger cache is the advantage. Typical desktop computers are usually better served by a processor with a large cache than by one with a high clock rate.

When the memory controller moved from the chipset into the processor, letting the processor access memory much faster, the importance of the L2 cache decreased. While L2 caches have shrunk, L3 caches have been substantially enlarged.

L3 cache / third-level cache

As a rule, multicore processors use an integrated L3 cache, which lets the cache coherency protocol of multicore processors work much faster. This protocol compares the caches of all cores to maintain data consistency. The L3 cache thus acts less as a conventional cache and more as a way to simplify and speed up cache coherency and data exchange between the cores.

Because modern processors contain several computing cores, manufacturers have added this third cache level to multi-core processors. All processor cores can use it together, which is particularly beneficial in parallel processing: data shared by different CPU cores can be retrieved from the fast L3 cache rather than always coming from slow main memory. In addition, the L3 cache facilitates data management across multiple CPU cores and caches (data coherency).
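The locality argument above can be glimpsed with a small, illustrative experiment. The effect is much weaker in Python than in compiled code, since interpreter overhead dominates, but the two access patterns below are exactly the ones caches reward or punish:

```python
import time

N = 1000
matrix = [[1] * N for _ in range(N)]  # row-major layout: matrix[i][j]

def sum_row_major(m):
    # Walks elements in layout order: consecutive accesses reuse cache lines
    total = 0
    for row in m:
        for value in row:
            total += value
    return total

def sum_col_major(m):
    # Jumps to a different row on every access: far worse locality
    total = 0
    for j in range(N):
        for i in range(N):
            total += m[i][j]
    return total

t0 = time.perf_counter()
rows = sum_row_major(matrix)
t1 = time.perf_counter()
cols = sum_col_major(matrix)
t2 = time.perf_counter()

assert rows == cols == N * N  # both orders compute the same sum
print(f"row-major {t1 - t0:.3f}s vs column-major {t2 - t1:.3f}s")
```

In a language like C, where each matrix row is a contiguous block of memory, the column-major version is typically several times slower on large matrices, precisely because it defeats the L1/L2/L3 hierarchy described above.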

Monday 3 July 2017

Peering Into Fish Brains to See How They Work


Transparent Fish – Work in the Dark


The main focus of the newest research group at the Kavli Institute for Systems Neuroscience is transparent fish, and the ability to work in the dark. One of the central challenges for neuroscientists who want to understand how the brain works is figuring out how the brain is wired together and how neurons interact.

NTNU neuroscientists and Nobel laureates May-Britt and Edvard Moser tackled this problem by learning how to record from individual neurons in the rat brain while the animals move freely in space. They used these recordings to make the discoveries that earned them the Nobel Prize.

They found that certain neurons in the entorhinal cortex fire in a pattern that forms a grid, which can be used for navigation like an internal GPS. Emre Yaksi, the newest team leader at the Kavli Institute for Systems Neuroscience, takes a different approach to the problem of seeing what goes on inside the brain.

Rather than studying rats or mice, Yaksi turned to roughly 90 different types of genetically modified zebra-fish, which he can breed to create fish with the characteristics he prefers.

Comprehending Universal Circuit Architectures in Brain


Young larval zebra-fish are completely transparent, so Yaksi needs only a standard optical microscope to see what happens inside their heads. Some of his fish carry a genetic modification that makes their neurons light up when they send a signal to another neuron, which, he explains, is what makes circuits and connections visible to researchers.

He commented that they are interested in understanding the universal circuit architectures in the brain that can perform interesting computations. Though fish are quite different from humans, their brains have similar structures; in the end, fish also have to find food and a mate, avoid danger, and build brain circuits that generate all these behaviours, just as humans do.

When Yaksi came to the Kavli Institute in early 2015 with a team of researchers, they brought a 900 kg anti-vibration table the size of a billiards table. Big and heavy as it was, the table was needed in the laboratory to damp vibrations while the highly sensitive optical microscopes peer into the brains of the zebra-fish.

Zebra-Fish Genetically Adapted


The larval fish are so small that a slight vibration from cars or trucks passing on the street could bounce the microscopes away from their miniature brain targets. Zebra-fish brains are tiny, around 10,000 to 20,000 neurons, a figure dwarfed by the human brain's estimated 80 billion.

Nevertheless, the measurements Yaksi and his colleagues make produce huge quantities of data. According to him, 30 minutes of recording can generate data that takes about a week to process. For this reason, Yaksi's research group is a multi-disciplinary team of engineers, physicists and life scientists trained to develop and use computational tools to analyse these huge datasets.

Since some of the zebra-fish are genetically adapted so that their neurons light up with a fluorescent protein when active, Yaksi and his colleagues frequently work in low light or darkness. This is particularly obvious when he takes visitors into the subdued darkness of the laboratory, where several of the fanciest microscopes are enclosed in boxes open toward the front, designed to limit the amount of external light.

Research – Causes of Seizures and How They Might Be Prevented


Yaksi explained that other zebra-fish are genetically modified so that shining a blue light into their brain activates certain neurons, enabling the researchers to map connections between neurons. The bulk of the group's work is basic research, with findings that improve our understanding of brain computation but have no immediate clinical implications.

However, Nathalie Jurisch-Yaksi, Yaksi's wife and colleague, is working with medical doctors to develop genetically modified zebra-fish that could help shed light on brain diseases such as epilepsy. According to Yaksi, most people in his lab are doing basic research, asking how the brain works, how it is connected, and how it is built.

Nonetheless, Nathalie is working at NTNU with medical doctors, trying to reach out to clinicians. For instance, he said, if a brain disorder like epilepsy has a genetic component, the same genetic mutation could be introduced in the transgenic zebra-fish facility, so that the team could study what causes seizures in a diseased brain and how they might be prevented.

Kavli Institute – Excellent Science Environment


The Kavli Institute happened to be on an institute-wide retreat when Yaksi came to Trondheim to interview for the position, so he had the opportunity to meet not just the group leaders but also technicians, master's students and PhD candidates, everyone. What impressed him most, he said, besides the excellent scientific environment, was that people were happy and satisfied with what they were doing; it was a good atmosphere.

Though the science was the most important part of his decision to move to Trondheim, he said he was excited to join the Kavli Institute because he and his wife wanted to live in a smaller town, close to nature.

Trondheim, he said, is a unique place where one can do really good science and still be close to nature, which was a big thing for him and his wife. Moving to London or another big city was never an option; they did not want to deal with big-city life. He recalled that when May-Britt Moser asked him during his interview what he knew about Scandinavia, his reply was that he did not know much, but that he and his wife loved being outdoors.

Saturday 1 July 2017

Plastic 12-Bit RFID Tag and Read-Out System With Screen-Printed Antenna

Quad Industries, Agfa, Imec and TNO recently announced that they have developed and verified a plastic 12-bit RFID security tag and read-out system with a screen-printed antenna. For the first time, the system combines a screen-printed antenna with a printed touch-based user interface, which allows the reader to operate on curved surfaces. The demonstrator was developed for badge-security applications, but the approach also shows promise for smart packaging, interactive games and wearables.

Compared to silicon (Si)-based identification devices, RFID tags made with plastic electronics have several advantages: they can be attached to curved packaging, effortlessly incorporated into everyday objects, and manufactured at low cost. Typical applications include item identification, smart food packaging, brand protection and badge security. A dedicated RFID reader, usually held within two centimetres of the tag, is needed to scan it. To exploit the advantages of plastic electronics fully, the antennas in both the tag and the reader should be flexible. Screen-printed antennas have been applied successfully on top of RFID tags, but read-out systems have generally used inflexible PCB-based antennas, primarily because printed antennas have relatively poor resistance and Q-factor.

Now, for the first time, Imec, Quad Industries and Agfa have integrated a screen-printed antenna into both devices, the RFID tag as well as the read-out system, allowing both to be applied to a diverse range of surfaces. Quad Industries screen-printed the antennas using printing inks from Agfa.

The new technology has been demonstrated in a badge-security application. The access badge integrates the printed antenna, which is the size of a credit card, with a plastic 12-bit RFID chip on a flexible plastic substrate. The RFID tag was manufactured with Imec's metal-oxide thin-film transistor (TFT) technology, which uses large-area manufacturing processes that make low-cost, large-scale production possible.

The read-out system includes printed functionality at several levels. First, the RFID read-out antenna is screen-printed on a plastic film, allowing optimal integration on flat, curved or 3D-shaped reading surfaces. Second, a fully printed touch-screen interface with a numerical keypad is placed between the cover lens and the display, allowing a user without a badge to enter the building by punching in a numerical code. This touch screen is printed with highly transparent screen-printable inks.

Recently developed nanoparticle-based Ag inks achieve lower resistances than conventional Ag-flake-based inks, which makes it possible to integrate new functionalities directly by screen printing. In addition, the antenna is printed at the same level as the printed touch screen, resulting in a direct, more economical combination of the printed antenna and the customized touch screen in the reader device.

The technology allows economical screen-printing manufacture, is easily customizable and eco-friendly, and permits direct chip integration on many substrates, including plastics and paper. It also holds promise for smart packaging, smart PCBs and smart gaming.

Sensor Solution: Sensor Boutique for Early Adopters

Every chemical substance absorbs a very individual fraction of infrared light. Much like a human fingerprint, this absorption can be used to recognise substances by optical methods.

To elaborate: when molecules absorb infrared radiation within a certain wavelength range, they are excited to a higher vibrational level, in which they rotate and vibrate in a distinctive, "fingerprint" pattern. These patterns can be used to identify specific chemical species. Such methods are used, for example, in the chemical industry, but also in the health sector and in criminal investigation. A company planning a new project often needs an individually tailored sensor solution.
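The quantitative basis for such measurements, not spelled out in the text but standard in absorption spectroscopy, is the Beer-Lambert law, which ties the measured absorbance at each wavelength to the substance's characteristic absorptivity:

```latex
A(\lambda) \;=\; -\log_{10}\frac{I(\lambda)}{I_0(\lambda)} \;=\; \varepsilon(\lambda)\, c\, \ell
```

Here \(I_0\) and \(I\) are the incident and transmitted intensities, \(\varepsilon(\lambda)\) is the wavelength-dependent molar absorptivity (the "fingerprint"), \(c\) the concentration, and \(\ell\) the optical path length. Scanning \(\lambda\) across the mid-infrared and comparing the resulting absorbance curve against reference spectra is what identifies the species.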

The EU-funded pilot line MIRPHAB (Mid InfraRed Photonics devices fABrication for chemical sensing and spectroscopic applications) supports companies searching for a suitable system and helps them develop sensor and measurement technology in the mid-infrared (MIR). The Fraunhofer Institute for Applied Solid State Physics IAF is a participant in this project.

Pilot line for ideal spectroscopy solutions


A company looking for a sensor solution has very individual needs, for example when it has to identify a particular substance in a production process. These range from the substances to be detected, through the number of sensors required, up to the speed of the production process. In most cases no one-size-fits-all solution suffices, and several suppliers are needed to develop the optimal individual solution. This is where MIRPHAB comes in and proves very useful.

Leading European research institutes and companies from the MIR field have joined forces to provide customers with tailor-made, best-suited offers from a single source. Interested parties contact a central contact person, who compiles the best possible solution from the MIRPHAB members' component portfolios according to a modular principle.

EU funding supports the development of individual MIR sensor solutions within the MIRPHAB framework, with the aim of strengthening European industry in the long run and extending its leading position in chemical analysis and sensor technology. The funding considerably lessens the investment costs and thereby lowers the entry barrier for companies into the MIR area.

Companies that previously faced high costs and development effort now find a high-quality MIR sensor solution attractive thanks to the virtual infrastructure MIRPHAB has created. MIRPHAB also gives companies access to the latest technologies, offering an early-adopter advantage over the competition.

Custom-made source for MIR lasers


The Freiburg-based Fraunhofer Institute for Applied Solid State Physics IAF, together with the Fraunhofer Institute for Photonic Microsystems IPMS in Dresden, provides a central component of the MIRPHAB sensor solution. The Fraunhofer IAF contributes quantum cascade lasers that emit laser light in the MIR range. In this type of laser, the emitted light is spectrally broad, and its wavelength range can be adapted as required during manufacturing. To select a particular wavelength within that broad spectral range, an optical diffraction grating picks out the wavelength, which is then coupled back into the laser chip; the wavelength can be tuned continuously by turning the grating. The grating is built at the Fraunhofer IPMS in miniaturized form using so-called Micro-Electro-Mechanical-System (MEMS) technology, which allows it to oscillate at frequencies up to one kilohertz. This in turn enables the laser source's wavelength to be tuned across a large spectral range up to a thousand times per second.
The Fraunhofer Institute for Production Technology IPT in Aachen is also involved in MIRPHAB, to make the manufacture of the lasers and gratings more efficient and to prepare them for pilot-series fabrication. With its expertise, it turns the production of the rapidly tunable MIR laser into industrially viable manufacturing processes.

Process exploration in actuality

Currently, many applications in spectroscopy still work in the visible or near-infrared range and use comparatively weak light sources. The solutions MIRPHAB provides are based on infrared semiconductor lasers, whose comparatively high light intensity opens up completely new applications. Up to 1,000 spectra per second can be recorded with the MIR laser source, which allows, for example, real-time automated monitoring and control of biotechnological processes and chemical reactions. MIRPHAB's contribution is therefore considered vital to the factory of the future.

Friday 30 June 2017

Can Artificial Intelligence Help Us Make More Human Decisions?


About 88 million pages of original handwritten documents spanning the past three and a half centuries line the tiled halls of a simple 16th-century trading house in the middle of Seville, Spain. They are stored there only partially transcribed, some of them almost indecipherable. A few were carried back on armadas from the Americas; a few have been scanned and digitised.

These documents hold the answers, and the context, for innumerable questions about the Conquistadors, European history, New World contact and colonialism, politics, law, economics and ancestry. Unfortunately, hardly any of these carefully kept pages have been read or interpreted since they were written and brought to Seville centuries ago, and most of them likely never will be.

All hope is not lost: a researcher from the Stevens Institute of Technology is trying to get computers to read these documents while they are still readable, before we run out of time. "What if there was a machine, or a software, that could transcribe all of the documents?" asks Fernando Perez-Cruz, a Stevens computer science professor.

Perez-Cruz, whose expertise lies in machine learning, also asks: "What if there was a way to teach a machine to group those 88 million pages and convert them into searchable text organised by topic? Then we can start understanding the themes in those documents and will know where to look in this storehouse for our answers." Perez-Cruz is working on both halves of this two-fold approach, which, if it works, could apply to many other emerging data-analysis problems, such as autonomous transport and the analysis of medical data.

Pricing on Amazon, medical study, text reading machines


Perez-Cruz, a veteran of Amazon, Bell Labs, Princeton University and University Carlos III of Madrid, has had a career full of interesting scientific challenges. He joined Stevens in 2016, adding to the growing strength of the university's computer science department; Stevens aims to build it into a strong research department, which in turn is drawing more talent and resources, a trend Perez-Cruz is using to his advantage. At Stevens he is working on what he calls "interpretable machine learning": systematized intelligence whose workings humans can still understand and act on.

For the historical-document problem, Perez-Cruz hopes to develop improved character-recognition engines. Using short excerpts of documents in varied handwriting styles that experts have already transcribed, he aims to teach software to recognise both the shapes of characters and the frequently correlated associations between letters and words, building a recognition engine that grows more accurate over time. The open question, he says, is how much data, how much transcribed handwriting, is sufficient to do this well. This work is still developing.
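As a toy illustration of the idea (not Perez-Cruz's actual engine; real systems train on scanned excerpts, and the 5×5 bitmaps below are invented for the example), even a nearest-neighbour classifier captures the "learn the forms of characters from labelled samples" step:

```python
# Tiny 5x5 bitmaps standing in for scanned letter images (hypothetical data).
# Adjacent string literals concatenate into one 25-character bitmap per letter.
GLYPHS = {
    "T": "11111"
         "00100"
         "00100"
         "00100"
         "00100",
    "L": "10000"
         "10000"
         "10000"
         "10000"
         "11111",
    "O": "11111"
         "10001"
         "10001"
         "10001"
         "11111",
}

def distance(a, b):
    # Hamming distance: number of pixels that differ between two bitmaps
    return sum(x != y for x, y in zip(a, b))

def classify(bitmap):
    # Nearest neighbour: the reference glyph with the fewest differing pixels
    return min(GLYPHS, key=lambda letter: distance(GLYPHS[letter], bitmap))

# A noisy "T" with two pixels flipped, as a stand-in for messy handwriting
noisy_t = ("11111"
           "00100"
           "01100"
           "00100"
           "00000")
print(classify(noisy_t))  # → T
```

Real handwriting recognition replaces the bitmaps with learned features and the Hamming distance with a trained model, and, as the article notes, also exploits correlations between neighbouring letters and words; the amount of labelled transcription needed is exactly the open question Perez-Cruz raises.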

Perez-Cruz believes that although this is a technical challenge, it is achievable. He is even more fascinated by the next part: organising large quantities of transcribed material into topics that can be browsed at a glance. Once the data is transcribed, he says, the machine should be able to surface information from these three and a half centuries of records immediately, learning from the positions of the words and sentences themselves. This is what he calls topic modelling.

A key link: Systematically grouping large data into easily accessible topics


After sufficient data has been fed into the algorithm, it begins to spot the most important identifying and organising patterns in the data; often, cues from human researchers are essential and are sought out. Perez-Cruz notes that we might eventually discover that a few hundred topics or descriptions run through the whole archive, and suddenly an 88-million-document problem has been scaled down to 200 or 300 ideas.
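A minimal sketch of topic modelling in this spirit: a tiny latent Dirichlet allocation sampler over an invented four-document "archive". The corpus, topic count and hyper-parameters are all illustrative, not from the project, but the mechanism, assigning each word token to a topic and iteratively re-sampling those assignments, is the standard collapsed Gibbs approach:

```python
import random
from collections import defaultdict

# Toy corpus with two obvious themes: seafaring and law (invented data)
docs = [
    "ship fleet voyage armada harbor".split(),
    "ship voyage harbor fleet sail".split(),
    "court law decree judge colony".split(),
    "law decree court judge tax".split(),
]
K = 2                 # number of topics to discover
ALPHA, BETA = 0.1, 0.01
vocab = {w for d in docs for w in d}
V = len(vocab)

random.seed(0)
# z[d][i] = current topic assignment of the i-th word of document d
z = [[random.randrange(K) for _ in d] for d in docs]
doc_topic = [[0] * K for _ in docs]            # topic counts per document
topic_word = [defaultdict(int) for _ in range(K)]  # word counts per topic
topic_total = [0] * K
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        t = z[d][i]
        doc_topic[d][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1

def sample(weights):
    # Draw an index proportionally to the (unnormalised) weights
    r = random.uniform(0, sum(weights))
    for t, w in enumerate(weights):
        r -= w
        if r <= 0:
            return t
    return len(weights) - 1

# Collapsed Gibbs sampling: repeatedly re-assign each token's topic
for _ in range(200):
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            doc_topic[d][t] -= 1; topic_word[t][w] -= 1; topic_total[t] -= 1
            weights = [
                (doc_topic[d][k] + ALPHA)
                * (topic_word[k][w] + BETA) / (topic_total[k] + V * BETA)
                for k in range(K)
            ]
            t = sample(weights)
            z[d][i] = t
            doc_topic[d][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1

# Dominant topic per document: the two themes should separate cleanly
dominant = [max(range(K), key=lambda k: doc_topic[d][k]) for d in range(len(docs))]
print(dominant)
```

Scaled up, this is how "88 million pages" could collapse into a few hundred recurring topics: the sampler discovers the themes, and each document is then summarised by its topic mixture rather than read in full.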

If algorithms can consolidate 88 million pages of text into a few hundred groups, historians and researchers gain a huge step in organisation and efficiency when choosing which documents, themes or time periods to search, review and analyse in a formerly unmanageable archive. The same approach could be used to find patterns, themes and hidden meaning in other vast, unread databases.

He concludes that one begins with a huge quantity of unorganised data, and that to understand what the data contains and how it can be used, some structure must be brought to it. Once the data is understood, one can read it in a targeted way, better understand which questions to ask of it, and draw better conclusions.

Wednesday 28 June 2017

Selfies: Selfie-Presentation in Everyday Life

Study – First Significant Experimental Research on Selfie

 
Georgia Institute of Technology researchers have combed through 2.5 million selfie posts on Instagram to better understand the photographic phenomenon: how people form their identities online, and what kinds of identity statements people make when taking and sharing selfies. When it comes to selfies, appearance is almost everything.

Almost 52% of all selfies fall into the appearance category: images of people showing off their make-up, clothes, lips and so on. Looks-related images are twice as popular as the other 14 categories combined. After appearance, social selfies with friends, loved ones and pets were the most common, at 14%.

Ethnicity images accounted for 13%, travel 7%, and health and fitness 5%. The researchers saw the prevalence of ethnicity selfies as an indication that people are proud of their backgrounds, and found that many selfies were solo pictures rather than group shots. The data was collected in the summer of 2015, and the Georgia Tech team believes the study is the first significant empirical research on selfies.
 

Selfie – An Identity Performance

 
Overall, an overwhelming 57% of Instagram selfies were posted by the 18-35 crowd, which, according to the researchers, is not too surprising given the platform's demographics.

The under-18 age group posted about 30% of selfies, while the older 35+ group shared them less often, around 13%. Appearance was the most popular category across all age groups. Lead author Julia Deeb-Swihart stated that selfies are an identity performance, meaning users carefully craft the way they appear online, and selfies are an extension of that.

Deeb-Swihart stated: "Just like on other social media channels, people are inclined to project an identity promoting their wealth, health and physical attractiveness. With selfies, we decide how to present ourselves to the audience, and the audience decides how it identifies you."

 

Type of Blending of Online/Offline Selves


This work is rooted in the theory offered by Erving Goffman in "The Presentation of Self in Everyday Life": the clothes we choose to wear and the social roles we play are all intended to control the version of ourselves we want our peers to see.

Deeb-Swihart commented that "selfies are a type of blending of our online and offline selves and a way to prove what is true in your life, or at least what one would want people to believe is true". The researchers gathered the data by searching for "#selfie", then used computer vision to confirm that the pictures really contained faces.

Almost half of them did not, and the researchers found plenty of spam with blank images or text: accounts using the hashtag to show up in additional searches and attract more followers.

Friday 23 June 2017

How to Create the Perfect App

App Streamlining Path to App Success


A lot of individuals price their app at the 99-cent price point by default, but that is not necessarily the best price for your app. Ninety-nine cents can be a good offer for a game that kids buy, since it is easy for them to convince their parents to spend a little money on a game. With utility apps, however, there is a sense that you get what you pay for, so people may actually opt for a higher-priced app in the same category.

Some may prefer to charge users a million dollars, though no one would buy at that price. One needs to be realistic while not underestimating one's services. Experiment with your price and find the point at which people buy your app briskly; several app makers find that price to be about $4.

Online service providers offer individuals help with app monetization. One such service was developed to help streamline the path to app success, and it expects that its videos will guide users step by step through their app career.

Various App Styles – Develop & Monetize


Guidance is provided on how to build an app from scratch, how to create an app from a template, and how to distribute your app. There are also videos explaining marketing techniques, along with videos that give fuller explanations.

Reading up on how to develop and market an Android app can be puzzling and daunting. The provider makes the essential information available for every app-making need, and where further assistance is required, it offers the necessary guidance.

A series of short, one-minute video tutorials shows how to create an app from scratch, gives the user ideas for content, and explains how to make money from an app. The provider has also opened its complete collection of app templates for monetization, giving the user more than 50 different app styles to develop and monetize.

Style of Affiliate Ads


To create an app, log in and choose a template style. Then insert your content, whether the URL of your company, your brand icon, a family video, or any other content you have produced, and within a few clicks it is done.

To monetize the app, insert your ad publisher's code in the monetization tab of your dashboard. Banner ads are the best way to monetize an app passively: whenever someone is using your app and banners are displayed, you earn revenue, though the exact amount varies because it depends on several factors.

These include the style of affiliate ads you have chosen, the number of times people view the banners, and the time they spend in your app.
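As a rough illustration of how these factors combine, banner revenue is commonly estimated with the eCPM model, revenue = (impressions / 1000) × eCPM. The function and figures below are hypothetical, not quotes from any ad network:

```python
def estimate_banner_revenue(impressions: int, ecpm_usd: float) -> float:
    """Estimate banner ad revenue with the common eCPM model:
    revenue = (impressions / 1000) * eCPM.
    Both inputs are illustrative; real networks report their own eCPM."""
    return impressions / 1000 * ecpm_usd

# Example: 50,000 banner views at a hypothetical $0.40 eCPM
print(estimate_banner_revenue(50_000, 0.40))  # 20.0
```

In practice the eCPM itself fluctuates with ad style, audience, and session length, which is why earnings cannot be predicted exactly in advance.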

Thursday 22 June 2017

Cyber Firms Warn of Malware That Could Cause Power Outages

Malware

Malicious Software – Modified with Ease Harming Critical Infrastructure

Two cyber security firms have uncovered malicious software presumed to have caused a December 2016 power outage in Ukraine, and they caution that the malware could easily be modified to harm critical infrastructure operations around the world.

ESET, a Slovakian maker of anti-virus software, together with Dragos Inc., a U.S. critical-infrastructure security firm, released detailed analyses of the malware, called Industroyer or Crash Override, and issued private alerts to governments and infrastructure operators to help them defend against the threat.

The U.S. Department of Homeland Security said it was investigating the malware but had seen no evidence that it had infected U.S. critical infrastructure. The two firms said they did not know who was behind the cyber-attack. Ukraine has blamed Russia, though officials in Moscow have consistently denied responsibility.

The firms cautioned that further attacks could use the same approach, whether by the group that built the malware or by imitators who modify it. ESET malware researcher Robert Lipovsky said in a telephone interview that the malware is easy to repurpose against other targets, which is certainly alarming, and could cause wide-scale damage to vital infrastructure systems.

System Compromised by Crash Override

The Department of Homeland Security echoed that warning, stating that it was working to better understand the threat posed by Crash Override. In an alert posted on its website, the agency said that 'the tactics, techniques and procedures described as part of the Crash Override malware could be modified to target U.S. critical information networks and systems'.

The alert listed around three dozen technical indicators that a system had been compromised by Crash Override and asked firms to contact the agency if they suspected their systems had been affected. Robert M. Lee, founder of Dragos, said the malware is capable of attacking power systems across Europe and could be leveraged against the United States with small modifications.

Risk to Power Distribution Organizations

Lee added by phone that the malware 'is able to cause outages of up to a few days in portions of a nation's grid, but is not strong enough to bring down a country's entire grid'. Lipovsky said that with modifications the malware could attack other kinds of infrastructure, including local transportation providers and gas and water utilities.

Alan Brill, a leader of Kroll's cyber security practice, said in a telephone interview that power firms are concerned there will be more attacks. He added that the industry is dealing with very smart people who came up with something and deployed it, and that it represents a risk to power distribution organizations everywhere.

Industroyer is only the second piece of malware uncovered to date that can disrupt industrial processes without hackers manually intervening. The first, Stuxnet, was discovered in 2010 and is generally believed by security researchers to have been used by the United States and Israel to attack Iran's nuclear program. The Kremlin and Russia's Federal Security Service did not respond to requests for comment.

Deep Learning With Coherent Nanophotonic Circuits

 Nanophotonic Circuits
Light processor recognizes vowels

Nanophotonic module forms the basis for artificial neural networks with extreme computing power and low energy requirements

Supercomputers are approaching enormous computing power of up to 200 petaflops, i.e. 200 million billion operations per second. Nevertheless, they lag far behind the efficiency of the human brain, mainly because of their high energy requirements.

A processor based on nanophotonic modules now provides the basis for extremely fast and economical artificial neural networks. As the American developers report in the journal Nature Photonics, their prototype was able to carry out computing operations at a rate of more than 100 gigahertz using light pulses alone.

"We have created the essential building block for an optical neural network, but not yet a complete system," says Yichen Shen of the Massachusetts Institute of Technology in Cambridge. The nanophotonic processor developed by Shen and his colleagues consists of 56 interferometers, in which light waves overlap and form interference patterns.

These modules can measure the phase of a light wave between wave peak and wave trough, but can also be used to change this phase in a targeted way. In the prototype processor, these interferometers, each corresponding in principle to a neuron in a neural network, were arranged in a cascade.
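A minimal numerical sketch of one such building block, a Mach-Zehnder interferometer modeled as two 50/50 beam splitters with programmable phase shifters (an illustration under textbook assumptions, not the authors' design): its 2x2 transfer matrix is unitary, which is what lets a mesh of such devices implement a neural network's weight matrices without optical loss.

```python
import numpy as np

def mzi(theta: float, phi: float) -> np.ndarray:
    """2x2 transfer matrix of a Mach-Zehnder interferometer:
    phase shifter (phi) -> 50/50 beam splitter -> phase shifter (theta)
    -> 50/50 beam splitter. Unitary by construction."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50/50 beam splitter
    ps = lambda a: np.diag([np.exp(1j * a), 1.0])    # phase shifter on one arm
    return bs @ ps(theta) @ bs @ ps(phi)

U = mzi(0.7, 1.3)
# Unitarity check: U U^dagger = I, so power is redistributed, not lost
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True
```

Cascading such 2x2 unitaries over many waveguide pairs composes larger unitary matrices, which is how the 56-interferometer mesh realizes the linear layers of the network.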

After first simulating their concept with elaborate models, the researchers also tested it in practice with a vowel-recognition task. The principle of the photonic processor: a spoken vowel unknown to the system is encoded as a laser light signal with a specific wavelength and amplitude. Fed into the interferometer cascade, this signal interacts with additional laser pulses, producing a different interference pattern in each interferometer.

At the end of these extremely fast processes, the resulting light signal is detected with a sensitive photodetector and mapped back to a vowel by an analysis program. The purely optical system correctly identified the sound in 138 of 180 test runs, an accuracy of roughly 77 percent. For comparison, the researchers also ran the recognition on a conventional electronic computer, which achieved a slightly higher hit rate.

This system is still a long way from a photonic computer that can perform extremely fast speech recognition or solve even more complex problems. But Shen and colleagues believe it is possible to build artificial neural networks with about 1,000 neurons from their nanophotonic building blocks.

Compared with the electronic circuits of conventional computers, the energy requirement could be reduced by up to two orders of magnitude, making this approach one of the most promising for eventually rivaling the efficiency of living brains.

Wednesday 21 June 2017

GelSight Sensor Giving Robots a Sense of Touch

Innovative Technology – GelSight Sensor

Eight years ago, Ted Adelson's research group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled an innovative sensor technology known as the GelSight sensor, which uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.

Two MIT teams have now mounted GelSight sensors on the grippers of robotic arms, giving the robots greater sensitivity and dexterity. The researchers recently presented their work in two papers at the International Conference on Robotics and Automation.

In one paper, Adelson's group used data from the GelSight sensor to enable a robot to judge the hardness of the surfaces it touches, a crucial ability if household robots are to handle everyday objects. In the other, Russ Tedrake's Robot Locomotion Group at CSAIL used GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is a somewhat low-tech solution to a difficult problem. It consists of a block of transparent rubber, the "gel" of its name, one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object's shape.

GelSight Sensor: Easy for Computer Vision Algorithms

The metallic paint makes the object's surface reflective, so its geometry becomes much easier for computer vision algorithms to interpret. Mounted on the sensor, opposite the paint-coated face of the rubber block, are three colored lights and a single camera.

Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences, explains that the system has colored lights at different angles combined with the reflective material; by viewing the colors, the computer can figure out the 3-D shape of the object.
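Recovering shape from several lights at known angles is the classic photometric-stereo idea. A minimal sketch, with illustrative light directions and a Lambertian-surface assumption rather than GelSight's actual calibration: three intensity measurements, one per colored light, determine the surface normal via a 3x3 linear system.

```python
import numpy as np

# Three known illumination directions (unit vectors), one per color channel.
# These particular directions are illustrative, not GelSight's real geometry.
L = np.array([
    [ 0.50,  0.000, 0.866],
    [-0.25,  0.433, 0.866],
    [-0.25, -0.433, 0.866],
])

def surface_normal(intensities: np.ndarray) -> np.ndarray:
    """Photometric stereo for a Lambertian surface: with I = L @ n
    (up to albedo), recover the normal by solving the linear system."""
    n = np.linalg.solve(L, intensities)
    return n / np.linalg.norm(n)

# A flat patch facing the camera reflects all three lights equally:
flat = surface_normal(np.array([0.866, 0.866, 0.866]))
print(flat)  # approximately [0, 0, 1]
```

Doing this per pixel yields a normal map, which can then be integrated into the detailed height map the article describes.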

In both groups' experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer but with flat gripping surfaces rather than pointed tips.

For an autonomous robot, gauging the softness or hardness of objects is essential not only for deciding where and how hard to grasp them, but also for predicting how they will behave when moved, stacked, or laid on different surfaces. Physical sensing would also help robots distinguish objects that look identical.

GelSight Sensor: Softer Objects – Flatten More

In earlier work, robots attempted to gauge objects' hardness by laying them on a flat surface and gently jabbing them to see how much they give. But this is not how humans gauge hardness. Rather, our judgment relies on the degree to which the contact area between the object and our fingers changes as we press on it.

Softer objects flatten more, increasing the contact area. The MIT researchers used the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson's group, used confectionery molds to create 400 groups of silicone objects, with 16 objects per group.

Within each group, the objects had the same shape but different degrees of hardness, which Yuan measured using a standard industrial scale. She then pressed a GelSight sensor against each object manually and recorded how the contact pattern changed over time, producing a short movie for each object.

To standardize the data format and keep the size of the data manageable, she extracted five frames from each movie, evenly spaced in time, describing the deformation of the object being pressed.
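That sampling step can be sketched as follows (a hypothetical helper, not the authors' code): given a movie with n frames, pick five frame indices evenly spaced from the first frame to the last.

```python
def sample_frame_indices(n_frames: int, n_samples: int = 5) -> list[int]:
    """Return n_samples frame indices evenly spaced in time,
    always including the first and last frame of the movie."""
    if n_frames < n_samples:
        raise ValueError("movie is shorter than the number of samples")
    step = (n_frames - 1) / (n_samples - 1)
    return [round(i * step) for i in range(n_samples)]

print(sample_frame_indices(100))  # [0, 25, 50, 74, 99]
```

Fixing the number of frames per movie gives every training example the same shape regardless of how long the press lasted, which is what makes the data uniform enough to feed a neural network.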

Changes in Contact Pattern/Hardness Movement

Finally, the data were fed to a neural network that automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as input and produces hardness scores with high accuracy.

Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them by hardness. In every case, the GelSight-equipped robot arrived at the same rankings.

The paper from the Robot Locomotion Group grew out of the group's experience with the Defense Advanced Research Projects Agency's Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

An autonomous robot typically uses some type of computer vision system to guide its manipulation of objects in its environment. Such systems can provide reliable information about an object's location, but only until the robot picks the object up.

GelSight Sensor Live-Updating/Accurate Valuation

If the object is small, most of it will be occluded by the robot's gripper, making location estimation difficult. Thus, at precisely the moment when the robot most needs to know the object's exact location, its estimate becomes unreliable.

This was the problem the MIT team faced during the DRC, when their robot had to pick up and turn on a power drill. Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper, commented that in the DRC video you can see the team spending two or three minutes turning on the drill.

It would have been much better to have a live-updating, accurate estimate of where the drill was and where the robot's hands were relative to it. That is why the Robot Locomotion Group turned to GelSight. Izatt and his co-authors, Tedrake (the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering), Adelson, and Geronimo Mirano, another graduate student in Tedrake's group, designed control algorithms that use a computer vision system to guide the robot's gripper toward a tool, then hand location estimation over to a GelSight sensor once the robot has the tool in hand.

Monday 19 June 2017

Solar Paint Offers Endless Energy From Water Vapor

Solar Paint and Its Capability to Produce Fuels from Water Vapor


Researchers continue to surprise with innovative work, and this time the medium is paint. We are used to solar energy generating electricity, but solar power can now be harnessed in paint as well. Researchers have unveiled a new "solar paint" that captures water vapor and splits it to produce hydrogen, and science enthusiasts are eager to follow the research as it develops.

The paint is appealing because it contains a compound that acts like silica gel, a material in common use today, most familiar from the sachets placed in food, medicine, and other packaged products to absorb moisture and keep the contents fresh. In addition to this gel-like compound, the paint contains synthetic molybdenum sulphide, which acts as a semiconductor and catalyzes the splitting of water molecules into hydrogen and oxygen.
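The net chemistry behind the paint is the standard light-driven water-splitting reaction: the moisture-capturing compound supplies the water, and the molybdenum-sulphide semiconductor catalyzes the split under sunlight.

```latex
% Overall photocatalytic water splitting
2\,\mathrm{H_2O} \;\xrightarrow{\;h\nu,\ \text{catalyst}\;}\; 2\,\mathrm{H_2} + \mathrm{O_2}
```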

Dr. Torben Daeneke, a researcher at RMIT University in Melbourne, Australia, explained that the team observed that adding titanium particles to the compounds produced a paint that could absorb sunlight and generate hydrogen from solar energy and moist air, hence the name solar paint.

Titanium oxide is the white pigment already present in wall paints, which means that simply adding the new compound can upgrade an ordinary material, turning brick walls into large-scale energy harvesters and fuel-producing real estate.

Daeneke added that the solar paint has several advantages. It reduces the need for liquid water, since moisture absorbed from the atmosphere can be used to produce fuel instead. A colleague added that hydrogen is one of the cleanest forms of energy and can be used as a fuel in fuel cells or in conventional combustion engines, as an alternative to fossil fuels.

The invention can be used in all sorts of places, regardless of weather conditions, whether the climate is hot or cold, and including areas near the ocean. The principle is simple: seawater evaporates in sunlight, and the resulting vapor can be used to produce fuel. As solar paint proves its value in everyday life, its impact could soon be felt globally.