
Monday 10 July 2017

The Secret to a Perfect Selfie


Trailblazing Self-Portrait – Over £6 Million

Though selfies are said to be a staple of our technology-obsessed generation, they do not always seem creative. Andy Warhol took what may be some of the most famous selfies in the world, showing that the artist was far ahead of his time.

In an article for The Conversation, Tom van Laer, a Senior Lecturer in Marketing at City University of London, and Stefania Farace, a PhD candidate in Marketing at Maastricht University, studied Warhol's famous photos and revealed three simple rules for the perfect social media selfie. In 1963, Andy Warhol walked into a New York photobooth and took what may be the most famous selfies in the world.

One of these trailblazing self-portraits sold for just over £6 million. The selfies effortlessly suited Warhol's vision of the pop art era of the late 1950s and 1960s: typically all-American, mechanical and democratic. Although photobooth images did not go viral the way social media images do now, using a photobooth to make art was fiercely innovative in 1963, and it added to the aura of technical invention that surrounded Warhol, much as it surrounds selfies and social media today.

Selfies – Holy Grail of Social Media


Selfies are said to be the holy grail of social media: self-portraits posted on social networking sites with captions designed to engage a large audience. A recent study revealed three things that can help users take images worth, if not millions of pounds, then at least a thousand words, without risking their lives for them.

The team conducted three online experiments with workers from Amazon Mechanical Turk, which crowdsources expertise in a range of fields, one experiment with students on computers in a university laboratory, and one corpus analysis, a method of examining a body of evidence together with independent coders. To pin down exactly what people engage with when they view images online, participants were shown various images.

Participants rated the images on various photographic elements: point of view, content, artsiness and the like. They also specified how likely they would be to comment on the images if they saw them on social media. These studies made it possible to isolate the things that stop people from caring about an online image and to identify the images that would engage them.
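To make this concrete, here is a minimal sketch in Python of the kind of analysis described above: fitting a simple model that predicts whether a viewer would comment on an image from its rated attributes. The feature names and data are invented stand-ins, not the authors' dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch: predict "would comment" from rated image attributes.
# Features and data are made up for illustration, not the study's data.

rng = np.random.default_rng(42)
n = 500
# columns: first-person POV, shows an action, edited/filtered, artsiness
X = rng.random((n, 4))
# toy ground truth: action and editing raise the odds of a comment
logits = 1.5 * X[:, 1] + 1.0 * X[:, 2] + 0.3 * X[:, 3] - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)
for name, coef in zip(["first_person", "action", "edited", "artsy"],
                      model.coef_[0]):
    print(f"{name:13s} weight = {coef:+.2f}")
```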
Enthusiastic Selfie Artists – Three Things to Know

The studies also helped determine the types of images people are likely to comment on. There are three things every enthusiastic selfie artist should be aware of:

1. People prefer you in front of the camera

Point of view (POV) in photography is a question of who people 'see' taking the image. The key distinction is one of 'person', of which there are two principal types: third person (Warhol photographing Marilyn Monroe, for instance) and first person (Warhol's selfie).

In Warhol's time, most photographs were taken from a third-person point of view. That has changed, and the research finds little interest in third-person images in the social media age. Point of view contributes substantially to how people feel and think as they view an image: depending on whether the viewpoint is from inside or outside the image, viewers pick up different feelings and thoughts.

Warhol is far more present in the pictured story of his selfie than in his famous image of Marilyn Monroe, and just as he is more involved in the story his selfie conveys, so viewers are statistically more likely to engage with the content of selfies.

2. People get bored of just you

Since the portrait was first invented, painters and photographers have had to weigh the importance of person versus action. Most selfies are simply about the person, but the research suggests this is a poor strategy for drawing attention: people are roughly 14 to 15% more likely to comment on selfies of individuals doing something meaningful than on plain selfies. Selfie-takers have agency beyond merely being the subject of their own images; they can do things like eat, drink or wave their free hand. Warhol did something else: he appeared to be adjusting his tie.

3. Realistic images put people off 

Warhol's selfie was designed not to portray or depict the truth but to embrace the artifice and deception built into any kind of representation. If the creative gap between reality and image was already wide in Warhol's photograph, it has become vast since photography arrived on social media, and that is essentially the point. Photographers who complain that selfies are poor representations of reality overlook the fact that a selfie is not meant to be a representation of anything in a detached sense.

The research shows that leaving images unedited can end in failure; an alteration can be silly or serious, unprofessional or professional, and so on. Modern photographers should harness the full toolkit of emoji, lenses, filters and devices such as selfie sticks to turn the original into something artful. Such selfies are superior in terms of engagement: people were observed to be 11.86% more likely to comment on adapted selfies.

As users become more sophisticated in their choice of images, it pays to be more people-centric and to think harder about the value an image provides the audience rather than just yourself. The result is a reworked selfie of someone doing something: an image worth a thousand words. In 1968, Warhol wrote that 'in the future, everyone will be world-famous for 15 minutes'. That future is now.

Friday 7 July 2017

Hot Electrons Move Faster Than Expected

 Hot Electrons

Ultrafast Motion of Electrons


New research could give rise to solid-state devices that utilise excited electrons. Engineers and scientists at Caltech have, for the first time, directly observed the ultrafast motion of electrons immediately after they are excited by a laser. They observed that these electrons diffuse through their surroundings faster and farther than previously anticipated.

This behaviour, called 'super-diffusion', had been hypothesised but never seen before. A team headed by Marco Bernardi of Caltech and the late Ahmed Zewail documented the motion of electrons using a microscope that captured images with a shutter speed of a trillionth of a second at nanometre-scale spatial resolution. Their findings appeared in a study published on May 11 in Nature Communications.

The excited electrons displayed a diffusion rate 1,000 times higher than before excitation. Though the phenomenon lasts only a few hundred trillionths of a second, it opens the possibility of manipulating hot electrons in this fast regime to transport energy and charge in novel devices.

Bernardi, assistant professor of applied physics and materials science in Caltech's Division of Engineering and Applied Science, said their work shows the presence of a fast transient lasting a few hundred picoseconds during which electrons move much faster than their room-temperature speed, indicating that they can cover longer distances in a given period of time when manipulated with lasers.
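A rough way to picture super-diffusion is through the mean squared displacement (MSD): for ordinary diffusion the MSD grows linearly with time, while for super-diffusion it grows faster than linearly. The sketch below uses illustrative numbers, not the Caltech measurements.

```python
import numpy as np

# MSD scaling: MSD(t) ~ D * t**alpha. alpha = 1 is ordinary diffusion;
# alpha > 1 is super-diffusion. All values are illustrative placeholders.

def msd(t, diffusivity, alpha):
    """Mean squared displacement at time t (arbitrary units)."""
    return diffusivity * t**alpha

t = np.logspace(-1, 2, 4)  # time points, say in picoseconds
for ti in t:
    print(f"t = {ti:7.1f}   normal: {msd(ti, 1.0, 1.0):9.1f}   "
          f"super-diffusive: {msd(ti, 1.0, 1.5):9.1f}")
```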

Ultrafast Imaging Technology


He added that this non-equilibrium behaviour could be employed in novel electronic, optoelectronic and renewable energy devices, as well as to uncover new fundamental physics. Bernardi's colleague, Nobel laureate Ahmed Zewail, the Linus Pauling Professor of Chemistry, professor of physics and director of the Physical Biology Centre for Ultrafast Science and Technology at Caltech, passed away on 2 August 2016.

The research was made possible by scanning ultrafast electron microscopy, an imaging technology pioneered by Zewail that can create images with picosecond time resolution and nanometre spatial resolution. Bernardi developed the theory and computer models that explained the experimental results as a signature of super-diffusion.

Bernardi plans to continue the research by tackling fundamental questions about excited electrons, such as how they equilibrate among themselves and with atomic vibrations in a material, together with applied ones, such as how hot electrons could increase the efficiency of energy-conversion devices such as solar cells and LEDs.

Super Diffusion of Excited Carriers in Semiconductors


The paper is entitled 'Super Diffusion of Excited Carriers in Semiconductors'. Co-authors include former Caltech postdoc Ebrahim Najafi, the lead author of the paper, and former graduate student Vsevolod Ivanov. The research was supported by the National Science Foundation, the Air Force Office of Scientific Research, the Gordon and Betty Moore Foundation and the Caltech-Gwangju Institute of Science and Technology (GIST) program.

Wednesday 5 July 2017

L2 vs. L3 cache: What’s the Difference?


The cache is a special buffer memory located between main memory and the processor.

So that the processor does not have to fetch every instruction from slow main memory individually, a whole block of instructions or data is loaded into the cache. The probability that subsequent instructions are already in the cache is relatively high. Only when all cached instructions have been executed, or a jump targets an address outside the cache, must the processor access main memory again. The cache should therefore be as large as possible, so the processor can execute instructions one after the other without waiting.

Typically, processors work with multi-level caches that differ in size and speed. The closer a cache is to the computing core, the faster it works.
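The payoff of caching is memory locality: touching memory in a predictable order is far cheaper than jumping around. The Python sketch below (sizes are arbitrary and timings vary by machine) usually makes the gap visible: the same summation is much slower when elements are visited in random order, because almost every access misses the cache.

```python
import time
import numpy as np

# Illustrative sketch: identical work, different memory access order.
n = 20_000_000
data = np.random.rand(n)
seq_idx = np.arange(n)                 # sequential, cache-friendly order
rand_idx = np.random.permutation(n)    # random, cache-hostile order

start = time.perf_counter()
data[seq_idx].sum()
print(f"sequential: {time.perf_counter() - start:.2f} s")

start = time.perf_counter()
data[rand_idx].sum()
print(f"random:     {time.perf_counter() - start:.2f} s")
```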

Inclusive cache and exclusive cache

With multicore processors, the terms inclusive and exclusive cache came into use. An inclusive cache means that data in the L1 cache is also present in the L2 and L3 caches, which makes it easier to keep data consistent between cores. Compared to an exclusive cache, some storage capacity is given away because the same data is held redundantly at several cache levels.

An exclusive cache is available to one processor core exclusively, that is, for it alone; it does not have to share that cache with another core. A disadvantage is that processor cores can then exchange data with one another only by way of a detour.

L1 cache / first-level cache

As a rule, the L1 cache is not particularly large; for reasons of space it is on the order of 16 to 64 KB. Usually, separate areas are provided for instructions and data. The importance of the L1 cache grows as CPU speed increases.

The L1 cache buffers the most frequently used instructions and data so that as few accesses as possible to slow main memory are required. It avoids delays in data transfer and helps keep the CPU optimally utilised.

L2 cache / second-level cache

The L2 cache buffers data from working memory (RAM).

Processor manufacturers serve different market segments with processors that differ mainly in the size of the L2 cache. The choice between a processor with a higher clock speed and one with a larger L2 cache can be answered, in simplified terms, as follows: with a higher clock, individual programs, especially those with heavy arithmetic demands, run faster; as soon as several programs run at the same time, a larger cache is the advantage. A typical desktop computer is usually better served by a processor with a large cache than by one with a high clock rate.

When the memory controller moved from the chipset into the processor, allowing the processor to access memory much faster, the importance of the L2 cache decreased. While the L2 cache has shrunk, the L3 cache has been substantially upgraded.

L3 cache / third-level cache

As a rule, multicore processors use an integrated L3 cache, which lets the cache-coherence protocol of multicore processors work much faster. This protocol compares the caches of all cores to maintain data consistency. The L3 cache thus acts less as a classic cache and more as a means to simplify and speed up the coherence protocol and the data exchange between cores.

As modern processors now contain several computing cores, manufacturers have added a third cache, the L3 cache, to these multi-core processors. It lets all processor cores work together, which is particularly beneficial in parallel processing: data shared by different CPU cores can be retrieved from the fast L3 cache instead of always coming from slow main memory. In addition, the L3 cache facilitates data management across multiple CPU cores and caches (data coherency).

Monday 3 July 2017

Peering Into Fish Brains to See How They Work

Fish

Transparent Fish – Work in the Dark


The newest research group at the Kavli Institute for Systems Neuroscience focuses on transparent fish and the ability to work in the dark. One of the major challenges for neuroscientists who want to understand how the brain works is figuring out how the brain is wired together and how neurons interact.

NTNU neuroscientists and Nobel laureates May-Britt and Edvard Moser approached this problem by learning to record from individual neurons in the rat brain while the rats move freely in space. They used those recordings to make the findings that earned them the Nobel Prize.

They discovered that certain neurons in the entorhinal cortex fire in a way that creates a grid pattern, which can be used for navigation like an internal GPS. Emre Yaksi, the newest team leader at the Kavli Institute for Systems Neuroscience, takes a different approach to the problem of seeing what goes on inside the brain.

Rather than studying rats or mice, Yaksi works with around 90 different types of genetically modified zebrafish, which he can breed to create fish with the characteristics he needs.

Understanding Universal Circuit Architectures in the Brain


Young larval zebrafish are totally transparent, so Yaksi needs only a standard optical microscope to see what happens inside their heads. Some of Yaksi's fish carry a genetic modification that makes their neurons light up when they send signals to other neurons, which, he explains, is what makes circuits and connections visible to researchers.

He says the group is interested in understanding the universal circuit architectures in the brain that can perform interesting computations. Though fish are quite different from humans, their brains have similar structures, and in the end fish also have to find food, find a mate and avoid danger, and they build brain circuits that generate all these behaviours, just as humans do.

When Yaksi came to the Kavli Institute in early 2015 with a team of researchers, they brought a 900 kg anti-vibration table the size of a billiards table. The big, heavy table is needed in the laboratory to damp vibrations when the highly sensitive optical microscopes are used to peer into the brains of the zebrafish.

Zebra-Fish Genetically Adapted


The larval fish are so small that a slight vibration from cars or trucks passing on the street could make the microscopes bounce away from their miniature brain targets. Zebrafish brains are tiny, containing around 10,000 to 20,000 neurons, a figure dwarfed by the human brain's estimated 80 billion.

Nevertheless, the measurements Yaksi and his colleagues make produce huge quantities of data; by his account, a 30-minute recording can generate data that takes about a week to process. For this reason, Yaksi's research group is a multi-disciplinary team of engineers, physicists and life scientists trained to develop and use computational tools for analysing these huge datasets.
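As a flavour of what such processing involves, here is a minimal, hypothetical sketch of one common step in analysing fluorescence recordings: computing a dF/F trace for each neuron and flagging candidate activity events. The synthetic data and the threshold are illustrative choices, not the lab's actual pipeline.

```python
import numpy as np

# Hypothetical sketch: per-neuron dF/F and simple event detection.
rng = np.random.default_rng(0)
n_neurons, n_frames = 100, 18_000            # e.g. 30 min at 10 Hz
traces = rng.normal(100, 5, (n_neurons, n_frames))
traces[3, 5000:5050] += 80                   # inject a fake activity burst

baseline = np.percentile(traces, 20, axis=1, keepdims=True)
dff = (traces - baseline) / baseline         # dF/F per neuron

z = (dff - dff.mean(axis=1, keepdims=True)) / dff.std(axis=1, keepdims=True)
events = np.argwhere(z > 2.5)                # (neuron, frame) candidates
print(f"{len(events)} candidate events; first at neuron "
      f"{events[0][0]}, frame {events[0][1]}")
```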

Since some of the zebrafish are genetically adapted so that their neurons light up with a fluorescent protein when active, Yaksi and his colleagues frequently work in low light or darkness. This is particularly obvious when he takes visitors into the subdued darkness of the laboratory, where several of the fanciest microscopes are housed in boxes open at the front, designed to limit the amount of external light.

Research – Causes of Seizures and How to Prevent Them


Yaksi explains that other zebrafish are genetically modified so that shining a blue light into their brains activates certain neurons, enabling the researchers to map connections between neurons. Most of the work done by Yaksi's group is basic research, with findings that improve our understanding of brain computation but have no immediate clinical implications.

However, Nathalie Jurisch-Yaksi, Yaksi's wife and colleague, is working with medical doctors to develop genetically modified zebrafish that could help shed light on brain diseases such as epilepsy. According to Yaksi, most of the people in his lab are doing basic research, asking how the brain works, how it is connected and how it is built.

Nathalie works at NTNU with medical doctors, and the group is trying to reach out to clinicians. For instance, Yaksi says, if a brain disorder like epilepsy has a genetic component, the same genetic mutation can be introduced in the transgenic zebrafish facility so that the team can study the causes of seizures in a diseased brain and how seizures might be prevented.

Kavli Institute – Excellent Science Environment


The Kavli Institute was on an institute-wide retreat when Yaksi came to Trondheim to interview for the position, so he had the chance to meet not just group leaders but also technicians, master's students and PhD candidates. What impressed him most, besides the excellent scientific environment, was that people were happy and satisfied with their work; it was a good atmosphere.

Though the science was the most important part of his decision to move to Trondheim, he says he was also excited to join the Kavli Institute because he and his wife wanted to live in a smaller town, close to nature.

Trondheim, he says, is a unique place where one can do really good science and still be close to nature, which was a big thing for him and his wife. Going to London or another big city was never an option; they did not want to deal with big-city life. He recalls that when May-Britt Moser asked him during his interview what he knew about Scandinavia, he replied that he did not know much, but added that he and his wife loved being outdoors.

Saturday 1 July 2017

Plastic 12-Bit RFID Tag and Read-Out System With Screen-Printed Antenna

Quad Industries, Agfa, Imec and TNO recently announced that they have built and verified a plastic 12-bit RFID tag and read-out system with a screen-printed antenna. For the first time, the system combines a screen-printed antenna with a printed touch-based user interface, allowing the reader to operate on curved surfaces. The demonstrator was developed for badge-security applications, but it also shows promise for many others, such as smart packaging, interactive games and wearables.

Compared to silicon (Si)-based identification devices, RFID tags made with plastic electronics have several advantages: they can be attached to curved packaging, easily incorporated into everyday objects, and manufactured at low cost. Typical applications include item identification, smart food packaging, brand protection and badge security. A dedicated RFID reader, usually within two centimetres of the tag, is needed to scan it. To exploit the advantages of plastic electronics fully, the antennas in both the tag and the reader should be flexible. Screen-printed antennas have been applied effectively on RFID tags, but read-out systems generally still use inflexible PCB-based antennas, primarily because printed antennas suffer from high resistance and a poor Q-factor.
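The Q-factor point can be made with back-of-the-envelope arithmetic: for a loop antenna modelled as a series inductance L and resistance R, Q = 2πfL/R, so the higher trace resistance of printed inks directly lowers Q. The frequency (13.56 MHz is a typical HF RFID carrier) and the component values below are assumptions for illustration, not figures from the announcement.

```python
import math

# Q = 2*pi*f*L / R for a series R-L loop antenna. Values are illustrative.
f = 13.56e6          # Hz, common HF RFID carrier (assumed)
L = 2.5e-6           # H, plausible loop inductance (assumed)

for label, R in [("etched copper", 0.5), ("screen-printed ink", 5.0)]:
    Q = 2 * math.pi * f * L / R
    print(f"{label:18s} R = {R:4.1f} ohm  ->  Q = {Q:6.1f}")
```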

Now, for the first time, Imec, Quad Industries and Agfa have put a screen-printed antenna in both items: the RFID tag and the read-out system. This allows both devices to be applied to a diverse range of surfaces. Quad Industries screen-printed the antennas using printing inks from Agfa.

The new technology has been demonstrated in a badge-security application. The access badge integrates the credit-card-sized printed antenna with a plastic 12-bit RFID chip on a flexible plastic substrate. The RFID tag is manufactured with Imec's metal-oxide thin-film transistor (TFT) technology, which uses large-area manufacturing processes that make low-cost, large-scale production possible.

The read-out system includes printed functionality at several levels. To begin with, the RFID read-out antenna is screen-printed on a plastic film, allowing optimal integration on flat, curved or 3-D shaped reading surfaces. In addition, a fully printed touch-screen interface with a numerical keypad is placed between the cover lens and the display, allowing a user without a badge to enter the building by punching in a numerical code. This touch screen is printed with highly transparent screen-printed inks.

Recently developed nanoparticle-based Ag inks achieve lower resistance than conventional Ag-flake inks, which makes it possible to integrate new functionality directly by screen printing. In addition, the antenna is printed at the same level as the touch screen, enabling a direct, more economical combination of printed antenna and customised touch screen in the reader.

The technology allows economical screen-printing manufacture, is easily customisable and eco-friendly, and supports direct chip integration on many substrates, including plastics and paper. It also has promising uses in smart packaging, smart PCBs and smart gaming.

Sensor Solution: Sensor Boutique for Early Adopters

Sensor Boutique
Every chemical substance absorbs a highly individual fraction of infrared light. This absorption can be used to recognise substances by optical means, much like a human fingerprint.

When molecules absorb infrared radiation within a certain wavelength range, they are excited to a higher vibrational level, rotating and vibrating in a distinctive 'fingerprint' pattern. These patterns can be used to identify specific chemical species. Such methods are used in the chemical industry, for example, but also in the health sector and in criminal investigation. A company planning a new project often needs an individually tailored sensor solution.
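As a toy illustration of fingerprint-style identification, the sketch below compares a measured absorption spectrum against a small reference library using cosine similarity. All spectra here are synthetic placeholders, not real measurements.

```python
import numpy as np

wavenumbers = np.linspace(1000, 3000, 200)   # cm^-1 grid

def band(center, width, height):
    """A synthetic Gaussian absorption band."""
    return height * np.exp(-((wavenumbers - center) / width) ** 2)

library = {
    "substance A": band(1450, 40, 1.0) + band(2850, 60, 0.6),
    "substance B": band(1700, 30, 0.9) + band(2350, 50, 0.8),
}

# "Measured" spectrum: substance A plus noise.
measured = library["substance A"] + \
    np.random.default_rng(1).normal(0, 0.05, wavenumbers.size)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

best = max(library, key=lambda name: cosine(measured, library[name]))
print("best match:", best)
```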

An EU-funded pilot line called MIRPHAB (Mid InfraRed Photonics devices fABrication for chemical sensing and spectroscopic applications) supports companies searching for a suitable system and helps develop sensor and measurement technology in the mid-infrared (MIR). The Fraunhofer Institute for Applied Solid State Physics IAF participates in this project.

Pilot line for ideal spectroscopy solutions


A company looking for a sensor solution has very individual needs, for example when it has to detect a particular substance in a production process. These range from the substances to be recorded, to the number of sensors required, to the speed of the production process. In most cases an off-the-shelf solution does not suffice, and several suppliers are needed to develop the optimal individual solution. This is where MIRPHAB comes into the picture.

Leading European research institutes and companies in the MIR field have joined forces to provide customers with custom-made, best-suited offers from a single source. Interested parties can get in touch with a central contact person, who then compiles the best possible solution from the MIRPHAB members' component portfolios on a modular principle.

The development of individual MIR sensor solutions within MIRPHAB is subsidised by EU funding, in order to strengthen European industry in the long run and expand its leading position in chemical analysis and sensor technology. This considerably lowers investment costs and thus the entry barrier for companies into the MIR area.

For companies that were previously deterred by high costs and development effort, this, combined with MIRPHAB's virtual infrastructure, makes a high-quality MIR sensor solution newly attractive. MIRPHAB also gives companies access to the latest technologies, granting them an early-adopter advantage over the competition.

Custom-made source for MIR lasers


The Freiburg-based Fraunhofer Institute for Applied Solid State Physics IAF, together with the Fraunhofer Institute for Photonic Microsystems IPMS in Dresden, provides a central component of the MIRPHAB sensor solution. The Fraunhofer IAF contributes quantum cascade lasers that emit laser light in the MIR range. In this type of laser, the emitted light is spectrally broad, and its wavelength range can be tailored during manufacturing. To select a particular wavelength within the broad spectral range, an optical diffraction grating picks out the wavelength and couples it back into the laser chip. The wavelength can be tuned continuously by turning the grating. The grating is fabricated at the Fraunhofer IPMS in miniaturised form using so-called Micro-Electro-Mechanical-System (MEMS) technology, which makes it possible to oscillate the grating at frequencies up to one kilohertz. This in turn allows the wavelength of the laser source to be tuned across a large spectral range up to a thousand times per second.
The Fraunhofer Institute for Production Technology IPT in Aachen is also involved in MIRPHAB, working to make the manufacture of the lasers and gratings more efficient and to optimise them for pilot-series fabrication. With its expertise, it turns the production of the rapidly tunable MIR laser into industrially viable manufacturing processes.

Process analysis in real time

Currently, many applications in spectroscopy still operate in the visible or near-infrared range and use comparatively weak light sources. The solutions MIRPHAB provides are based on infrared semiconductor lasers, which offer comparatively high light intensity and thus open the way to completely new applications. The MIR laser source can record up to 1,000 spectra per second, which allows, for example, automated real-time monitoring and control of biotechnological processes and chemical reactions. MIRPHAB's contribution is thus considered vital to the factory of the future.

Friday 30 June 2017

Can Artificial Intelligence Help Us Make More Human Decisions?


About 88 million pages of original handwritten documents spanning the past three and a half centuries line the tiled halls of a plain 16th-century trading house in the middle of Seville, Spain. They are stored here incompletely transcribed, and some are almost indecipherable. Some were carried back on armadas from the Americas; only a few have been scanned and digitised.

The documents hold the answers and context for innumerable questions about the Conquistadors, European history, New World contact and colonialism, politics, law, economics and ancestry. Unfortunately, hardly any of these carefully kept pages have been read or interpreted since they were written and brought to Seville centuries ago, and most of them likely never will be.

All hope is not lost: a researcher at the Stevens Institute of Technology is trying to get computers to read these documents while they are still readable. "What if there was a machine, or a piece of software, that could transcribe all of the documents?" asks Stevens computer science professor Fernando Perez-Cruz.

Perez-Cruz, whose expertise lies in machine learning, adds: "What if there was a way to teach another machine to group those 88 million pages and convert them into searchable text categorised into topics? Then we can start understanding the themes in those documents and will know where to look in this storehouse for our answers." Perez-Cruz is working on both halves of this two-fold approach, which, if it succeeds, could be applied to many other modern data-analysis problems, such as autonomous transport and the analysis of medical data.

Pricing on Amazon, medical study, text reading machines


Perez-Cruz, a veteran of Amazon, Bell Labs, Princeton University and University Carlos III of Madrid, has had a varied career tackling scientific challenges. He joined Stevens in 2016, adding to the growing strength of the university's computer science department, which Stevens aims to build into a strong research group that draws more talent and resources. At Stevens he is currently working to develop what he calls 'interpretable machine learning': systematised intelligence that humans can still reason about.

For the historical document analysis problem, Perez-Cruz hopes to develop improved character-recognition engines. Using short excerpts of documents written in varied styles and previously transcribed by experts, he aims to teach software to recognise both the shapes of characters and the frequently correlated associations between letters and words, building an increasingly accurate recognition engine over time. The open question, he says, is how much data, how much transcribed handwriting, is enough to do this well. Work on the concept is still developing.

Perez-Cruz believes that, although it is a technical challenge, it may well be achievable. He is even more fascinated by the next part: organising large quantities of transcribed material into topics that can be surveyed at a glance. Once transcribed, he says, the machine should be able to surface information from these three and a half centuries of data right away, learning on its own from the placement of words and sentences. This is what he calls topic modelling.
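The article does not say which tools Perez-Cruz uses, but topic modelling is commonly done with latent Dirichlet allocation (LDA). Here is a minimal sketch with scikit-learn, using a tiny stand-in corpus in place of transcribed pages:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Stand-in corpus: each string represents one transcribed page.
pages = [
    "ship cargo silver fleet voyage americas",
    "law court dispute merchant contract seville",
    "ship fleet storm voyage cargo lost",
    "contract merchant payment law court",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(pages)        # word-count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

words = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [words[i] for i in topic.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top)}")
```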

A key link: Systematically grouping large data into easily accessible topics


Once enough data has been fed into the algorithm, it begins to spot the most important identifying and organising patterns in the data, often guided by cues from human researchers. Perez-Cruz notes that we might eventually discover that, say, a few hundred topics run through the whole archive, and suddenly an 88-million-document problem has been scaled down to 200 or 300 ideas.

If algorithms can consolidate 88 million pages of text into a few hundred groups, historians and researchers gain enormously in organisation and efficiency when choosing which documents, themes or time periods to search, review and analyse in the formerly unmanageable archive. The same approach could be used to find styles, themes and hidden meaning in other vast, unread collections.

He concludes that one begins with a huge quantity of unorganised data, and to understand what that data contains and how it can be used, some structure must be brought to it. Once the data is understood, one can begin to read it purposefully, see better what questions to ask of the information and draw better conclusions.

Wednesday 28 June 2017

Selfies: Selfie-Presentation in Everyday Life

Study – First Significant Experimental Research on Selfies

 
Georgia Institute of Technology researchers have combed through 2.5 million selfie posts on Instagram to better understand the photographic phenomenon, how people construct their personalities online, and what types of identity statements people make when taking and sharing selfies. When it comes to selfies, appearance is almost everything.

Almost 52% of all selfies fall into the appearance category: images of people showing off their make-up, clothes, lips and so on. Images about looks were twice as popular as the other 14 categories combined. After appearance, social selfies with friends, loved ones and pets were the most common, at 14%.

Ethnicity images followed at 13%, travel at 7%, and health and fitness at 5%. The researchers saw the prevalence of ethnicity selfies as an indication that people are proud of their backgrounds, and also found that many selfies were solo pictures rather than group shots. The data was collected in the summer of 2015, and the Georgia Tech team believes the study is the first significant experimental research on selfies.
 

Selfie – An Identity Performance

 
Overall, an overwhelming 57% of selfies on Instagram were posted by the 18-35 age group, which the researchers say is not too surprising given the platform's demographics.

The under-18 age group posted about 30% of selfies, while the 35+ group shared them less often, at around 13%. Appearance was the most popular category across all age groups. Lead author Julia Deeb-Swihart said that selfies are an identity performance: users carefully craft the way they appear online, and selfies are an extension of that.

Deeb-Swihart said: 'Just like on other social media channels, people are inclined to project an identity promoting their wealth, health and physical attractiveness. With selfies, we decide how to present ourselves to the audience, and the audience decides how it identifies us.'

 

Type of Blending of Online/Offline Selves


This work is rooted in the theory Erving Goffman set out in 'The Presentation of Self in Everyday Life': the clothes we choose to wear and the social roles we play are all intended to control the version of ourselves we want our peers to see.

Deeb-Swihart commented that 'selfies are a blending of our online and offline selves and a way to prove what is true in your life, or at least what you want people to believe is true'. The researchers gathered the data by searching for '#selfie' and then used computer vision to confirm that the pictures actually included faces.

Almost half of them did not, and the researchers found plenty of spam: blank images or text posts whose accounts used the hashtag simply to show up in more searches and gain followers.
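The paper does not name the detector the team used; as an illustration, a face-filtering step of this kind could be sketched with OpenCV's bundled Haar cascade:

```python
import cv2

# Hedged sketch: keep only images in which at least one face is detected.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def has_face(path: str) -> bool:
    """True if the image at `path` contains at least one detectable face."""
    image = cv2.imread(path)
    if image is None:              # unreadable or blank file: treat as spam
        return False
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

# usage: selfies = [p for p in candidate_paths if has_face(p)]
```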

Friday 23 June 2017

How to Create the Perfect App

Streamlining the Path to App Success


Many people price their app at the 99-cent point by default, but that is not necessarily the best price for your app. Ninety-nine cents can be a good offer for a game that kids buy, since it is easy for them to convince their parents to spend a little money on a game. For utility apps, however, there is a sense that you get what you pay for, so people may well opt for a higher-priced app in the same category.

You could charge a user a million dollars, though no one would buy it. One needs to be realistic without underestimating one's services. Experiment with the price and find a point at which people buy your app at a steady rate; many app makers find that price to be about $4.

Online service providers offer app monetisation assistance. Such services are designed to streamline the path to app success and, with the help of videos, aim to guide users step by step through their app careers.

Various App Styles – Develop & Monetize


Guidance covers how to build an app from scratch, how to create an app from a template and how to distribute your app. There are also videos explaining marketing techniques, along with fuller walkthroughs.

Reading up on how to develop and market an Android app can be puzzling and daunting. The provider makes essential information available for every app-making need, and where further assistance is required, it offers the necessary guidance.

Short one-minute video tutorials show how to create an app from scratch, give the user ideas for content and explain how to make money from the app. The complete collection of app templates can also be monetised, giving the user more than 50 different app styles to develop and monetise.

Style of Affiliate Ads


To create an app, log in and choose a template style. Then insert your content, whether the URL of your company, your brand icon, a family video or anything else you have made, and within a few clicks it is done.

To monetise the app, insert your ad publisher's code in the monetisation tab of your dashboard. Banner ads are the best way to monetise an app passively: whenever someone is using your app and banners are displayed, you earn revenue, though the amount is hard to predict since it depends on several factors.

These include the style of affiliate ads you have chosen, the number of times people view the banners and the time they spend in your app.
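A quick, hypothetical eCPM calculation shows how these factors combine; every figure below is invented for illustration, since real rates depend on the ad network, ad style and audience:

```python
# Rough banner-revenue arithmetic under a simple eCPM model (all assumed).
daily_active_users = 2_000
banner_views_per_user = 8          # impressions per user per day
ecpm = 0.50                        # dollars per 1,000 impressions

daily_impressions = daily_active_users * banner_views_per_user
daily_revenue = daily_impressions / 1_000 * ecpm
print(f"{daily_impressions} impressions/day -> ${daily_revenue:.2f}/day, "
      f"~${daily_revenue * 30:.2f}/month")
```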

Thursday 22 June 2017

Cyber Firms Warn of Malware That Could Cause Power Outages

Malware

Malicious Software – Easily Modified to Harm Critical Infrastructure

Two cyber security firms have uncovered malicious software presumed to have caused a December 2016 power outage in Ukraine, warning that the malware could easily be modified to harm critical infrastructure operations around the world.

ESET, a Slovakian maker of anti-virus software, and Dragos Inc., a U.S. critical-infrastructure security firm, released detailed analyses of the malware, called Industroyer or Crash Override, and sent private alerts to governments and infrastructure operators to help them defend against the threat.

The U.S. Department of Homeland Security said it was investigating the malware but had seen no evidence to suggest it had infected U.S. critical infrastructure. The two firms said they did not know who was behind the cyber-attack. Ukraine blamed Russia, though officials in Moscow have repeatedly denied responsibility.

The firms cautioned that further attacks could use the same approach, whether by the group that built the malware or by imitators who modify it. ESET malware researcher Robert Lipovsky said in a telephone interview that the malware's ease of repurposing against other targets was certainly alarming and could cause wide-scale damage to vital infrastructure systems.

System Compromised by Crash Override

The Department of Homeland Security echoed that warning, saying it was working to better understand the threat posed by Crash Override. In an alert posted on its website, the agency said 'the tactics, techniques and procedures described as part of the Crash Override malware could be modified to target U.S. critical information networks and systems'.

The alert listed around three dozen technical indicators that a system had been compromised by Crash Override and asked firms to contact the agency if they suspected their systems were infected. Robert M. Lee, founder of Dragos, said the malware was capable of attacking power systems across Europe and could be leveraged against the United States with small modifications.

Risk to Power Distribution Organizations

Lee added by phone that 'it is able to cause outages of up to a few days in portions of a nation's grid, but is not strong enough to bring down a country's entire grid'. Lipovsky said that, with modifications, the malware could attack other kinds of infrastructure, including local transportation, gas and water providers.

Alan Brill, a leader of Kroll's cyber security practice, said in a telephone interview that power firms are concerned there will be more attacks. He added that the industry is dealing with very smart people who came up with something and deployed it, and that it represents a risk to power distribution organisations everywhere.

Industroyer is only the second piece of malware uncovered to date that can disrupt industrial processes without hackers needing to intervene manually. The first, Stuxnet, was discovered in 2010 and is widely believed by security researchers to have been used by the United States and Israel to attack Iran's nuclear program. The Kremlin and Russia's Federal Security Service did not respond to requests for comment.

Deep Learning With Coherent Nanophotonic Circuits

 Nanophotonic Circuits
Light processor recognizes vowels

Nanophotonic module forms the basis for artificial neural networks with extreme computing power and low energy requirements

Supercomputers are approaching enormous computing power of up to 200 petaflops, i.e. 200 million billion operations per second. Nevertheless, they lag far behind the efficiency of the human brain, mainly because of their high energy requirements.

A processor based on nanophotonic modules now provides the basis for extremely fast and economical artificial neural networks. As the American developers report in the journal Nature Photonics, their prototype was able to carry out computing operations at a rate of more than 100 gigahertz using light pulses alone.

"We have created the essential building block for an optical neural network, but not yet a complete system," says Yichen Shen, from the Massachusetts Institute of Technology, Cambridge. The nanophotonic processor developed by Shen, together with his colleagues, consists of 56 interferometers, in which light waves interact and form interfering patterns after mutual interference.

These modules can measure the phase of a light wave between peak and trough, but can also be used to change this phase in a targeted way. In the prototype processor, the interferometers, each corresponding in principle to a neuron in a neural network, were arranged in a cascade.
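The textbook building block behind such photonic meshes is the Mach-Zehnder interferometer (MZI): two 50:50 beam splitters around a tunable internal phase shifter, plus an input phase shifter, together implementing a 2x2 unitary transformation. The sketch below shows that standard construction, not the exact layout of the MIT chip.

```python
import numpy as np

B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter

def mzi(theta, phi):
    """2x2 unitary of one MZI: splitter, phase theta, splitter, phase phi."""
    inner = np.diag([np.exp(1j * theta), 1.0])
    outer = np.diag([np.exp(1j * phi), 1.0])
    return B @ inner @ B @ outer

U = mzi(theta=0.7, phi=1.3)
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: lossless, unitary

# Cascades of such 2x2 blocks can realise any unitary matrix, which is how
# a mesh of 56 interferometers can encode a neural network's weights.
```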

After simulating their concept in advance with elaborate models, the researchers also tested it in practice with a vowel-recognition algorithm. The principle of the photonic processor: a spoken vowel unknown to the system is encoded as a laser light signal with a specific wavelength and amplitude. Fed into the interferometer cascade, this signal interacts with additional laser pulses, and different interference patterns are produced in each interferometer.

At the end of these extremely fast processes, the resulting light signal is detected with a sensitive photodetector and mapped back to a vowel by an analysis program. The purely optical system correctly identified the sound in 138 of 180 test runs, about 77%. For comparison, the researchers also performed the recognition on a conventional electronic computer, which achieved a slightly higher hit rate.

The system is still a long way from a photonic computer that can perform extremely fast speech recognition or solve even more complex problems. But Shen and colleagues believe it is possible to build artificial neural networks of about 1,000 neurons from their nanophotonic building blocks.

Compared with the electronic circuits of conventional computers, the energy requirement could be reduced by up to two orders of magnitude, making this approach one of the most promising routes toward rivalling the efficiency of living brains.

Wednesday 21 June 2017

Gelsight Sensor Giving Robots a Sense of Touch

Innovative Technology – GelSight Sensor

Eight years ago, Ted Adelson's research group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled an innovative sensor technology known as GelSight, which uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.

Two MIT teams have now mounted GelSight sensors on the grippers of robotic arms, giving the robots greater sensitivity and dexterity. The researchers recently presented their work in two papers at the International Conference on Robotics and Automation.

In one paper, Adelson's group used GelSight data to enable a robot to judge the hardness of the surfaces it touches, a crucial ability if household robots are to handle everyday objects. In the other, Russ Tedrake's Robot Locomotion Group at CSAIL used GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is, in a way, a low-tech solution to a difficult problem: it consists of a block of transparent rubber, the 'gel' of its name, one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object's shape.

GelSight Sensor: Easy for Computer Vision Algorithms

The metallic paint makes the object's surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor, opposite the paint-coated face of the rubber block, are three coloured lights and a single camera.

Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences, explains that the system shines coloured lights at different angles onto the reflective material, and by observing the colours, the computer can figure out the 3-D shape of the surface.
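The underlying idea resembles classic photometric stereo: with three known light directions and an approximately matte surface, the three intensities observed at a pixel determine its surface normal. The sketch below shows that textbook computation; GelSight's real calibration is more elaborate than this.

```python
import numpy as np

# Photometric stereo for one pixel: I = L @ (albedo * n), so solving the
# 3x3 system recovers albedo and normal. All numbers are illustrative.
L = np.array([                     # light directions (rows), assumed known
    [0.0,  0.8, 0.6],
    [0.7, -0.4, 0.6],
    [-0.7, -0.4, 0.6],
])
I = np.array([0.55, 0.40, 0.35])   # intensities under each light (made up)

g = np.linalg.solve(L, I)          # g = albedo * normal
albedo = np.linalg.norm(g)
normal = g / albedo
print("albedo:", round(albedo, 3), "normal:", np.round(normal, 3))
```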

In both groups' experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pincer but with flat gripping surfaces rather than pointed tips.

For an autonomous robot, gauging the softness or hardness of objects is essential in deciding not only where and how hard to grasp them, but also how they will behave when moved, stacked or laid on different surfaces. Tactile sensing can also help robots distinguish objects that look identical.

GelSight Sensor: Softer Objects – Flatten More

In earlier work, robots evaluated objects' hardness by laying them on a flat surface and gently jabbing them to see how much they gave. But this is not how humans gauge hardness; rather, our judgment relies on the degree to which the contact area between the object and our fingers changes as we press it.

Softer objects flatten more, increasing the contact area, and the MIT researchers used the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson's group, used confectionery moulds to create 400 groups of silicone objects, with 16 objects per group.

Within each group, the objects had the same shape but different degrees of hardness, which Yuan measured on a standard industrial scale. She then pressed a GelSight sensor against each object by hand and recorded how the contact pattern changed over time, producing a short movie for each object.

To standardise the data format and keep the data size manageable, she extracted five frames from each movie, evenly spaced in time, capturing the deformation of the pressed object.

Changes in Contact Pattern/Hardness Measurements

The data was then fed to a neural network that automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as input and produces hardness scores with high accuracy.
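A minimal sketch of the data preparation described here, with a synthetic movie standing in for the real recordings and the trained network omitted:

```python
import numpy as np

def sample_five_frames(movie: np.ndarray) -> np.ndarray:
    """(n_frames, h, w) -> (5, h, w): five frames, evenly spaced in time."""
    idx = np.linspace(0, len(movie) - 1, 5).astype(int)
    return movie[idx]

movie = np.random.rand(90, 64, 64)   # ~3 s of synthetic contact video
frames = sample_five_frames(movie)
x = frames.reshape(-1)                # flattened input for a regressor
print(frames.shape, x.shape)          # (5, 64, 64) (20480,)
```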

Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them by hardness. In every instance, the GelSight-equipped robot arrived at the same rankings.

The Robot Locomotion Group's paper grew out of the group's experience with the Defense Advanced Research Projects Agency's Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

An autonomous robot typically uses some type of computer vision system to guide its manipulation of objects in its environment. Such systems provide reliable information about an object's location, until the robot picks the object up.

GelSight Sensor: Live-Updating, Accurate Estimation

If the object is small, most of it will be occluded by the robot's gripper, making location estimation very difficult. So at precisely the point where the robot needs to know the object's exact location, its estimate becomes unreliable.

This was the problem the MIT team faced during the DRC, when their robot had to pick up and switch on a power drill. Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper, commented that in the DRC video you can see the robot spending two or three minutes turning on the drill.

It would have been much better, he said, to have a live-updating, accurate estimate of where the drill was and where the robot's hands were relative to it. That is why the Robot Locomotion Group turned to GelSight. Izatt and his co-authors, Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering, Adelson, and Geronimo Mirano, another graduate student in Tedrake's group, designed control algorithms that use a computer vision system to guide the robot's gripper toward a tool and then hand location estimation over to a GelSight sensor once the robot has the tool in hand.

Monday 19 June 2017

Solar Paint Offers Endless Energy From Water Vapor

Solar Paint and its capability to Produce Fuels out of Water Vapor


Researchers constantly stir things up with innovative work, and this time they have surprised the world with paint. We have heard of using solar energy to generate electricity, but now solar power is coming to paint as well. Researchers have unveiled a new development, solar paint, that can absorb water vapour and split it to produce hydrogen, leaving science enthusiasts eager to follow the research as it develops.

The paint is so appealing because it contains a compound that acts like silica gel, which is widely used these days, most commonly in sachets, to absorb moisture and keep food, medicine and other products fresh and free of bacteria. Beyond this gel-like behaviour, the paint contains synthetic molybdenum sulphide, which also acts as a semiconductor and catalyses the splitting of water molecules into hydrogen and oxygen.

Dr Torben Daeneke, a researcher at RMIT University in Melbourne, Australia, explains that the team found that adding titanium particles to the compound produced a paint that absorbs sunlight and produces hydrogen from solar energy and moist air, hence the name solar paint.

Titanium oxide is already the white pigment used in wall paints, meaning that simply adding the new material could upgrade ordinary paint, converting brick walls into energy-harvesting, fuel-producing real estate.

The researcher concludes that solar paint has several advantages. Water use can be reduced, since moisture absorbed from the atmosphere can be put to work producing fuel. A colleague adds that hydrogen is one of the cleanest forms of energy, usable as fuel in fuel cells and in conventional combustion engines as an alternative to fossil fuels.

The invention could be used in all sorts of places regardless of weather conditions, from hot or cold climates to areas near the ocean. The principle is simple: sunlight evaporates sea water, and the resulting vapour can be used to produce fuel. As solar paint proves its usefulness in everyday life, its impact may soon be felt globally.

Friday 16 June 2017

Microsoft's DirectReality - New VR Interface Planned for Windows

Since Microsoft presented projects such as HoloLens, many people have regarded the company as a pioneer in innovation. Now Microsoft has once again secured the rights to something new called DirectReality.

Although Microsoft recently confirmed that no VR would be shown during its own briefing at this year's E3, the company had secured the rights to something called DirectReality only a few days earlier.

That sounds exciting, and many people will now say it has nothing to do with gaming, but they would be wrong: DirectReality is said to have a direct bearing on gaming. Microsoft registered the new brand name just a few days ago, and it points to a new interface for Windows and perhaps also the Xbox One Scorpio. What exactly lies behind DirectReality, however, will probably not be revealed at this year's E3.

Microsoft applied for trademark protection for the term "DirectReality" in the United States on June 2, 2017, covering computer software, computer games, software for holographic apps and their online versions. The US Patent and Trademark Office (USPTO) filing does not provide any more detailed information, and companies such as Microsoft often secure brand names that are ultimately never used.


DirectX, Direct3D - DirectReality?


In the context of the forthcoming E3 games fair and the presentation of the Xbox One Scorpio, however, the DirectReality brand invites some speculation. Microsoft has long used the term "Direct" for important Windows interfaces, for example DirectX, Direct3D and DirectShow. Even the Xbox owes its name to DirectX, since it was originally planned as a "DirectX box". DirectReality could therefore be a new graphics interface related to the mixed reality features Microsoft is planning for Windows 10 and HoloLens.

Perhaps a standard for VR / AR under Windows


It would also be conceivable that Microsoft, as with DirectX for 3D graphics under Windows, is planning a new standard for virtual and augmented reality in Windows 10, one that would allow software to be developed for hardware from many different manufacturers.

Such a standard would probably also be very useful for developers who want to offer their VR software for consoles like the Xbox One and Scorpio as well as for Windows 10. Some information may come at Microsoft's E3 press conference; the company has a lot to tell, and Phil Spencer has already warned that 90 minutes will not be enough this time.

The feature could relate to the HoloLens, but of course also to the Scorpio, which supports VR headsets and possibly the HoloLens as well. Alongside virtual reality, augmented reality and mixed reality, DirectReality would be a new category, and it could even refer to a controller unit. With a little luck we will learn more at E3; otherwise it could be a long-term project that we only hear about again in the distant future.

Thursday 15 June 2017

Novel Innovation Could Allow Bullets to Disintegrate After Designated Distance


Purdue University – Bullet to Be Non-Lethal

Presently, bullets are made from various materials chosen for the intended application, and they retain a significant portion of their energy after travelling hundreds or even thousands of meters. If the target is missed, this can have unwanted consequences, such as the unintended death or injury of people nearby, as well as collateral damage.

Stray bullet shootings are an often overlooked consequence of gunfire, one that can severely injure or even kill bystanders, or produce collateral-damage casualties in the military. Hence there is a need, across the law-enforcement, military and civilian sectors, for a safer bullet that would considerably reduce collateral damage and injury.

Technology that could prevent these occurrences has been created at Purdue University. A research group headed by Ernesto Marinero, a professor of materials engineering and of electrical and computer engineering, has designed novel materials and a fabrication method that make a bullet non-lethal by having it disintegrate after a selected distance.

The technology grew out of that need for a safer bullet: conventional bullets retain a substantial percentage of their energy after travelling hundreds or even thousands of meters.

Combination of Stopping Power of Standard Bullets/Restriction of Range

The newly developed Purdue innovation makes the bullet break apart after a predetermined distance, owing to the heat generated at firing combined with air drag and an internal heating component. The heat conducts through the entire bullet, melting the low-temperature binder material, and the drag forces then cause the projectile to break up.
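As a rough illustration of the principle (not Purdue's actual design), one can model a projectile whose binder softens as a fraction of the dissipated drag energy accumulates as heat. Every number below is invented purely for the sketch.

    # Toy model of a self-disintegrating bullet: aerodynamic drag slows the
    # projectile while a fraction of the dissipated power heats a
    # low-melting binder. All parameters are hypothetical.
    v = 900.0            # muzzle velocity, m/s
    m = 0.008            # projectile mass, kg
    k = 4.5e-6           # lumped drag constant, kg/m (F_drag = k * v**2)
    c = 2.0              # heat capacity of the binder, J/K
    eta = 0.15           # fraction of drag power absorbed by the binder
    T, T_melt = 300.0, 420.0   # current and melting temperature, K
    dt, x = 0.001, 0.0

    while T < T_melt:
        F = k * v * v                 # drag force, N
        T += eta * F * v * dt / c     # part of the drag work heats the binder
        v -= (F / m) * dt             # drag decelerates the projectile
        x += v * dt

    print(f"binder reaches melt point after ~{x:.0f} m, at {v:.0f} m/s")

With these made-up constants the binder crosses its melting point after several hundred meters, inside the range over which a conventional bullet would still be lethal, which is the behaviour the Purdue design aims for.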

The technology combines the stopping power of standard bullets and the shrapnel-eliminating benefits of frangible bullets with a restriction of range that reduces injury or death among bystanders. Purdue Research Foundation's Office of Technology Commercialization has patented the technology, and it is available for licensing.

Garen Wintemute, a professor of emergency medicine and director of the Violence Prevention Research Program at the UC Davis School of Medicine and Medical Center, commented that stray bullet shootings give rise to fear and insecurity: people remain indoors and stop their children from playing outside, changing their daily routines to avoid being struck by a bullet intended for someone else.

No Research – Exploring the Epidemiology of Shootings

However, no research had been done at the national level exploring the epidemiology of these shootings, and such information is essential for identifying preventive measures. He further added that stray bullet shootings are mostly a side effect of intentional violence, what is euphemistically known as collateral damage.

Those who get shot have little or no warning, and the opportunities for preventive action once shooting starts are limited. Unless we intend to bulletproof entire communities and their residents, we will only be able to prevent these shootings to the extent that we can prevent firearm violence.

Millimeter-Wave Technology: Highly Sensitive Tracking Nose in Space


Low Power Millimeter-Wave Amplifier

Hiroshima University and Mie Fujitsu Semiconductor Limited (MIFS) recently announced the development of a low-power millimetre-wave amplifier that runs from a 0.5 V power supply and covers the frequency range from 80 GHz to 106 GHz. It was created using MIFS's Deeply Depleted Channel (DDC) technology and is the first W-band (75-110 GHz) amplifier to operate from such a low supply voltage.

Details of the technology were presented at the IEEE Radio Frequency Integrated Circuits Symposium (RFIC 2017), held June 4-6 in Honolulu, Hawaii. The W-band covers frequencies used by automotive radars. Radars with millimetre-wave beams will be essential to the scanning capability of advanced driver-assistance and self-driving systems, since they can 'see' by day and by night as well as in adverse weather.
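For a sense of scale, here is a quick calculation (ordinary physics, not a figure from the announcement) of the free-space wavelengths involved:

    # Free-space wavelength across the W-band: lambda = c / f.
    c = 299_792_458.0                 # speed of light, m/s
    for f_ghz in (75, 80, 94, 106, 110):
        wavelength_mm = 1000 * c / (f_ghz * 1e9)
        print(f"{f_ghz} GHz -> {wavelength_mm:.2f} mm")

Wavelengths of roughly 3 to 4 mm are what make antenna arrays with hundreds of elements small enough to fit behind a car bumper, or conceivably inside a phone.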

A phased array of this type can comprise up to hundreds of transmitters and receivers. As battery-powered cars become more common, it is vital that these circuits consume little power, and lowering the supply voltage is the most effective means of achieving that. Transistor performance falls with voltage, however, and until now no W-band amplifier had operated from a supply as low as 0.5 V.

High-Performance Silicon MOS Transistors

The team of researchers successfully demonstrated a W-band amplifier at 0.5 V by bringing together MIFS's DDC technology and design techniques created at Hiroshima University. DDC technology provides high-performance silicon MOS transistors even at low voltages and is currently available as a 55-nm CMOS process from MIFS.

Moreover, the design techniques enhance transistor and circuit performance at millimetre-wave frequencies. Professor Minoru Fujishima of Hiroshima University's Graduate School of Advanced Sciences of Matter commented: 'Now that seriously low-power W-band circuits seem genuinely possible, we can think about what can be done with them. Applications aren't restricted to automotive radars and high-speed communication between base stations.'

'What if you had radar on your smartphone? Smartphones already sense several things: acceleration, audible sound, visible light and the Earth's magnetic field. But the only active probing device is that tiny LED (light-emitting diode) flash, which can illuminate at best a few meters.'

W-Band Amplifier - Reliability

He further stated that with millimetre-wave radar added to a smartphone, the phone would not have to act as a full standalone radar sensing its own reflected waves; it could instead respond to waves from a friend's radar, sending signals back.

A whole new set of applications, including games, could then be developed. Professor Fujishima added that another significant point about the 0.5 V W-band amplifier is reliability: millimetre-wave circuits are known not to be long-lasting, tending to degrade as you measure them, within days or hours rather than years, owing to so-called hot-carrier effects.

The 0.5 V supply voltage reduces hot-carrier generation considerably. Compared with conventional CMOS, DDC transistors deliver impressive performance in low-power processes. The research group intends to keep exploring the possibilities of low-voltage millimetre-wave CMOS circuits.

Tuesday 13 June 2017

High Pressure Key to Lighter, Stronger Metal Alloys, Scientists Found 

With scientists and engineers driving one revolution after another, the world has witnessed some great changes, and we live far more peacefully and comfortably than earlier generations did. Progress has come in every field, be it technology, medicine or metallurgy, and new discoveries arrive almost every day. A few weeks ago, researchers at Stanford University released a report stating that applying sufficiently high pressure can produce new, improved versions of metal alloys that are lighter and stronger than conventional metal alloys.

THE BLENDING OF METAL ALLOYS

Metallurgy has involved blending metals to create improved, stable alloys with exclusive properties for hundreds of years. In the conventional approach, one or two principal metals make up the greater part of the mixture, while the other elements appear in smaller proportions.

Together they form a compact structure. Of the three simple crystal structures such alloys can take, scientists had so far succeeded in creating only two; the third had stubbornly eluded them.

THE STUDY

The recently published study reports that the team succeeded in producing a newly designed alloy of ordinary metals whose structure is hexagonal and closely packed. Similar structures had been created in the past, but those alloys contained large amounts of harmful elements.

Those elements were mostly alkali metals or rare-earth metals. This time, the researchers achieved the compact structure using the very common metals normally employed in engineering.

THE PRESSURE APPLIED ON METAL ALLOYS

They found that it is ultimately pressure that makes all this possible. A new kind of device was used to squeeze minute samples of the high-entropy alloys to pressures as high as 55 gigapascals.
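To put 55 gigapascals in perspective, here is the one-line conversion (plain arithmetic, not a figure from the paper):

    # 55 gigapascals expressed in standard atmospheres (1 atm = 101,325 Pa).
    pressure_pa = 55e9
    print(f"{pressure_pa / 101_325:.2e} atm")   # about 5.4e5 atmospheres

That is more than half a million times atmospheric pressure.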

The pressure is so enormous that it otherwise occurs only in events such as a large meteorite crashing into the surface of the Earth. The high pressure appears to break up the magnetic interactions that favour the usual structure, and the alloys retain the compact hexagonal structure even after the pressure is removed.

Largest Hybrid Flywheel Battery Project to Help Grid Respond to Energy Demand


The Biggest Hybrid Flywheel Innovation Yet to Meet the Demand for Energy

Engineers from the University of Sheffield are embarking on their biggest venture to date, one intended to help meet ever-growing energy demand. Europe's largest hybrid flywheel-battery system, the first of its kind for the UK, will be connected to the UK and Irish grids, enabling them to respond to energy demand, in partnership with Adaptive Balancing Power and Freqcon.

The €4 million project, with €2.9 million in funding, is developing this hybrid flywheel battery to provide hybridised energy storage, aimed at balancing excess power on Europe's existing grid infrastructure.

Flywheels work by spinning a rotor up to very high speed using electrical energy, which stores that energy in the system as rotational energy; it can be converted back to electricity whenever required. Unlike batteries, flywheels do not degrade over time with repeated cycling. Combining the two therefore lets the storage system perform better and extends its working lifetime, a formula that could give a new dimension to energy storage.
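As a back-of-the-envelope illustration (with invented figures, not the project's specification), the energy stored in a spinning disc is E = ½Iω²:

    # Energy stored in a solid-disc flywheel: E = 0.5 * I * omega**2,
    # with moment of inertia I = 0.5 * m * r**2. All figures hypothetical.
    import math

    m = 1000.0                       # rotor mass, kg
    r = 0.4                          # rotor radius, m
    rpm = 6000.0                     # spin speed, rev/min

    I = 0.5 * m * r**2               # moment of inertia, kg*m^2
    omega = rpm * 2 * math.pi / 60   # angular speed, rad/s
    energy_kwh = 0.5 * I * omega**2 / 3.6e6
    print(f"{energy_kwh:.1f} kWh stored")   # ~4.4 kWh for these numbers

Scaling the rotor mass, radius or speed upward pushes this toward the 10-20 kWh range the project targets.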

This latest technology, initiated by Schwungrad Energie Limited, involves Adaptive Balancing Power GmbH, which will supply the dynamic flywheel technology, while Freqcon GmbH will build the adaptable power converters that connect the flywheel to the grids.

Dr. Dan Gladwin, of the Department of Electrical and Electronic Engineering at the University of Sheffield, has noted that the UK national grid is becoming more volatile because of the increasing share of renewable energy sources, and the system must be able to handle deviations from the nominal 50 Hz frequency as demand swings.

Battery and flywheel technologies can respond immediately, absorbing or delivering energy, which makes a system built on them both adaptive and responsive when the frequency fluctuates.

The hybrid flywheel-battery facility is reported to be going up first in Ireland, overseen by Schwungrad Energie at its existing hybrid flywheel facility, which has already run high-profile demonstration projects in conjunction with EirGrid. The hybrid flywheel battery will deliver up to 500 kW of power and store 10 kWh of energy.

Further reports indicate that work on the hybrid flywheel battery is still progressing: it will be upgraded to offer 1 MW of power and 20 kWh of energy storage, and will provide a hybridised form of energy storage, with batteries supplying frequency-response services.

The way progress on flywheels is unfolding, it should not be long before this emerges as a valuable innovation capable of meeting the world's ever-increasing demand for energy, and doing so flexibly.

Saturday 10 June 2017

Scientists Demonstrate Microwave Spectrometer Tailored for the Majorana Quest


Majorana Particles: The Researchers' Quest Passes to the Next Level

A scientific team led by Attila Geresdi has recently taken research into Majorana particles to the next level, demonstrating a new technique that meets key requirements for the future control of Majorana particles. Majorana states, unusual quantum particles, survive only under very special conditions.

Though predicted in theory back in 1937, they were first detected in a chip by Leo Kouwenhoven's research team in 2012. 'The key constituent is a nanowire covered by a superconducting layer', states Attila Geresdi, the study's lead researcher.

These elusive particles are the building blocks of topological quantum computation, a promising path in quantum technology pursued by various research groups in cooperation with Microsoft.

Topological quantum bits are intrinsically protected from errors, meaning that when you execute a quantum operation, it always works. There are still big obstacles on the road towards quantum computing based on Majorana particles, but this recent work opens the door to a new programme of quantum experiments. Through the new method, both fundamental physics and the technological challenges of operating Majorana states can be explored.

The Quest for Majorana Particles

Majorana particles, named after Ettore Majorana, are used in elementary particle physics to mathematically describe fermions (i.e., particles with half-integer spin) that are identical to their own antiparticles: so-called Majorana fermions.
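In compact, textbook form (a standard statement, not specific to this study), the Majorana condition says the fermion field equals its own charge conjugate:

\[ \psi \;=\; \psi^{c} \,\equiv\, C\,\bar{\psi}^{T} \]

where C is the charge-conjugation matrix.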

This property implies that the particles described must not carry any electric charge. Researchers across the globe are on a great quest for these Majorana particles, which are the building blocks of the topological quantum computer. Realising Majorana particles is generally quite difficult, as they exist only under very particular circumstances.

Majorana Research through Nanowires

Semiconductor nanowires represent an interesting research area at the interface between basic research and technology. Analysis of the growth process, properties and possible applications is part of the research on semiconductor nanowires.

Semiconductor nanowires have been successfully produced by various growth methods and characterised in the electron microscope. Further studies, e.g. of their electrical and optical properties, are currently being carried out. The superconducting proximity effect in semiconductor nanowires has recently enabled work on new superconducting hybrid devices.

Researchers across the globe, in collaboration with Microsoft, are conducting research to find Majoranas using nanowires covered with a superconducting layer. Until now, however, detecting Majorana states has relied on measuring the flow of electrons through the nanowires.

The Delft research team, joining hands with scientists from Yale University, coupled a nanowire to a microwave spectrometer that does not disturb the Majoranas in any way. Researchers across the globe still face major hurdles on the road to topological quantum computing.

But this fine piece of work has paved the way and moves quantum computing research to the next level. The results can be found in Nature Physics.



Friday 9 June 2017

Intel unveils 18 core / 36 thread Core i9-7980XE

Intel has been reeling under huge pressure from its competitors over the past couple of months owing to the launch of a highly affordable range of rival processors. Now Intel is making a comeback in glorious fashion with the unveiling of the new Core X series, topped by a processor boasting 18 cores and 36 threads. Before this, AMD had been commanding attention with the 16-core, 32-thread chip, rumoured as the Ryzen 9 1998X, in its ThreadRipper line-up.

This new offering from Intel will sit alongside the existing line-up and is expected to use the LGA 2066 socket found on X299 motherboards. Intel has made this massive jump in core count after a long time, and at a better price point, which has left a number of tech experts surprised.

Everything about the Intel Core X series

Before the official unveiling of the Core X series, most observers expected nothing beyond 12 cores; almost no one anticipated 16- or 18-core offerings at all. Intel kept the core counts under wraps to take the market by surprise and to answer the steady rise of AMD's varied SKU line-up with a remarkable product of its own.

It is worth noting that the complete Skylake-X and Kaby Lake-X line-up is designed for the LGA 2066 socket, so users will get a wide spread of core counts to choose from when powering their devices in future. The best thing about the new Intel platform is that it is highly scalable. Intel will ship the Core X series processors with the latest version of Intel Turbo Boost Max Technology 3.0, which helps improve single-core as well as dual-core performance.

Secondly, one should also note that the unveiling of the new X299 platform brings the demise of the X99 platform. Intel claims the new Core X series will deliver 10 percent faster multi-threaded performance and 15 percent faster single-threaded performance than the previous generation of processors.
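For context on what extra cores do and do not buy you, Amdahl's law (generic textbook math, not an Intel benchmark) bounds the speedup of a partly parallel workload:

    # Amdahl's law: the speedup on n cores when a fraction p of a workload
    # can run in parallel. A generic illustration, not an Intel figure.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for cores in (4, 8, 16, 18):
        print(f"{cores} cores: {speedup(0.95, cores):.1f}x")
    # With p = 0.95, 18 cores yield only ~9.7x, far below the ideal 18x,
    # which is why per-core (single-thread) gains still matter.

This is why Intel quotes separate multi-threaded and single-threaded improvement figures: the serial portion of a workload caps what more cores alone can achieve.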

More power comes with the Extreme Edition

Intel will also be bringing Extreme Edition CPUs that take performance to a whole new level for computing workloads. The new Core X series brings the AVX-512 ratio offset and memory-controller trim voltage control into play. This functionality will help users achieve better stability at higher clocks when working on demanding projects.

It is specifically designed to handle CPU-heavy programs with ease, alongside existing programs, without limiting the overall performance of the device. This will enable fast video encoding, audio production and image rendering, and offer users real-time previews at the best resolution.