Showing posts with label technology news.

Tuesday, 29 January 2019

Tips to Convert Videos to the Best Format

Do you want to convert your video to the ‘best’ format for a particular device or platform? On the surface that may sound easy, but how do you identify the best format – and what should you look for?
While there are several ways that you could find the best format and convert your videos to it, these tips should help make it a lot easier:

  • Make sure the format has hardware support

    For a video to be played, the device or platform it is viewed on must be able to decode the format. That decoding can take place either in software or in hardware.

    The problem with software decoding is that it requires a lot of processing power, especially for high-quality videos. That is why, as a rule, the ‘best’ format should always have hardware support.

  • Factor in the compression

    Part of the video format (i.e. the video codec) will dictate the type of compression that is used to encode and store the video. Newer formats normally have more efficient types of compression, meaning that they can compress the same quality of video to a smaller file size than older formats.

    As you can imagine, this is an important factor, because the ‘best’ format should compress the video to the smallest possible file size while maintaining its quality. However, it is complicated by the fact that it takes time before devices have hardware support for newer formats built in.

  • Try working backwards based on how the video will be used

    Instead of trying to identify the best format based on its hardware support and compression, you could work backwards from how the video will be used. For distribution, formats such as MP4 with H.264 are the best option, and the same goes for online videos.

    In general, MP4 with H.264 is a ‘safe’ format for most devices, but you could check whether HEVC is supported, seeing as it has better compression rates.
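The advice above can be sketched in code. This is a minimal illustration assuming the common ffmpeg command-line tool is available; the helper function, file names and quality settings are examples of mine, not a recommendation from any particular converter. The snippet only builds the command list rather than running it:

```python
# Sketch: build an ffmpeg command line for converting a video to MP4.
# The codec choice mirrors the advice above: H.264 (libx264) is the safe
# default, HEVC (libx265) compresses better where hardware supports it.

def build_convert_cmd(src, dst, use_hevc=False):
    """Return an ffmpeg argument list that converts `src` to `dst`."""
    codec = "libx265" if use_hevc else "libx264"
    return [
        "ffmpeg", "-i", src,
        "-c:v", codec,
        "-crf", "23",   # constant-quality mode: smaller files at similar quality
        "-c:a", "aac",  # widely supported audio codec
        dst,
    ]

cmd = build_convert_cmd("clip.mov", "clip.mp4")
print(" ".join(cmd))
```

To actually run the conversion you would pass the list to `subprocess.run(cmd)`, assuming ffmpeg is installed on the system.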

See how these tips can help you to convert your video to the best format? Once you’ve identified what it is, all you need to do is use a video converter to switch your video to that format. For example you could use Movavi Video Converter to convert QuickTime to MP4, AVI to FLV, MPG to MKV, and so on.

Regardless of how you approach it, following these tips should help you end up with a format that has the best possible compression while still being supported by the hardware of the device it will be played on. That is as good as it gets, and should allow you to enjoy high quality videos without taxing your processor (or storage) too much.

Saturday, 1 September 2018

Memory Processing Unit Could Bring Memristors to the Masses


Memristors: Computers of the Future

Today’s world is all about doing things fast. We want our phones to work faster, and our computers, and even our toasters. So scientists are continually on the lookout for the next big thing that will make computers and the like run faster. One such discovery is the memristor. If you’ve not heard of it, it’s no wonder, as the term was coined only recently, along with the discovery itself.

Memristors not only make your computer or phone work faster but also cut energy consumption like you wouldn’t believe. A memristor is a way of arranging advanced computer parts on a chip so that it performs faster and with less energy consumption.

Where will Memristors be used? 

Memristors, according to scientists, will improve performance especially in low-power environments such as your smartphone. They can also make an already efficient system even more efficient, as in the case of supercomputers.

How do Memristors work? 

Semiconductor processors make things fast by ensuring faster processing, but receiving and sending data is the part that takes time, as these processors have to work with other parts to do it.

Memristors are a solution to this problem. Named as a combination of ‘memory’ and ‘resistor’, as you may have already figured, memristors can process data and save it in memory all in the same place, which significantly speeds up calculations.

How are Memristors different from Traditional Means? 

Traditional processors use bits, 1s and 0s, but memristors work on a continuum.
The team behind memristors breaks a large mathematical problem down into smaller blocks, which improves the flexibility and efficiency of the system. These smaller blocks, called “memory processing units”, can be useful in implementing machine learning and artificial intelligence algorithms.

They can also be used in areas of simulation, such as predicting the weather. Mathematical problems in the form of rows and columns are mapped directly onto the grid of memristors. Operations that multiply and sum the rows and columns of the table are then done simultaneously.

A traditional processor, by contrast, would perform mathematical operations such as the sums and multiplications of rows and columns individually, taking a lot of time and energy. With memristors, all of this happens in a single step.
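A rough way to picture that single step: the grid of memristor conductances holds the matrix, and applying the input voltages across the rows produces every multiply-and-sum result at once. The sketch below only simulates that arithmetic in plain Python (the 2×3 grid and the values are made up for illustration); real hardware does it in one analogue operation:

```python
# Sketch: the multiply-and-sum that a memristor grid performs in one step.
# G is the grid of conductances ("the problem imposed on the grid"),
# V is the vector of input voltages applied across the rows.

def crossbar_step(G, V):
    """Output per column: I_j = sum_i G[i][j] * V[i] (all columns at once)."""
    cols = len(G[0])
    return [sum(G[i][j] * V[i] for i in range(len(G))) for j in range(cols)]

# Hypothetical 2x3 grid and two input values.
G = [[1.0, 2.0, 0.5],
     [0.5, 1.0, 2.0]]
V = [3.0, 4.0]
print(crossbar_step(G, V))
```

A conventional processor would execute each of those multiplications and additions as separate instructions, fetching operands from memory every time; that data movement is exactly the bottleneck the memristor design removes.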

Using Memristors in Practical problems: 

Many science and engineering problems are very difficult to solve because of their complex forms and the numerous variables needed in their models. Memristors can be used to simplify such problems and model them correctly, getting the right answer in much less time and, with that, with less energy.

When it comes to partial differential equations, solving them exactly is near impossible; even getting an approximate value is a job for supercomputers. These problems involve loads of data, and having memristors sit in a supercomputer performing these calculations would save a lot of time and deliver results much faster.

Tuesday, 17 April 2018

This Fire Detecting Wallpaper Can Turn an Entire Room into an Alarm

Fire Detecting Wallpaper

Fire Detecting Wallpaper: The wallpaper of the future!

Have you ever heard of wallpaper that can not only detect a fire in the area but also help prevent its spread? Well, neither had I, until now that is. Researchers have come up with wallpaper that can detect fire and that is also fire resistant. Made of a material found in bone, teeth and hormones (yes, you read that right), this fire detecting wallpaper may actually stop the spread of flames and also alert you to the fact that your house is on fire.

Those colorful, beautiful wallpapers that you find in stores today are actually highly flammable. Made of materials such as plant cellulose fibers and synthetic polymers, they will help a fire spread in no time, making recovery of anything near impossible. The researchers behind the fire detecting wallpaper have swapped out those flammable materials for something strange yet environmentally friendly.

The strange material in fire detecting wallpaper:

Fire detecting wallpaper is made of a material commonly found in bone, teeth and hormones (God knows where they got the idea from). This material, known as hydroxyapatite, is fashioned into long, and by that I mean really long, nanowires to give it high flexibility.

This hydroxyapatite material, that is used to make the fire detecting wallpaper, actually helps the wallpaper in preventing the spread of flames.

Making the fire detecting wallpaper “smart”:

Researchers didn’t just make the fire detecting wallpaper resistant to fire; they also wanted to make it “smart”. To do this they added sensors to the wallpaper, made from drops of graphene oxide in an inky mixture.

The graphene oxide acts in two ways. At room temperature it is an insulator, blocking the flow of electricity. Under high temperature, say when there is a fire, it becomes conductive, which completes a circuit that sounds an alarm.
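That two-state behaviour is essentially a temperature-controlled switch, and can be caricatured in a few lines of code. The threshold value below is made up purely for illustration; the research does not state the actual switching temperature:

```python
# Toy model of the graphene-oxide sensor: insulating at room temperature,
# conductive when hot, so the alarm circuit only completes during a fire.
# The 120 C threshold is illustrative, not a figure from the research.

FIRE_THRESHOLD_C = 120.0

def is_conductive(temp_c):
    """Above the threshold the sensor conducts and the circuit closes."""
    return temp_c >= FIRE_THRESHOLD_C

def alarm_state(temp_c):
    return "ALARM" if is_conductive(temp_c) else "quiet"

print(alarm_state(22.0))   # room temperature: insulator, circuit open
print(alarm_state(300.0))  # fire: conductive, circuit closed, alarm sounds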

Researchers also boast that the alarm on the fire detecting wallpaper can sound for a prolonged period of more than five minutes.

So, to fit it all in one nice package, the fire detecting wallpaper is not only non-flammable but also highly temperature resistant, and it comes with an automatic fire alarm.

The wallpaper you find in today’s stores is highly flammable and won’t do anything when there is a fire in the house. The fire detecting wallpaper, besides its high flexibility, can also be processed into various shapes, made in different colors and produced with a commercial printer.

But all this won’t come cheap: because of its materials, the fire detecting wallpaper comes with a very steep price tag. It may be environmentally friendly but it is not really pocket friendly, making you think you’d rather take your chances with normal wallpaper when and if there is a fire.

The next thing on scientists’ agenda, therefore, is looking for more cost-effective ways of making fire detecting wallpaper that will be easy on a person’s wallet.

Tuesday, 20 February 2018

The Next Generation of Cameras Might see Behind Walls

Single Pixel Camera/Multi-Sensor Imaging/Quantum Technology


Users are very much taken with camera technology, which has given an enhanced look to the images they click. However, these technological achievements have more in store. Single-pixel cameras, multi-sensor imaging and quantum technologies are set to bring about great changes in the way we take images.

Camera research has been moving away from increasing the number of megapixels and toward merging camera data with computational processing. It is a radical new approach in which the incoming data may not look like an image at all. It becomes an image after a sequence of computational steps involving complex mathematics and modelling of how light travels through the scene or the camera.

This extra layer of computational processing removes the constraints of conventional imaging systems, and there may come a time when we no longer need a camera in the conventional sense. Instead, we would use light detectors that a few years ago would never have been considered for imaging.

However, they would be capable of incredible results, such as seeing through fog, inside the human body and behind walls.

Illuminations Spots/Patterns


The single-pixel camera is one example that relies on a simple source. Ordinary cameras use plenty of pixels, tiny sensor elements, to capture a scene that is typically illuminated by a single source.

However, one can also manage things the other way around: capturing information from several light sources with a single pixel. To achieve this one needs a controlled light source, such as a simple data projector, which illuminates the scene one spot at a time or with a sequence of various patterns.

For every illumination spot or pattern, one then measures the quantity of light reflected and adds it all together to create the final image. The evident drawback of taking a photo in this way is that one has to send plenty of illumination spots or patterns to obtain a single image, one that would take just one snapshot with a regular camera.
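The simplest version of this idea, illuminating one spot at a time, can be sketched in a few lines. The 3×3 “scene” of reflectivities below is made up for illustration; the point is that the single detector records one number per exposure, and the image is assembled from those numbers afterwards:

```python
# Sketch of single-pixel imaging in its simplest form: illuminate one spot
# at a time, record the total light the single detector sees, then
# reassemble the measurements into an image.

scene = [
    [0.0, 0.5, 0.0],
    [0.5, 1.0, 0.5],
    [0.0, 0.5, 0.0],
]

def single_pixel_scan(scene):
    """One measurement per illumination spot; each needs its own exposure."""
    measurements = []
    for row in scene:
        for reflectivity in row:
            # With only spot (r, c) lit, the detector sees just that light.
            measurements.append(reflectivity)
    return measurements

def reconstruct(measurements, rows, cols):
    """Reassemble the per-spot measurements into the final image."""
    return [measurements[r * cols:(r + 1) * cols] for r in range(rows)]

m = single_pixel_scan(scene)
print(len(m))                        # 9 exposures for a 9-pixel image
print(reconstruct(m, 3, 3) == scene)
```

With structured patterns instead of single spots, each measurement becomes a weighted sum over the whole scene and reconstruction turns into a linear inverse problem, but the bookkeeping above is the core idea.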

However, this type of imaging would make it possible to create otherwise impossible cameras, for instance ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

Quantum Entanglement 


These types of cameras could be used to take images through fog or thick snowfall. They could also imitate the eyes of some animals and automatically increase the resolution of an image based on what is portrayed. There is even the possibility of capturing images from light particles that have never interacted with the object being photographed.

This would exploit the idea of ‘quantum entanglement’, in which two particles can be connected in such a way that whatever happens to one happens to the other, even when they are far apart.

Single-pixel imaging is considered one of the simplest innovations in future camera technology and still depends on the traditional concept of what forms an image. At present we are seeing a surge of interest in methods where a lot of information is available but outdated techniques gather only a small portion of it.

It is here that multi-sensor approaches, involving a number of detectors pointed at the same scene, could be used. One ground-breaking example of this was the Hubble telescope, which produced images made from a combination of several different images taken at various wavelengths.

Photon & Quantum Imaging

One can now purchase commercial versions of this type of technology, such as the Lytro camera, which accumulates information about light intensity and direction on the same sensor to produce images that can be refocused after they have been taken. The next-generation camera will possibly look like the Light L16, which features ground-breaking technology based on over ten different sensors.

Their data are combined by a computer to provide a 50-megapixel, refocusable, re-zoomable, professional-quality image. The camera looks like a very thrilling Picasso interpretation of a crazy cellphone camera. Meanwhile, researchers have been working hard on the problems of seeing through fog, seeing beyond walls and imaging deep within the human body and brain. All these techniques depend on linking images with models explaining how light travels through or around various substances.

Another method that has been gaining ground relies on artificial intelligence to ‘learn’ to recognise objects from data. These methods are inspired by the learning processes of the human brain and are likely to play a major role in forthcoming imaging systems.

Single-photon and quantum imaging technologies have developed to the point where they can take images at extremely low light levels and record videos at exceptionally fast speeds, reaching a trillion frames per second. This is fast enough to capture images of light travelling across a scene.

Monday, 12 February 2018

The Signs Your Child Might Have Screen Addiction, Revealed

Children Engaged in Devices – Screen Addiction

With the progress in technology it is not surprising to see children engaged in devices, which has turned some of them toward screen addiction. Though paediatric experts and adolescent researchers have criticised the approach of starting them at an early age, the number of apps available for download in the Apple App Store for children below five years of age shows that many parents as well as app developers have been ignoring the warnings about these devices.

This exposure of children to screens, comprising video games, televisions, computers and tablets, could be the reason for the increasing addiction trend: screen addiction. While parents have considered this possibility in the past by asking ‘how much screen time is too much?’, it seems they had been phrasing the question wrongly.

As per recent research published in the journal Psychology of Popular Media Culture, how children use their devices, not how much time they spend on them, is the strongest predictor of the emotional or social issues linked with screen addiction. Whether a child spends one hour or five gazing at a screen does not in itself matter much, though five hours is hardly advisable.

An All-Consuming Activity

According to the new study there is more to it than the number of hours spent with the screen. What really matters is whether screen use causes issues in other areas of life or has become an all-consuming activity.

The question then is how precisely one can tell if a child is addicted to screens. One needs to identify the warning signs: whether screen time interferes with daily activities, causes conflict for the child or in the family, or seems to be the only activity that brings the child some happiness.

If a child displays these signs, it could be essential to take action, since screen addiction is connected to issues with relationships, emotions and conduct. On the positive side, it is most likely alright to keep children entertained with games on an iPad for some time.

Tablets and phones have replaced the television in soothing children and keeping them busy. It has been revealed, for instance, that one out of three kids uses gadgets well before they can even speak.

Kids using these devices at such a tender age could have a substantial effect on these toddlers’ mental health.

Technology Addiction – Influence Behaviour/Sleeping Pattern

Dr Richard Graham, a London-based consultant adolescent psychiatrist, and Dr Jay Watts, a clinical psychologist, have stated that technology addiction can influence a child’s behaviour and sleeping patterns. In an interview with MailOnline they highlighted five signs to watch for to tell whether a child is hooked.

They also emphasised the importance of a digital detox to resolve the obsession. Dr Graham, of the Capio Nightingale Hospital, a mental health hospital in central London, commented that when people feel an uncomfortable sense of withdrawal while not online, it is a sure sign that their relationship with technology is not being handled properly.

Dr Watts added that parents presently struggle to understand how crucial social media is to the present generation: the modern-day playground is virtual. When electronic devices begin to have more influence over behaviour than anyone or anything else, and when children get upset when deprived of the technology, that is the point at which one needs to begin changing things.

In the case of children, the main issue is the way they get addicted to technology and the way they feel when using it.

Unhealthy Dependence 

Kids who show any indication of severe distress and agitation when deprived of technology can be considered to have an unhealthy dependency. The condition is somewhat similar to a drug user’s: the unhealthy dependency means the child gets agitated when deprived of the use of technology.

Dr Graham clarifies that the addiction can manifest itself in other behaviour patterns. Technology can affect the child’s sleeping pattern, interfere with meal times and eating habits, and make the youngster act up during play time. Dr Graham further stated that addicted children can also be secretive and defensive regarding their devices and their use of them, and argue with parents on a regular basis.

Moreover, children addicted to technology may also withdraw from or ignore real-life activities, refusing to go to places where they would not be able to use their devices, such as the cinema. Dr Watts mentioned that parents are all but guaranteed at some point to feel that their kid has been spending too much time on a smartphone or online.

Restrict Time Spent on Usage of Technology 

The main thing is to talk to other parents at school, or to observe whether a child is more preoccupied than others. If there is a real difference, one needs to speak to the child about cybersafety, but also to examine what the child might find addictive and what this addiction could be helping the child avoid in the real world.

It seems essential to restrict the time children spend using technology in order to prevent an unhealthy dependence from forming, according to Dr Graham. Techniques include ensuring prolonged periods in which youngsters are absorbed in the ‘real world’ and play time with other kids.

Forming a set routine of time allowances is an excellent place to start. It can also be essential for adults to switch off their phones or keep them on silent while having meals and while spending quality time with family and friends, since the example set by parents can be fruitful and meaningful.

Monday, 18 December 2017

Small Earthquakes at Fracking Sites May Be Early Indicators of Bigger Tremors

7 fears about fracking: science or fiction?

The extraction of shale gas with fracking or hydraulic fracturing has revolutionized the production of energy in the United States, but this controversial technology, banned in France and New York State, continues to generate criticism and protests.

The detractors of the technique, which consists of injecting water and chemical additives at high pressure to fracture the rock containing the hydrocarbons, warn about the possible contamination of water, methane leaks and earthquakes, among other risks.

The British academy of sciences, the Royal Society, said in its 2012 report that the risks can be effectively managed in the UK "as long as the best operational practices are implemented," according to Richard Selley, professor emeritus at Imperial College London and one of the authors of the report.

But others, with contrary opinions, are equally strict. For example, regarding the possibility that fracking poses a risk of methane leakage, William Ellsworth, a professor of geophysics at Stanford’s School of Earth, Energy & Environmental Sciences, says it is not a matter of determining whether the wells may leak; the question must be what percentage of them leak.

In the middle of an intense and growing controversy about fracking, Stanford University researchers investigated what science says so far.

Can it cause earthquakes?

Two such earthquakes occurred in 2011 in England and led to the temporary suspension of exploration with fracking.

The first, which occurred in April of that year, near the city of Blackpool, reached 2.3 on the Richter scale and was registered shortly after the company Cuadrilla used hydraulic fracturing in a well.

On May 27, after fracturing resumed in the same well, a tremor of magnitude 1.5 was recorded.

The network of monitors of the British Geological Society, BGS, captured both events, which were not felt by the local inhabitants.

The company Cuadrilla and the government commissioned separate studies.

"Both reports attribute the seismic events to the fracturing operations of Cuadrilla," said the Royal Society, the British Academy of Sciences, in its joint report with the Royal Academy of Engineers on hydraulic fracturing, published in 2012.

Earthquakes can be unleashed mainly by high-pressure injection of wastewater, or when the fracturing process encounters a fault that was already under stress. However, the Royal Society noted that activities such as coal mining also produce micro-earthquakes. The suspension of fracking in the United Kingdom was lifted in December 2012, following the report of the Royal Society, which stated that fracking can be safe "provided that the best operational practices are implemented."

In the United States, a study published in March 2013 in the journal Geology linked the injection of wastewater with the 5.7-magnitude earthquake of 2011 in Prague, Oklahoma. The wastewater injection operations referred to in the study came from conventional oil exploitation. However, seismologist Austin Holland of the Oklahoma Geological Survey said that while the study showed a potential link between earthquakes and wastewater injection, "it is still the opinion of the Oklahoma Geological Survey that those tremors could have occurred naturally."

Another study published in July 2013 in the journal Science and led by Nicholas van der Elst, a researcher at Columbia University, found that powerful earthquakes thousands of kilometers away can trigger minor seismic events near wastewater injection wells.

The study indicated that seismic waves unleashed by the 8.8 earthquake in Maule, Chile, in February 2010, moved across the planet causing tremors in Prague, Oklahoma, where the Wilzetta oilfield is located.

"The fluids in the injection of sewage into wells are bringing existing faults to their limit point," said Van der Elst.

Can fracking contaminate the water?

At the request of the US Congress, the Environmental Protection Agency, EPA, is conducting a study on the potential impacts of hydraulic fracturing on sources of water for human consumption.

A final draft of the report will be released at the end of 2014 to receive comments and peer review. The final report "will probably be finalized in 2016," the EPA confirmed.

In 2011, Stephen Osborn and colleagues at Duke University published a study in the journal of the US National Academy of Sciences, according to which the researchers detected methane contamination of water sources near fracking exploration sites in the Marcellus formation in Pennsylvania and New York.

The study did not find, however, evidence of contamination by chemical additives or the presence of high salinity wastewater in the fluid that returns to the surface along with the gas.

For its part, the Royal Society, the British academy of sciences, said that the risk of fractures caused during fracking reaching the aquifers is low, as long as gas extraction takes place at depths of hundreds of meters or several kilometers and the wells, with their tubing and cementing, are built according to certain standards.

A case cited by the Royal Society in its 2012 report is that of the town of Pavillion, Wyoming, where fracking caused the contamination of water sources for consumption, according to an EPA study. Methane pollution was attributed in this case to poor construction standards and shallow depth of the well, at 372 meters. The study was the first of the EPA to publicly link hydraulic fracturing with water pollution.

However, as in the Duke University study, there were no cases of contamination by the chemical additives used in hydraulic fracturing.


How to control the use of chemical additives?

Trevor Penning, head of the toxicology center at the University of Pennsylvania, recently urged the creation of a working group on the impact of fracking, with scientists from Columbia, Johns Hopkins and other universities.

Penning said that in the United States "it is decided at the level of each state whether companies have an obligation to publicize the list of additives they use."

The industry has established a voluntary database of the additives used, on the FracFocus site. Penning explained that the additives used in fracking fluid can be very varied and of many kinds, such as surfactants, corrosion inhibitors and biocides.

Toxicologists work on the principle that no chemical is safe; rather, it is the dose that makes the poison. Additives that could cause concern if they exceed safe levels include substitutes for benzene, ethylene glycol and formaldehyde.

"The potential toxicity of wastewater is difficult to assess because many chemical additives used in hydraulic fracturing fluid are undisclosed commercial secrets," Penning added.

The scientist also said that "the potential toxicity of wastewater is difficult to evaluate because it is a complex mixture (the additives can be antagonistic, synergistic or additive in their effects)".

Anthony Ingraffea, professor of engineering at Cornell University, warned of the impact of the September 2013 floods in Colorado, where some 20,000 wells are located in a single county. "A good part of the infrastructure was destroyed, which means that the ponds with sewage tanks with chemical additives are now in the water courses, and there are leaks from damaged gas pipelines. The clear lesson is that infrastructure for fracking should never be built in floodplains."

What is done with wastewater?

These waters are known as flowback or reflux water; that is, the injected water, with its chemical additives and sand, which flows back when the gas starts to come out.

Approximately 25% to 75% of the injected fracturing fluid returns to the surface, according to the Royal Society. This wastewater is stored in covered open-pit tanks dug into the ground, treated and reused, or injected at high pressure into rock formations. The danger of wastewater leaks is not unique to the extraction of shale gas but is common to many industrial processes, notes the Royal Society.

“The wastewater may contain Naturally Occurring Radioactive Materials, NORM, which are present in the shale rock in quantities significantly lower than the exposure limits," says the Royal Society report.

Can it exhaust water resources?

The use of large quantities of water in fracking operations is a cause of concern for some. "For natural gas, for example, fracking requires millions of gallons of water (around 2 to 5 million, or even more than 10 million, that is, from 7 to 18 or up to 37 million liters) for fracturing, which is several times more than conventional extraction requires," said John Rogers, senior energy analyst and co-manager of the Energy and Water Initiative of the Union of Concerned Scientists.

"The extraction of shale gas by fracking consumes on average 16 gallons of water per megawatt-hour, while conventional gas extraction uses 4. That is, fracking requires four times what conventional extraction requires," said Rogers.

"That amount of water is less than what is involved in the extraction of coal, but the use of water is very localized and can be very important in the local scene, in terms of what would be available for other uses."

The Water-Smart Power study of the Union of Concerned Scientists points out that about half of the hydraulic fracturing operations in the United States occur in regions with high or extremely high water stress, including Texas and Colorado.

Melissa Stark, global director of new energies at the consultancy Accenture and author of the report "Shale gas water and exploitation", admits that the extraction of shale gas with hydraulic fracturing uses a lot of water (about 20 million liters per well), but notes that "it does not use more water than other industrial processes, such as irrigation for agriculture. The volumes required may seem large, but they are small compared with other water uses, for agriculture, electric power generation and municipal use," she said.

Can there be methane leaks?
Anthony Ingraffea, professor of engineering at Cornell University in the United States, says that it is not about determining if wells can leak, but the question must be, what percentage has leaks?

Ingraffea analyzed the situation of the new 2012 wells in the Marcellus formation in Pennsylvania, based on the comments of the inspectors, according to records of the Pennsylvania Department of Environmental Protection.

According to Ingraffea, the inspectors registered 120 leaky wells, that is, they detected faults and leaks in 8.9% of the gas and oil exploration wells drilled in 2012.

A study published in September 2013 by the University of Texas, sponsored among others by nine oil companies, found that while methane leaks from shale gas extraction operations are substantial (more than one million tons per year) they were less than the estimates of the US Environmental Protection Agency.

However, the Association of Physicians, Scientists and Engineers for a Healthy Energy in the USA, of which Anthony Ingraffea is president, questioned the scientific rigor of that study, noting that the sample of 489 wells represents only 0.14% of the wells in the country, and that the wells analyzed were not selected at random "but at places and times selected by the industry".

Some widely reported images of tap water that catches fire when a match is brought near could be explained by the presence of methane prior to drilling.

"We must not forget that methane is a natural constituent of groundwater, and in some places like Balcombe, where there were protests, oil flows naturally to the surface," said Richard Selley, emeritus professor of petroleum geology at Imperial College.

"We must remember that when a well is drilled and it crosses the aquifer zone, three steel casings are placed, surrounded by cement, below the aquifer," added Selley.

What is the impact on global warming?

Between 1981 and 2005, US carbon emissions increased by 33%. But since 2005 they have dropped by 9%. The reduction is due in part to the recession, but according to the US Energy Information Administration (EIA), about half of that reduction is due to shale gas.
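Note that the two percentage changes compound rather than cancel; a minimal sketch of the arithmetic, using 1981 emissions as an arbitrary index of 100:

```python
# Compound the two percentage changes quoted above,
# with 1981 US carbon emissions set to an arbitrary index of 100.

baseline_1981 = 100.0
level_2005 = baseline_1981 * 1.33   # +33% between 1981 and 2005
level_now = level_2005 * 0.91       # -9% since 2005

print(round(level_now, 2))          # 121.03: still ~21% above the 1981 level
```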

Globally, coal provides 40% of the world's electricity, according to the International Energy Agency (IEA). Advocates of shale gas extraction say it is cleaner than coal and can serve as a transition fuel while the use of renewable sources such as solar and wind energy expands.

In Spain, for example, renewable energies "are approaching 12%, and there is a European Union target for 20% of European energy to be renewable by 2020," said Luis Suárez, president of the Official College of Geologists of Spain (ICOG).

But others point out that the gas extracted by hydraulic fracturing is methane, a much more potent greenhouse gas than carbon dioxide.

According to the Intergovernmental Panel on Climate Change (IPCC), a molecule of methane has the warming effect of 72 molecules of carbon dioxide over the 20 years after emission, and of 25 molecules of carbon dioxide over 100 years.
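These equivalence factors are known as global warming potentials (GWPs); strictly they compare equal masses rather than molecules, but as a rough sketch they convert a methane leak into CO2-equivalent terms. The one-million-ton annual leak estimate is the figure from the University of Texas study cited above:

```python
# Convert a methane emission into CO2-equivalent using the
# IPCC warming-potential factors quoted above.

GWP_20Y = 72    # one unit of CH4 ~ 72 units of CO2 over 20 years
GWP_100Y = 25   # ~25 units of CO2 over 100 years

methane_tons = 1_000_000   # annual leak estimate cited earlier

print(methane_tons * GWP_20Y)    # 72000000 tons CO2e on a 20-year horizon
print(methane_tons * GWP_100Y)   # 25000000 tons CO2e on a 100-year horizon
```

The choice of time horizon clearly dominates the result, which is why the 20-year versus 100-year framing is itself contested.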

Robert Howarth and colleagues at Cornell University estimated that between 4 and 8% of the total methane production of a well escapes into the atmosphere, and added that there are also emissions from the flowback water that returns to the surface along with the gas after fracturing.

But this analysis is controversial. Lawrence Cathles, also of Cornell University, says the high warming potential of methane over 20 years must be weighed against the fact that methane has a much shorter life in the atmosphere than CO2.

Robert Jackson of Duke University in North Carolina says that instead of worrying about emissions from fracking itself, we should concentrate on leaks in the distribution chain. "In the city of Boston alone we found 3,000 methane leaks in the pipes," Jackson told New Scientist magazine.

Wednesday, 4 October 2017

Biological Clock Discoveries by 3 Americans Earn Nobel Prize

Nobel Prize
The discoverers of the 'internal clock' of the body, Nobel Medicine 2017

The winners are Jeffrey C. Hall, Michael Rosbash, and Michael W. Young

US scientists Jeffrey C. Hall, Michael Rosbash and Michael W. Young today won the 2017 Nobel Prize in Medicine "for their discoveries of the molecular mechanisms that control the circadian rhythm," according to the jury of the Karolinska Institute in Stockholm, which is responsible for the award. The prize is endowed with nine million Swedish kronor, about 940,000 euros.

Thanks in part to their work, we now know that living beings carry an internal clock in their cells, synchronized with Earth's 24-hour rotation. Many biological phenomena, such as sleep, occur rhythmically around the same time of day thanks to this inner clock. Its existence was suggested centuries ago. In 1729, the French astronomer Jean-Jacques d'Ortous de Mairan observed the case of mimosas, plants whose leaves open to the sunlight during the day and close at dusk. De Mairan found that this cycle repeated even in a dark room, suggesting the existence of an internal mechanism.

In 1971, Seymour Benzer and his student Ronald Konopka of the California Institute of Technology took a momentous leap in the research. They induced mutations in the offspring of fruit flies with chemicals. Some of these new flies had alterations in their normal 24-hour cycle: in some it was shorter and in others longer, but in all of them the perturbations were associated with mutations in a single gene. The discovery could have earned the Nobel, but Benzer died in 2007, at age 86, of a stroke, and Konopka died in 2015, at age 68, of a heart attack.

The Nobel finally went to Hall (New York, 1945), Rosbash (Kansas City, 1944) and Young (Miami, 1949). In 1984 the three again used flies to isolate that gene, named "period" and associated with the control of the normal biological rhythm. Subsequently, they revealed that this gene and others regulate themselves through their own products, different proteins, generating oscillations of about 24 hours. It was "a paradigm shift", in the words of the Argentine neuroscientist Carlos Ibáñez of the Karolinska Institute: each cell has a self-regulating internal clock.

The scientific community has since established the importance of this mechanism for human health. The inner clock is involved in the regulation of sleep, hormone release, eating behaviour and even blood pressure and body temperature. If the pace of life does not follow this internal script, as happens in people who work night shifts, the risk of different diseases, such as cancer and some neurodegenerative disorders, can increase, says Ibáñez. The syndrome of rapid time-zone change, better known as jet lag, is a clear sign of the importance of this internal clock and its mismatches.

The Karolinska researcher gives the example of a 24-hour cycle in which the internal clock anticipates and adapts the body's physiology to the different phases of the day. The day begins with deep sleep and a low body temperature; the release of cortisol at dawn then raises blood sugar, and the body readies its energy to face the day. When night falls, blood pressure peaks and melatonin, a hormone linked to sleep, is secreted.

These inner rhythms are known as circadian, from the Latin words circa (around) and dies (day). The scientific community now knows that these molecular clocks emerged very early in living things and were preserved throughout their evolution. They exist both in single-celled life forms and in multicellular organisms such as fungi, plants, animals and humans.

At the time of the discovery, Hall and Rosbash were working at Brandeis University in Waltham, and Young was researching at Rockefeller University in New York. The recognition follows the usual pattern of the Swedish awards: men have won 97% of the Nobel prizes in science since 1901. In the category of Medicine the statistics improve slightly: 12 of the 214 laureates are women, or 5.6%.

Monday, 18 September 2017

Engineers Developing Methods to Construct Blood Vessels Using 3D Printing Technology

3D Printing Technology
From time to time, interesting news arises about 3D printing technology in the field of health. In the near future, this technology will allow tissues to be created on demand to repair any organ affected by illness. Medical advances are emerging day by day, and 3D printing technology is one of the most astonishing in the field of medical science.

However, in spite of the promise of these and other advances, to date it has only been possible to create thin tissues of living cells in the laboratory using 3D printing technology. When researchers tried to create tissues thicker than several layers of cells, the cells in the intermediate layers died from lack of oxygen and the impossibility of eliminating their waste.

They had no network of blood vessels to deliver oxygen and nutrients to each cell. The challenge, therefore, was clear: if a network of blood vessels could be created artificially using 3D printing technology, larger and more complex cell tissues could be developed.

To solve this problem, the team led by Professor Changxue Xu of the Department of Industrial, Manufacturing and Systems Engineering at the Edward E. Whitacre Jr. College of Engineering has used a 3D printer specially adapted for this purpose, with three different types of bio-ink. The first printhead extrudes a bio-ink of the extracellular compound, the biological material that binds the cells in a tissue. The second extrudes a bio-ink that contains extracellular material and living cells.

An alternative to more complex installations

The creation of model blood vessels to aid in the study of diseases, such as strokes, can be complicated, costly and time-consuming, and the results are not always truly representative of a human vessel. Changxue Xu's research has produced a new method of creating models of veins and arteries with 3D printing technology that is more efficient, less expensive and more accurate. Xu and his team have created vascular channels using 3D printing technology.

An important advance is the ability to establish multiple layers of cells in the channels. Normally, when these microfluidic vascular chips are made, they have only one layer of cells. But the blood vessels in the body are composed of three to four different types of cells. The innermost cells, the endothelial cells, are the ones that come into contact with the blood, while the other cell layers support the inner ones. If there is an injury or a blood clot, an entire reaction takes place between these cells.

3D printing technology has now made a difference in manufacturing. "We can use 3D printing technology to create the mold, and use that mold to inject any gel and cells in whatever shape we want," says Changxue Xu. The difficulty so far was that much of this work has to be done in "clean rooms", rooms that are environmentally controlled and ultra-disinfected to prevent contamination. Xu does not have such a room, so that work has to be done at other universities.

Tuesday, 5 September 2017

Supercapacitive Performance of Porous Carbon Materials Derived from Tree Leaves


Converting Fallen Leaves – Porous Carbon Material

Researchers in China have found an innovative way of converting dried fallen tree leaves into a porous carbon material that could be used to produce high-tech electronics. In a study published in the Journal of Renewable and Sustainable Energy, the researchers describe the procedure for converting tree leaves into a form that can be integrated into electrodes as active materials. First the dried leaves are ground into a powder and then heated to 220 degrees Celsius for about 12 hours, which produces a powder composed of small carbon microspheres.

The carbon microspheres are then treated with a solution of potassium hydroxide and heated by raising the temperature in steps from 450 to 800 degrees Celsius. The chemical treatment corrodes the surface of the carbon microspheres, making them extremely porous.

The final product, a black carbon powder, has a very high surface area owing to the many small holes chemically etched into the surface of the microspheres. This high surface area gives the final product its unusual electrical properties.

Porous Microspheres

Led by Hongfang Ma of Qilu University of Technology in Shandong, the researchers ran a series of standard electrochemical tests on the porous carbon microspheres to quantify their potential for use in electronic devices.

The current-voltage curves for these materials showed that the material could make an excellent capacitor. Further tests showed that the materials were in fact supercapacitors, with specific capacitances of 367 farads per gram.

That is more than three times the value seen in some graphene supercapacitors. A capacitor is a widely used component that stores energy by holding a charge on two conductors separated from each other by an insulator.

A supercapacitor stores 10 to 100 times as much energy as an ordinary capacitor and can accept and deliver charge much faster than a typical rechargeable battery. Supercapacitive materials therefore hold potential for a wide range of energy-storage needs, particularly in computer technology and hybrid or electric vehicles.
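To put the 367 F/g figure in context, the energy stored in any capacitor follows E = ½CV². A minimal sketch; the 1-volt operating voltage here is an illustrative assumption, not a value from the study:

```python
# Energy stored per gram of the leaf-derived material,
# using the standard capacitor formula E = 0.5 * C * V^2.

specific_capacitance = 367.0  # farads per gram, from the study
voltage = 1.0                 # volts; illustrative assumption only

energy_per_gram = 0.5 * specific_capacitance * voltage ** 2
print(energy_per_gram)        # 183.5 joules per gram at 1 V
```

Because energy grows with the square of voltage, the usable cell voltage matters as much as the capacitance itself when comparing storage materials.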

Enhancing the Electrochemical Properties

The roadsides of northern China are lined with deciduous phoenix trees, which produce abundant fallen leaves in autumn. These leaves are usually burned in the colder months, aggravating the country's air pollution problem.

The researchers in Shandong, China, have found a new way of addressing this problem by converting the waste biomass into porous carbon materials that can be used in energy-storage technology. Besides tree leaves, the team and others have also succeeded in converting potato waste, corn straw, pine wood, rice straw and other agricultural wastes into carbon electrode materials.

Professor Ma and her colleagues hope to further improve the electrochemical properties of these porous carbon materials by refining the preparation procedure and allowing adjustment of the raw materials.

Wednesday, 12 July 2017

iPhone 8 to ditch fingerprint sensor for face scanner, reports say

iPhone 8

iPhone 8 – Redesigned Security System

Apple's upcoming iPhone 8 is expected to feature a redesigned security system in which users can unlock the device using their face instead of their fingerprints. The 10th-anniversary iPhone is expected to have a radical redesign that includes a security system scanning users' faces to check who is using the device.

According to Bloomberg, the 3D scanning system would replace Touch ID as a means of verifying payments, logging in to apps and unlocking the phone. It would work at various angles, so the iPhone could be unlocked by merely looking at it, whether it is lying flat on a table or held upright. The scanning system has reportedly been designed for speed and precision, and can scan the user's face and unlock the device within a few hundred milliseconds.

Since it analyses 3D rather than 2D images, it is likely to be able to distinguish between a person's face and a photograph of that person. Apple could also use eye-scanning technology, currently available in Samsung's Galaxy S8, to further strengthen the device's security.

Face Scanning Technology

Bloomberg reported that the face-scanning technology could be more secure than Touch ID, first released in 2013 on the iPhone 5S, since it draws on more identifiers. Apple has claimed that its fingerprint scanner has only a 1-in-50,000 chance of being unlocked by a stranger's fingerprint. According to Ming-Chi Kuo, an analyst with a reliable track record, the iPhone 8 will feature an edge-to-edge OLED screen with a higher screen-to-body ratio than any smartphone available at the moment.

Apple will probably remove the Home button and the Touch ID scanner to make room for the display. Kuo has also predicted that Apple will release three new phones in September: the iPhone 8, iPhone 7S and iPhone 7S Plus. The iPhone 8 will feature the most vivid redesign of the three, with a 5.2-inch screen held in a device the same size as the iPhone 7. It will also come in fewer colour options and will have a glass front, with steel edges towards the back.

New Chip Dedicated to Processing Artificial Intelligence

A well-connected Apple blogger, John Gruber, has mentioned that the top iPhone could be named the 'iPhone Pro', suggesting that it could cost $1,500 or more. The remaining two devices will feature LCD screens and will be available in 4.7-inch and 5.5-inch sizes. Like the present iPhone 7, these devices will probably have a Home button with Touch ID.

If Kuo's predictions are accurate, the three phones will have a Lightning port with an embedded USB-C controller and storage options of 64GB or 256GB. They will also come with a new chip dedicated to processing artificial intelligence, which is currently being tested.

Monday, 10 July 2017

Watching Cities Grow

High-Resolution Civilian Radar Satellite

The world's major cities keep growing, and according to United Nations estimates, half of the world's population currently lives in cities. By 2050 the figure is expected to climb to two thirds of the world's population.

Xiaoxiang Zhu, Professor for Signal Processing in Earth Observation at TUM, says this growth places high demands on building and infrastructure safety, since failures could threaten thousands of human lives at once. Zhu and her team have developed a method for the early detection of potential dangers: subterranean subsidence, for instance, could cause the collapse of buildings, bridges, tunnels or even dams.

The new method makes it possible to detect and visualize changes as small as one millimetre per year. Data for the latest urban images comes from the German TerraSAR-X satellite, one of the highest-resolution civilian radar satellites in the world. Since 2007 the satellite, circling the Earth at an altitude of approximately 500 kilometres, has been sending microwave pulses to the Earth and collecting their echoes. Zhu explains that at first these measurements produced only a two-dimensional image with a resolution of one metre.

Generating Highly Accurate Four-Dimensional City Models

The TUM professor works in partnership with the German Aerospace Center (DLR), where she also leads her own working group. DLR is in charge of the operation and scientific use of the satellite.

The resolution of the images is limited by the fact that reflections from different objects at the same distance from the satellite lay over each other, an effect that reduces the three-dimensional world to a two-dimensional image. Zhu not only created her own algorithm that makes it possible to reconstruct the third, and even a fourth, dimension, but also set a world record at the same time.

Four-dimensional point clouds with a density of three million points per square kilometre have been reconstructed. This rich recovered information has made it possible to generate highly accurate four-dimensional city models.

Radar Measurements to Reconstruct Urban Infrastructure

The trick is that the scientists use images taken from slightly different viewpoints. Every eleven days the satellite flies over the region of interest, but its orbit position is not always exactly the same. The researchers exploit these orbital variations of up to 250 metres in radar tomography to localize each point in three-dimensional space.

This approach uses the same principle as computed tomography, which develops a three-dimensional view of the interior of the human body: several radar images taken from different viewpoints are combined to create a three-dimensional image. Zhu states that since this alone yields only poor resolution in the third dimension, an additional compressive-sensing method is applied that improves the resolution by a factor of 15.

With the radar measurements from TerraSAR-X, the scientists can reconstruct urban infrastructure on the surface of the Earth with great accuracy, for instance the 3D shape of individual buildings. The method has already been used to generate highly precise 3D models of Berlin, Paris, Las Vegas and Washington DC.

Friday, 7 July 2017

Hot Electrons Move Faster Than Expected

 Hot Electrons

Ultrafast Motion of Electrons

New research could give rise to solid-state devices that exploit excited electrons. Engineers and scientists at Caltech have for the first time directly observed the ultrafast motion of electrons immediately after they are excited by a laser. They found that these electrons diffuse through their surroundings faster and farther than previously anticipated.

This behaviour, called 'super-diffusion', had been hypothesized but never observed. A team headed by Marco Bernardi of Caltech and the late Ahmed Zewail documented the motion of the electrons using a microscope that captured images with a shutter speed of a trillionth of a second at nanometre-scale spatial resolution. Their findings appeared in a study published on May 11 in Nature Communications.

The excited electrons displayed a diffusion rate 1,000 times higher than before excitation. Though the phenomenon lasted only a few hundred trillionths of a second, it raises the possibility of manipulating hot electrons in this fast regime to transport energy and charge in novel devices.

Bernardi, assistant professor of applied physics and materials science in Caltech's Division of Engineering and Applied Science, said their work demonstrates the existence of a fast transient that lasts a few hundred picoseconds, during which electrons move much faster than at room temperature, indicating that they can cover longer distances in a given period of time when manipulated with lasers.

Ultrafast Imaging Technology

He added that this non-equilibrium behaviour could be employed in novel electronic, optoelectronic and renewable-energy devices, as well as to uncover new fundamental physics. Bernardi's colleague, Nobel laureate Ahmed Zewail, the Linus Pauling Professor of Chemistry, professor of physics and director of the Physical Biology Center for Ultrafast Science and Technology at Caltech, passed away on 2 August 2016.

The research was made possible by scanning ultrafast electron microscopy, an ultrafast imaging technique pioneered by Zewail that can create images with picosecond time resolution and nanometre spatial resolution. Bernardi developed the theory and computer models that explained the experimental results as an indicator of super-diffusion.

Bernardi plans to continue the research by trying to answer fundamental questions about excited electrons, such as how they equilibrate among themselves and with atomic vibrations in a material, together with applied ones, such as how hot electrons could increase the efficiency of energy-conversion devices like solar cells and LEDs.

Super Diffusion of Excited Carriers in Semiconductors

The paper is entitled 'Super Diffusion of Excited Carriers in Semiconductors'. Co-authors include former Caltech postdoc Ebrahim Najafi, the lead author of the paper, and former graduate student Vsevolod Ivanov. The research was supported by the National Science Foundation, the Air Force Office of Scientific Research, the Gordon and Betty Moore Foundation, and the Caltech-Gwangju Institute of Science and Technology (GIST) program.

Saturday, 1 July 2017

Sensor Solution: Sensor Boutique for Early Adopters

Sensor Boutique
Every chemical substance absorbs a highly individual fraction of infrared light. This absorption can be used to recognise substances by optical methods, much like a human fingerprint.

To elaborate on this concept: when infrared radiation within a certain range of wavelengths is absorbed by molecules, they are excited to a higher vibrational level, in which they rotate and vibrate in a typical, distinctive "fingerprint" pattern. These patterns can be used to identify specific chemical species. Such methods are used, for example, in the chemical industry, but they also have uses in the health sector and in criminal investigation. A company planning a new project often needs an individually tailored sensor solution.
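The link between absorption at a fingerprint wavelength and the amount of substance present is commonly described by the Beer-Lambert law, A = ε·l·c. A minimal sketch; the coefficient and concentration below are made-up illustrative values, not data from MIRPHAB:

```python
# Beer-Lambert law: absorbance A = epsilon * l * c, relating the
# IR absorption at a fingerprint wavelength to the concentration.

epsilon = 150.0   # molar absorption coefficient, L/(mol*cm); illustrative
path_cm = 1.0     # optical path length through the sample, cm
conc = 0.01       # concentration, mol/L; illustrative

absorbance = epsilon * path_cm * conc
transmittance = 10 ** (-absorbance)    # fraction of light transmitted

print(round(absorbance, 2))      # 1.5
print(round(transmittance, 4))   # 0.0316
```

Measuring how much light survives the pass through the sample thus gives the concentration directly, which is what an MIR sensor ultimately reports.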

The EU-funded pilot line MIRPHAB (Mid InfraRed Photonics devices fABrication for chemical sensing and spectroscopic applications) supports companies in search of a suitable system and helps them develop sensor and measurement technology in the mid-infrared (MIR). The Fraunhofer Institute for Applied Solid State Physics IAF is participating in this project.

Pilot line for ideal spectroscopy solutions

A company looking for a sensor solution has very individual needs, for example if it has to identify a particular substance in a production process. These range from the substances to be detected, to the number of sensors required, to the speed of the production process. In most cases no one-size-fits-all solution suffices, and several suppliers are required to develop the optimal individual solution. This is where MIRPHAB comes into the picture and proves very useful.

Leading European research institutes and companies in the MIR field have joined forces to provide customers with tailor-made, best-suited offers from a single source. Interested parties can get in touch with a central contact person, who then compiles the best possible solution from the MIRPHAB members' component portfolios according to a modular principle.

EU funding supports the development of individual MIR sensor solutions within the MIRPHAB framework, in order to strengthen European industry in the long run and expand its leading position in chemical analysis and sensor technology. This considerably lowers the investment costs and thus the entry barrier for companies into the MIR field.

Companies that previously faced high costs and development effort can now look to a high-quality MIR sensor solution, thanks to its combination with the virtual infrastructure developed by MIRPHAB. MIRPHAB also gives companies access to the latest technologies, providing them an early-adopter advantage over the competition.

A custom-made source for MIR lasers

The Freiburg-based Fraunhofer Institute for Applied Solid State Physics IAF, together with the Fraunhofer Institute for Photonic Microsystems IPMS in Dresden, provides a central component of the MIRPHAB sensor solution. The Fraunhofer IAF contributes quantum cascade lasers that emit laser light in the MIR range. In this type of laser, the wavelength range of the emitted light is spectrally broad and can be adapted as required during manufacturing. To select a particular wavelength within the broad spectral range, an optical diffraction grating is used to pick out one wavelength and couple it back into the laser chip. The wavelength can be tuned continuously by turning the grating. The grating is fabricated at the Fraunhofer IPMS in miniaturized form using so-called Micro-Electro-Mechanical-Systems (MEMS) technology, which makes it possible to oscillate the grating at frequencies up to one kilohertz. This in turn allows the wavelength of the laser source to be tuned up to a thousand times per second over a large spectral range.
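The wavelength selection by grating angle can be sketched with the standard grating equation; for an external-cavity laser in Littrow configuration this is m·λ = 2·d·sin(θ). The groove spacing below is an illustrative value, not a MIRPHAB specification:

```python
import math

# Littrow grating equation: m * wavelength = 2 * d * sin(theta).
# Turning the grating (changing theta) tunes the selected wavelength.

d_um = 6.0   # groove spacing in micrometres; illustrative value
m = 1        # diffraction order

def selected_wavelength(theta_deg):
    """Wavelength (micrometres) fed back into the laser chip."""
    return 2 * d_um * math.sin(math.radians(theta_deg)) / m

print(round(selected_wavelength(30.0), 3))  # 6.0 um at a 30-degree angle
print(round(selected_wavelength(45.0), 3))  # 8.485 um after rotating
```

This is why oscillating the MEMS grating at up to a kilohertz sweeps the emission wavelength across the laser's broad gain spectrum a thousand times per second.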
The Fraunhofer Institute for Production Technology IPT in Aachen is also involved in MIRPHAB, making the manufacturing of the lasers and gratings more efficient and optimizing them for pilot-series fabrication. With its expertise, it turns the production of the rapidly tunable MIR laser into industrially viable manufacturing processes.

Process monitoring in real time

Currently, many applications in the field of spectroscopy still work in the visible or near-infrared range and use comparatively weak light sources. The solutions MIRPHAB provides are based on infrared semiconductor lasers, which have comparatively high light intensity and thus open up completely new applications. Up to 1,000 spectra per second can be recorded with the MIR laser source, which allows, for example, real-time automated monitoring and control of biotechnological processes and chemical reactions. MIRPHAB's contribution is therefore considered vital to the factory of the future.

Tuesday, 27 June 2017

Space Robot Technology Helps Self-Driving Cars and Drones on Earth

Helping Robots Navigate Independently
The key to making fleets of self-driving cars and grocery delivery by drone a reality could come from an unlikely source: autonomous space robots.

Marco Pavone, an assistant professor of aeronautics and astronautics, has been creating technologies to help robots adapt to unknown and changing environments. Before coming to Stanford, Pavone worked in robotics at NASA's Jet Propulsion Laboratory, and he has maintained relationships with NASA centers as well as collaborations with other departments at Stanford. He views his work on space and Earth technologies as complementary.

He commented that, in a sense, some robotics techniques designed for autonomous cars could be very useful for spacecraft control. Similarly, the algorithms he and his students have devised to help robots make decisions and assessments on their own within a fraction of a second could aid space exploration as well as improve the operation of cars and drones on Earth.

One of Pavone's projects centres on helping robots navigate independently so they can bring space debris out of orbit, deliver tools to astronauts, and grasp spinning, speeding objects in the vacuum of space.
Gecko-Inspired Adhesives
There is no margin for error when grabbing objects in space. Pavone explained that when you approach an object in space, if you are not very careful to grasp it at the moment of contact, the object will float away from you. Bumping an object in space makes recovering it very difficult.

To solve the grasping problem, Pavone teamed up with Mark Cutkosky, a professor of mechanical engineering who has spent the last decade perfecting gecko-inspired adhesives.

The gecko grippers use a gentle approach and a simple touch to 'grasp' an object, enabling easy capture and release of spinning, unwieldy space debris. However, the delicate manoeuvring needed for grasping in space is not an easy job. Pavone stated that operating in close proximity to other objects, whether spacecraft or debris, requires advanced decision-making capabilities.

Pavone and his co-workers developed systems that enable a space robot to respond independently to such changing situations and competently grab space objects with its gecko grippers.
Perception-Aware Planning
The resulting robot can move and grab in real time, updating its decisions at a rate of several thousand times a second. This kind of decision-making technology would also be beneficial for solving navigation problems with Earth-bound drones.

Graduate student Benoit Landry stated that for these types of vehicles, navigating at high speed in proximity to buildings, people and other flying objects is difficult to perform. He stressed that there is a delicate interplay between making decisions and perceiving the environment, adding that in this respect several aspects of decision making for autonomous spacecraft are directly relevant to drone control.

Landry and Pavone have been working on 'perception-aware planning', which lets a drone weigh fast routes against routes that let it 'see' its surroundings and better estimate where it is. The work is now being extended to handle interactions with humans, a key step in deploying autonomous systems such as drones and self-driving cars.
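The core trade-off in perception-aware planning can be sketched in a few lines. This is an illustrative toy, not Landry and Pavone's actual algorithm: each candidate route gets a cost that mixes travel distance with a penalty for poor visibility of the landmarks the drone localizes against (the `alpha` weight and the route data are invented values).

```python
# Toy perception-aware route selection (illustrative only).

def route_cost(length, visibility, alpha=5.0):
    """Lower is better. Short routes are fast, but routes with poor
    landmark visibility increase localization uncertainty, so they
    pay a penalty weighted by alpha (assumed value)."""
    return length + alpha * (1.0 - visibility)

def pick_route(routes, alpha=5.0):
    """routes: list of (name, length_m, visibility in [0, 1])."""
    return min(routes, key=lambda r: route_cost(r[1], r[2], alpha))

routes = [
    ("direct", 10.0, 0.2),   # short, but landmarks barely visible
    ("detour", 12.0, 0.9),   # longer, but keeps landmarks in view
]
print(pick_route(routes)[0])  # -> detour
```

With a high perception weight the planner prefers the slightly longer detour because it keeps landmarks in view, and so keeps the drone's position estimate sharp.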


Reduced-Gravity Environments
Landry also mentioned that Pavone's background at NASA has been a good complement to the academic work. When a robot lands on a small solar-system body such as an asteroid, additional challenges arise.

These environments have completely different gravity from Earth's. Pavone noted that if you were to drop an object from waist height there, it would take a couple of minutes to settle to the ground. Ben Hockman, a graduate student in Pavone's lab, has worked on a cubic robot called Hedgehog designed to handle low-gravity environments such as asteroids.
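Pavone's waist-height example is easy to check with the constant-acceleration fall time t = sqrt(2h/g). A quick sketch; the asteroid gravity value is an assumed order-of-magnitude figure, not a number from the article:

```python
import math

def fall_time(height_m, g):
    """Time for an object dropped from rest to fall height_m under
    constant gravity g: t = sqrt(2h/g). Ignores rotation, terrain,
    and the body's irregular gravity field."""
    return math.sqrt(2.0 * height_m / g)

g_earth = 9.81      # m/s^2
g_asteroid = 1e-4   # m/s^2, rough figure for a small asteroid (assumed)

print(round(fall_time(1.0, g_earth), 2))     # -> 0.45 (seconds)
print(round(fall_time(1.0, g_asteroid), 1))  # -> 141.4 (seconds)
```

A one-metre drop that takes under half a second on Earth stretches to roughly two and a half minutes under a small asteroid's gravity, matching the "couple of minutes" figure.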

The robot crosses uneven, rugged, low-gravity terrain by hopping rather than driving like a traditional rover. Ultimately, Pavone and Hockman want Hedgehog to be able to navigate and carry out tasks without being explicitly told how by a human located millions of miles away. Hockman mentioned that the current Hedgehog robot is designed for reduced-gravity environments, though it could be adapted for Earth.

It would not hop quite as far here, since we have more gravity, but it could be used to cross rugged terrain where wheeled robots cannot go. Hockman views the research he has been doing with Pavone as core scientific exploration, adding that science attempts to answer the difficult questions we don't know the answers to, while exploration seeks out whole new questions we don't yet know how to ask.

Monday, 26 June 2017

Sony Unveils New 'Spider-Man' Game at E3 Expo


Sony's Updated Game: 'Spider-Man'

Sony recently unveiled a new 'Spider-Man' game for the PlayStation console at the Electronic Entertainment Expo (E3) in Los Angeles. Spider-Man is expected to be released in 2018 and is being developed by Insomniac Games, the studio behind PlayStation titles such as 'Resistance' and 'Ratchet & Clank'.

Unveiling the 'Spider-Man' game, Shawn Layden, president and CEO of Sony Interactive Entertainment America, commented that the future is here and now with PlayStation 4 Pro and PS VR. Virtual reality (VR) is fast becoming a new battleground in gaming, with developers seeking to win over fans with immersive headsets and accessories.

Sony Corp mentioned that in the previous week it had sold over one million units of its virtual reality headset worldwide and was ramping up production. Also at E3, Sony announced that the cult game 'Shadow of the Colossus' would be getting a high-definition remake for PlayStation 4. That game, along with the next 'God of War' instalment, is expected to be released next year. Spidey games have mostly not been good since Spider-Man 2, but we live in hope.


Reclaiming Earlier Glory

The reason Spider-Man 2 remains the standard for Spidey games comes down to its sandbox feel. The new game, which follows an original story rather than a film tie-in, looks intent on reclaiming some of that earlier glory. Although there has been no firm announcement, it certainly looks as if it is using the Spider-Man 2 model.

Perhaps the biggest E3 2017 news so far is the launch of the Xbox One X: after several months of speculation, Microsoft unveiled the console previously known as 'Project Scorpio'. One of the most striking features of the Xbox One X is its design.

An enormous amount of hardware has been packed into what Microsoft claims is the smallest Xbox yet. The new high-end console, Microsoft's answer to the PS4 Pro, hits the shelves on 7 November at £449.99. Sony may not have revealed a brand-new console at its global E3 showcase, but there was no shortage of blockbuster games on offer. Ubisoft's contributions ranged from action shooters such as 'Far Cry 5' to sports, piracy, dance, space monkeys and virtual reality.


Prime Announcement: New Game in the 'Far Cry' Series

The prime announcement, however, was the latest game in the tremendously popular 'Far Cry' series. The upcoming edition of the first-person shooter action-adventure is said to be the 11th instalment in the award-winning series and is scheduled for release on 27 February 2018.

The next game in the long-running 'Assassin's Creed' franchise is called 'Assassin's Creed: Origins' and is said to be one of the most anticipated games of the year. Assassin's Creed is an action-adventure video game franchise created by Ubisoft. Plenty of rumours circulated about 'Assassin's Creed: Origins' well ahead of E3 2017, and the new game heads to Egypt, taking the story back to an ancient world. Versions of Origins for Xbox One, PlayStation 4 and Windows PCs will be released on 27 October.

Thursday, 22 June 2017

Cyber Firms Warn of Malware That Could Cause Power Outages


Malicious Software Easily Modified to Harm Critical Infrastructure

Two cyber-security firms recently uncovered malicious software presumed to have caused a December 2016 power outage in Ukraine, cautioning that the malware could easily be modified to harm critical-infrastructure operations around the world.

ESET, a Slovakian maker of anti-virus software, together with Dragos Inc., a U.S. critical-infrastructure security firm, released detailed analyses of the malware, called Industroyer or Crash Override, and issued private alerts to governments and infrastructure operators to help them defend against the threat.

The U.S. Department of Homeland Security mentioned that it was investigating the malware but had not seen any evidence to suggest it had infected U.S. critical infrastructure. The two firms stated that they did not know who was behind the cyber-attack. Ukraine has blamed Russia, but officials in Moscow have consistently denied it.

The firms still cautioned that there could be further attacks using the same approach, whether by the group that built the malware or by imitators who modify it. ESET malware researcher Robert Lipovsky stated in a telephone interview that the malware is easy to repurpose and use against other targets, which is certainly alarming, and that it could cause wide-scale damage to vital infrastructure systems.

System Compromised by Crash Override

That warning was echoed by the Department of Homeland Security, which stated that it was working to better understand the threat posed by Crash Override. The agency mentioned in an alert posted on its website that 'the tactics, techniques and procedures described as part of the Crash Override malware could be modified to target U.S. critical information networks and systems'.

The alert listed around three dozen technical indicators that a system had been compromised by Crash Override and requested that firms contact the agency if they suspected their systems had been compromised by the malware. Robert M. Lee, founder of Dragos, stated that the malware is capable of attacking power systems across Europe and could be leveraged against the United States with small modifications.

Risk to Power Distribution Organizations

Lee further mentioned by phone that the malware is able to cause outages of up to a few days in portions of a nation's grid, but is not strong enough to bring down a country's entire grid. Lipovsky stated that, with modifications, the malware could attack other kinds of infrastructure, including local transportation providers and gas and water utilities.

Alan Brill, a leader of Kroll's cyber-security practice, mentioned in a telephone interview that power firms are concerned there will be more attacks. He added that they are dealing with very smart people who came up with something and deployed it, and that it represents a risk to power-distribution organizations everywhere.

Industroyer is only the second piece of malware uncovered to date with the potential to disrupt industrial processes without hackers needing to intervene manually. The first, Stuxnet, was discovered in 2010 and is generally believed by security researchers to have been used by the United States and Israel to attack Iran's nuclear programme. The Kremlin and Russia's Federal Security Service did not respond to requests for comment.

Deep Learning With Coherent Nanophotonic Circuits

Light processor recognizes vowels

Nanophotonic module forms the basis for artificial neural networks with extreme computing power and low energy requirements

Supercomputers are approaching enormous computing power of up to 200 petaflops, i.e. 200 million billion operations per second. Nevertheless, they lag far behind the efficiency of human brains, mainly because of their high energy requirements.

A processor based on nanophotonic modules now provides the basis for extremely fast and economical artificial neural networks. As the American developers report in the journal Nature Photonics, their prototype was able to carry out computing operations at a rate of more than 100 gigahertz using light pulses alone.

"We have created the essential building block for an optical neural network, but not yet a complete system," says Yichen Shen of the Massachusetts Institute of Technology in Cambridge. The nanophotonic processor developed by Shen and his colleagues consists of 56 interferometers, in which light waves overlap and form interference patterns.

These modules can measure the phase of a light wave between wave peak and wave trough, and can also be used to shift that phase in a targeted way. In the prototype processor, these interferometers, each of which corresponds in principle to a neuron in a neural network, were arranged in a cascade.
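Each interferometer in such a cascade can be modelled as a 2x2 unitary transfer matrix. The sketch below uses one common parameterization of a lossless Mach-Zehnder interferometer (two 50:50 beamsplitters around tunable phase shifters); the exact parameterization is an assumption, not taken from the paper:

```python
import cmath

def matmul2(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def beamsplitter():
    """Lossless 50:50 beamsplitter."""
    s = 1 / 2 ** 0.5
    return [[s, 1j * s], [1j * s, s]]

def phase(theta):
    """Phase shifter on the upper arm."""
    return [[cmath.exp(1j * theta), 0], [0, 1]]

def mzi(theta, phi):
    """Mach-Zehnder interferometer: an internal phase shifter between
    two beamsplitters, plus an external phase shifter."""
    u = matmul2(beamsplitter(), phase(theta))
    u = matmul2(u, beamsplitter())
    return matmul2(u, phase(phi))

# The transfer matrix is unitary, so total optical power is conserved:
u = mzi(0.7, 1.3)
out = [u[i][0] * 1.0 + u[i][1] * 0.0 for i in range(2)]  # light in port 0
print(round(abs(out[0]) ** 2 + abs(out[1]) ** 2, 6))  # -> 1.0
```

Because each stage only redistributes power between its two outputs, cascading many such stages lets a mesh of interferometers implement the weight matrices of a neural network purely with light.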

After first simulating their concept with detailed models, the researchers also tested it in practice with a vowel-recognition algorithm. The principle of the photonic processor: a spoken vowel unknown to the system is encoded as a laser light signal with a specific wavelength and amplitude. When fed into the interferometer cascade, this signal interacts with additional laser pulses, and a different interference pattern is produced in each interferometer.

At the end of these extremely fast processes, the resulting light signal is detected with a sensitive photodetector and mapped back to a vowel by an analysis program. The purely optical system correctly identified the sound in 138 of 180 test runs. For comparison, the researchers also ran the recognition on a conventional electronic computer, which achieved a slightly higher hit rate.

This system is still a long way from a photonic light computer that could perform extremely fast speech recognition or solve even more complex problems. But Shen and colleagues believe it is possible to build artificial neural networks with about 1,000 neurons from their nanophotonic building blocks.

Compared with the electronic circuits of conventional computers, the energy requirement could be reduced by up to two orders of magnitude. This makes the approach one of the most promising routes to eventually rivalling the efficiency of living brains.

Monday, 19 June 2017

Solar Paint Offers Endless Energy From Water Vapor

Solar Paint and Its Capability to Produce Fuel from Water Vapor

Researchers are always stirring things up with innovative work, and this time they have chosen to surprise the world with paint. We are used to solar energy being harnessed to generate electricity, but now solar power is finding its way into paints as well. Researchers have unveiled a new development, a solar paint, that captures water vapour from the air and then splits it to provide hydrogen. The announcement has left science enthusiasts eager to follow the research as it develops.

The paint is appealing because it contains compounds that act like silica gel, a material in frequent use today. Silica gel is most commonly found in the sachets placed in food, medicine and other packaged products to absorb moisture, keeping the contents fresh and free of bacteria. Besides this gel-like component, the paint contains synthetic molybdenum sulphide, which acts as a semiconductor and catalyses the splitting of water molecules into hydrogen and oxygen.
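The underlying chemistry, 2 H2O -> 2 H2 + O2, fixes how much hydrogen a given amount of absorbed moisture can yield and the minimum energy the catalyst must harvest from sunlight. A back-of-envelope sketch using standard textbook values; these figures are general chemistry, not numbers from the RMIT study:

```python
# Stoichiometry of water splitting: 2 H2O -> 2 H2 + O2
M_H2O = 18.015    # g/mol, molar mass of water
M_H2 = 2.016      # g/mol, molar mass of hydrogen
DG_SPLIT = 237.1  # kJ per mol of water split (standard Gibbs energy, ~25 C)

def hydrogen_from_water(grams_water):
    """Mass of H2 produced (g) and the minimum energy (kJ) needed
    to split the given mass of water."""
    mol = grams_water / M_H2O
    return mol * M_H2, mol * DG_SPLIT

h2_g, energy_kj = hydrogen_from_water(100.0)  # 100 g of absorbed moisture
print(round(h2_g, 2), round(energy_kj, 0))  # -> 11.19 1316.0
```

Splitting 100 g of absorbed moisture yields about 11 g of hydrogen and requires at least roughly 1.3 MJ of harvested solar energy, which is the budget the paint's catalyst has to meet.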

Dr. Torben Daeneke, a researcher at RMIT University in Melbourne, Australia, confirmed that the team observed that adding titanium particles to the compounds resulted in a paint that could absorb sunlight and produce hydrogen from solar energy and moist air. Hence the name solar paint.

He observed that the white pigment already present in wall paints is titanium oxide, which means that simply adding the new compound can upgrade an ordinary material, turning brick walls into large areas of energy-harvesting, fuel-producing real estate.

The researcher further concluded that solar paint has several advantages. Water use can be limited to some extent, as the water vapour or moisture absorbed from the atmosphere can now be used to produce fuel far more effectively. One of his colleagues added that hydrogen is one of the cleanest and purest forms of energy and could be used as a fuel in fuel cells and in conventional combustion engines, as an alternative to fossil fuels.

The invention could be used in all sorts of places, regardless of the weather conditions: hot or cold climates, or places near the ocean. The formula is simple: sea or ocean water evaporates under sunlight, and the vapour formed can then be used to produce fuel. Given how beneficial solar paint is turning out to be in everyday life, its impact may soon be felt globally.

Thursday, 15 June 2017

Novel Innovation Could Allow Bullets to Disintegrate After Designated Distance


Purdue University – Bullet to Be Non-Lethal

At present, bullets are made from various materials chosen for the intended application, and they retain a significant portion of their energy after travelling hundreds or thousands of metres. If the target is missed, this can lead to unwanted consequences such as unintended death or injury to those nearby, as well as collateral damage.

Stray-bullet shootings are an often overlooked consequence of gunfire that can cause severe injury or even death to bystanders, or collateral-damage casualties in the military. Hence there is a need, in the law-enforcement, military and civilian sectors, for a safer bullet that would considerably decrease collateral damage and injury.

Technology that could prevent these occurrences has been created at Purdue University. A research group headed by Ernesto Marinero, a professor of materials engineering and of electrical and computer engineering, has designed novel materials and fabrication methods that make a bullet non-lethal by causing it to break apart after a selected distance.

This technology was the result of the need for a safer bullet that would considerably decrease collateral damage and injury in the law-enforcement, civilian and military sectors. Conventional bullets retain a substantial percentage of their energy after travelling hundreds or even thousands of metres.

Combining the Stopping Power of Standard Bullets with a Restriction of Range

The newly developed Purdue innovation causes the bullet to break up after a predetermined period, owing to the heat generated at the time of firing in combination with air drag and an internal heating component. The heat conducts through the entire bullet, melting a low-melting-temperature binder material so that, under drag forces, the bullet breaks apart.
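The mechanism described above, flight heating that melts a low-melting-point binder, can be caricatured in a few lines. All parameters below (muzzle velocity, binder temperatures, heating rate, drag constant) are invented for illustration and are not Purdue's figures:

```python
# Toy model of heat-triggered bullet breakup (assumed parameters only).

def breakup_distance(v0, t_fire, t_melt, heat_rate, drag_k):
    """Integrate flight until the binder melts; returns distance (m).
    v0: muzzle velocity (m/s); t_fire: binder temperature at firing (C);
    t_melt: binder melting point (C); heat_rate: heating in C per second
    of flight (assumed constant); drag_k: exponential drag constant (1/s)."""
    dt, temp, x, v = 1e-4, t_fire, 0.0, v0
    while temp < t_melt:
        v -= drag_k * v * dt   # simple linear drag slows the bullet
        x += v * dt            # accumulate distance travelled
        temp += heat_rate * dt # binder warms during flight
    return x

d = breakup_distance(v0=400.0, t_fire=60.0, t_melt=90.0,
                     heat_rate=100.0, drag_k=0.5)
print(round(d))
```

With these made-up numbers the binder melts 0.3 s into flight, by which point the bullet has covered on the order of a hundred metres; tuning the binder's melting point (or the heating rate) tunes the breakup distance.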

The technology is said to combine the stopping power of standard bullets, the shrapnel-eliminating benefits of frangible bullets, and a restriction of range that decreases injury or death to bystanders. The Office of Technology Commercialization of the Purdue Research Foundation has patented the technology, which is available for licensing.

Garen Wintemute, a professor of emergency medicine and director of the Violence Prevention Research Program at the UC Davis School of Medicine and Medical Center, commented that stray-bullet shootings give rise to fear and insecurity among people. They stay indoors and stop their children from playing outside, changing their daily routines to avoid being struck by a bullet intended for someone else.

No Research Exploring the Epidemiology of These Shootings

However, no research had been done at the national level to explore the epidemiology of these shootings, and such information is essential for identifying preventive measures. He further added that stray-bullet shootings are mostly a side effect of intentional violence, what is commonly known as collateral damage.

Those who are shot have little or no warning, and opportunities to take preventive measures once shooting begins are limited. Unless we intend to bulletproof entire communities and their residents, we will only be able to prevent these shootings to the extent that we can prevent firearm violence.