
Tuesday 9 January 2018

Researchers Chart the ‘Secret’ Movement of Quantum Particles

The secret movement of the quantum particles is a secret no more

Quantum mechanics is a secretive domain that doesn't give up everything at once; scientists have to dig deeper to make new discoveries. This time around, scientists have come up with a theoretical way of mapping the secret movement of quantum particles, something that had not been done before.

It is worth noting that a basic and quite fundamental idea of quantum theory is that every quantum object can exist both as a particle and as a wave, and it does not settle into either form until it is measured. This strange way of existing was famously described by Erwin Schrödinger through his thought experiment, which asks whether the cat in the box is alive or dead.

Finding and measuring the quantum objects


Most scientists assume that quantum objects exist in wave form, which lets them use mathematical tools to build a rational representation of quantum particles as they appear in nature. It was therefore necessary to map or track the secret movement of quantum particles to learn the exact nature of quantum objects.

Every particle is expected to interact with its environment, and when it does so the interaction is registered as a kind of tagging. A group of researchers has now outlined a way in which these very tagging interactions can be tracked without the need to look at the particle directly.

Telepathy is a reality


Earlier, quantum scientists put forward the idea that information can be transmitted between two different parties without any particles moving between them. It might appear to be a figment of the imagination, or something taken right out of science fiction, as it sounds just like a telepathic connection.

However, quantum scientists have coined a term for this phenomenon: counterfactual communication. The name is unusual because this mode of communication goes against the long-standing standard picture of communication, in which information is transferred between two different sources and, to make that happen, 'particles' have to move.

Now it is vital to measure this new method of communication, and that can only be done by finding exactly where the particles are while information is being transferred between the two objects. Pinpointing particles in the quantum world is no simple task, so scientists came up with the tagging method, which allowed them to chart the secret movement of the particles. The method only tracks the movement, though; it doesn't give any insight into what the particle is doing.

A number of prior studies showed that these particles might do some non-classical things when they are not being observed, such as being in two places at the same time. Using this tagging method, scientists may soon uncover the mystery of what quantum particles do when they are not being observed.

Friday 22 December 2017

What is HDMI 2.1? Everything You Need to Know about It

HDMI 2.1

A new age begins with the adoption of HDMI 2.1

Most people don't think much about their HDMI cable until it breaks or needs to be replaced. HDMI cables are widely used to carry video from a Blu-ray player to a television, soundbar or projector, and a dedicated community works on improving the standard. A new version, dubbed HDMI 2.1, has now been developed; it is future-ready, allowing users to send anything from 4K up to 10K video from a source to their screen.

HDMI 2.1: the new-age cable for modern needs

Technological evolution is happening at breakneck speed, especially in computing and consumer electronics. HDMI 2.0 was released back in 2013 with a maximum capability of 4K video at 60 frames per second, and since then the capability of monitors, televisions and other devices has increased significantly. HDMI 2.1 more than doubles the bandwidth of the earlier iteration, allowing it to go beyond 4K all the way up to 10K at 120 frames per second.

More pixels means a need for greater bandwidth, and that is possible only with the new HDMI 2.1. These cables can carry video at an upper limit of 48 Gbps rather than the earlier 18 Gbps. HDMI 2.1 also supports dynamic HDR, which lets televisions adjust the picture frame by frame, much the way Dolby Vision and HDR10+ operate today. People who love consumer virtual reality and serious gaming will want this cable to get the best experience on their televisions and consoles.
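
To put those bandwidth figures in perspective, here is a rough back-of-the-envelope sketch in Python of uncompressed video data rates. It ignores blanking intervals and link-encoding overhead, so real-world HDMI requirements are somewhat higher than these raw numbers.

```python
# Rough uncompressed video data rates, ignoring blanking and encoding overhead.

def raw_rate_gbps(width, height, fps, bits_per_channel=8, channels=3):
    """Approximate uncompressed video bandwidth in gigabits per second."""
    bits_per_frame = width * height * channels * bits_per_channel
    return bits_per_frame * fps / 1e9

# 4K at 60 fps, 8-bit colour: comfortably inside HDMI 2.0's 18 Gbps link.
print(f"4K60  8-bit: {raw_rate_gbps(3840, 2160, 60):.1f} Gbps")

# 8K at 60 fps, 8-bit colour: roughly fills HDMI 2.1's 48 Gbps link.
print(f"8K60  8-bit: {raw_rate_gbps(7680, 4320, 60):.1f} Gbps")
```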

Features of the new HDMI 2.1 cables


As put forward by the HDMI Forum, new Ultra High Speed HDMI cables will boast HDMI 2.1 support, allowing them to handle higher data loads efficiently. The standard's high-bandwidth features can carry uncompressed 8K video with HDR without much hassle. The best thing about the new cables is backward compatibility: they will work with all pre-existing devices with ease.

Other features of the new HDMI specification include Variable Refresh Rate, designed to eliminate lag, frame tearing and stuttering for a better gameplay experience. It also has Quick Frame Transport, which reduces latency to offer a smoother, more responsive VR experience, and Quick Media Switching, which eliminates the delay that causes a black screen before content is displayed, as often happens when starting a movie.

The HDMI Forum will run extensive compliance tests during the first three quarters of next year. The availability and adoption of HDMI 2.1 in the consumer market will then depend on the manufacturers.

Wednesday 20 December 2017

Ruthenium Monomers: Organic Electronics Beyond Cell Phone Screens

Ruthenium Monomers

Get ready for the age of Organic Electronics

When we think of the future, we imagine a world of better machines, services and futuristic designs taking care of today's problems. Organic electronics can help achieve that future with the help of ruthenium monomers, ushering in an age of carbon-based molecules rather than silicon atoms.

The team behind this research

A team of researchers from Princeton University and the Georgia Institute of Technology, along with the Humboldt University of Berlin in Germany, has created a technology that will pave the way for organic electronics. Their research has been published in the journal Nature Materials, in a paper on the feasibility of creating organic semiconductors.

Organic semiconductors created using this technology, backed by ruthenium monomers, will help in creating a wide range of flexible electronics in the near future. It will also usher in emerging technologies ranging from solar energy conversion to the high-quality colour displays seen on smartphones and other consumer electronics.

How are organic semiconductors made?


The usual semiconductors are made of silicon, which has become the modern foundation of electronics. Silicon lets engineers exploit its unique properties for controlling electrical currents, and semiconductors are used across devices and applications, from computing and switching to signal amplification. They are also found in energy-saving devices like solar cells and light-emitting diodes. To tune the behaviour of a semiconductor, researchers use a process called doping, in which its chemical makeup is modified by adding a very small amount of impurities.

Organic semiconductors are developed using molecular dopants, which help in creating highly efficient organic electronic devices. Scientists already had a very stable kind of molecular p-dopant that can be deployed in devices easily and successfully. But so far they had not been able to develop molecular n-dopants that work with materials having low electron affinity. The use of n-doping on these semiconductors has now helped in creating high-efficiency organic light-emitting diodes that offer better conductivity than ever before.

What makes the new technology the best bet for the future?


The best thing about organic semiconductors is that they can easily be used to fabricate flexible devices, bringing energy-saving products that function optimally at low temperatures. The major disadvantage of organic semiconductors is that they tend to have relatively poor electrical conductivity, which can cause unwanted difficulties in processing and hamper the overall efficiency of devices. Researchers have therefore started working on improving the electrical properties of organic semiconductors with ruthenium monomers in order to make them the best option available on the market.

Monday 18 December 2017

Small Earthquakes at Fracking Sites May Be Early Indicators of Bigger Tremors

Fracking
7 fears about fracking: science or fiction?

The extraction of shale gas with fracking or hydraulic fracturing has revolutionized the production of energy in the United States, but this controversial technology, banned in France and New York State, continues to generate criticism and protests.

The detractors of the technique, which consists of injecting water and chemical additives at high pressure to fracture the rock containing the hydrocarbons, warn about the possible contamination of water, methane leaks and earthquakes, among other risks.

The Royal Society, Britain's national academy of sciences, said in its 2012 report that the risks can be effectively managed in the UK "as long as the best operational practices are implemented," according to Richard Selley, professor emeritus at Imperial College London and one of the authors of the report.

But others, with contrary opinions, are equally firm. For example, regarding the risk of methane leakage, William Ellsworth, a professor of geophysics at Stanford's School of Earth, Energy & Environmental Sciences, says it is not a matter of determining whether the wells may leak; the question must be what percentage of them leak.

In the middle of an intense and growing controversy about fracking, researchers at Stanford University investigated what science says so far.

Can it cause earthquakes?

Two earthquakes that occurred in 2011 in England led to the temporary suspension of exploration with fracking.

The first, which occurred in April of that year, near the city of Blackpool, reached 2.3 on the Richter scale and was registered shortly after the company Cuadrilla used hydraulic fracturing in a well.

On May 27, after fracturing resumed in the same well, a magnitude 1.5 event was recorded.

The monitoring network of the British Geological Survey (BGS) captured both events, which were not felt by local inhabitants.

The company Cuadrilla and the government commissioned separate studies.

"Both reports attribute the seismic events to the fracturing operations of Cuadrilla," said the Royal Society, the British Academy of Sciences, in its joint report with the Royal Academy of Engineers on hydraulic fracturing, published in 2012.

Earthquakes can be triggered mainly by high-pressure injection of wastewater or when the fracturing process encounters a fault that is already under stress. However, the Royal Society noted that activities such as coal mining also produce small seismic events. The suspension of fracking in the United Kingdom was lifted in December 2012, following the report of the Royal Society, which concluded that fracking can be safe "provided that the best operational practices are implemented."

In the United States, a study published in March 2013 in the journal Geology linked the injection of wastewater with the magnitude 5.7 earthquake in 2011 in Prague, Oklahoma. The wastewater injection operations referred to in the study came from conventional oil exploitation. However, seismologist Austin Holland of the Oklahoma Geological Survey said that while the study showed a potential link between earthquakes and wastewater injection, "it is still the opinion of the Oklahoma Geological Survey that those tremors could have occurred naturally."

Another study published in July 2013 in the journal Science and led by Nicholas van der Elst, a researcher at Columbia University, found that powerful earthquakes thousands of kilometers away can trigger minor seismic events near wastewater injection wells.

The study indicated that seismic waves unleashed by the 8.8 earthquake in Maule, Chile, in February 2010, moved across the planet causing tremors in Prague, Oklahoma, where the Wilzetta oilfield is located.

"The fluids in the injection of sewage into wells are bringing existing faults to their limit point," said Van der Elst.

Can fracking contaminate the water?

At the request of the US Congress, the Environmental Protection Agency (EPA) is conducting a study on the potential impacts of hydraulic fracturing on sources of drinking water.

A final draft of the report will be released at the end of 2014 to receive comments and peer review. The final report "will probably be finalized in 2016," the EPA confirmed.

In 2011, Stephen Osborn and colleagues at Duke University published a study in the Proceedings of the National Academy of Sciences in which the researchers reported methane contamination of drinking-water sources near fracking exploration sites in the Marcellus formation in Pennsylvania and New York.

The study did not find, however, evidence of contamination by chemical additives or the presence of high salinity wastewater in the fluid that returns to the surface along with the gas.

For its part, the Royal Society said that the risk of fractures created during fracking reaching aquifers is low, provided that gas extraction takes place at depths of hundreds of metres to several kilometres and that the wells, tubing and cementing are built to appropriate standards.

A case cited by the Royal Society in its 2012 report is that of the town of Pavillion, Wyoming, where fracking caused the contamination of water sources for consumption, according to an EPA study. Methane pollution was attributed in this case to poor construction standards and shallow depth of the well, at 372 meters. The study was the first of the EPA to publicly link hydraulic fracturing with water pollution.

However, as in the Duke University study, there were no cases of contamination by the chemical additives used in hydraulic fracturing.


How to control the use of chemical additives?

Trevor Penning, head of the toxicology center at the University of Pennsylvania, recently urged the creation of a working group on the impact of fracking with scientists from Columbia, Johns Hopkins and other universities.

Penning said that in the United States "it is decided at the level of each state if companies have an obligation to publicize the list of additives they use."

The industry has established a voluntary database of the additives used, on the FracFocus site. Penning explained that the additives used in fracking fluid can be very varied and of many kinds, such as surfactants, corrosion inhibitors and biocides.

In toxicology they work on the basis that no chemical is safe; it is the dose that makes the poison. Additives that could cause concern if they exceed safe levels include substitutes for benzene, ethylene glycol and formaldehyde.

"The potential toxicity of wastewater is difficult to assess because many chemical additives used in hydraulic fracturing fluid are undisclosed commercial secrets," Penning added.

The scientist also said that "the potential toxicity of wastewater is difficult to evaluate because it is a complex mixture (the additives can be antagonistic, synergistic or additive in their effects)".

Anthony Ingraffea, professor of engineering at Cornell University, warned of the impact of the September 2013 floods in Colorado, where some 20,000 wells are located in a single county. "A good part of the infrastructure was destroyed, which means that the ponds and tanks holding wastewater with chemical additives are now in the water courses, and there are leaks from damaged gas pipelines. The clear lesson is that fracking infrastructure should never be built in floodplains."

What is done with wastewater?

These waters are what is known as flowback or reflux water: the injected water, with its chemical additives and sand, that flows back up when the gas starts to come out.

Approximately 25% to 75% of the injected fracturing fluid returns to the surface, according to the Royal Society. This wastewater is stored in covered open-pit tanks dug into the ground, treated and reused, or injected at high pressure into rock formations. The danger of wastewater leakage is not unique to the extraction of shale gas but is common to many industrial processes, notes the Royal Society.

"The wastewater may contain naturally occurring radioactive materials (NORM), which are present in the shale rock in quantities significantly lower than the exposure limits," says the Royal Society report.

Can it exhaust water resources?

The use of large quantities of water in fracking operations is a cause of concern for some. "For natural gas, for example, fracking requires millions of gallons of water (around 2 to 5 million, or even more than 10 million, that is, from 7 to 18 or up to 37 million litres) for fracturing, which is several times more than conventional extraction requires," said John Rogers, senior energy analyst and co-manager of the Energy and Water Initiative of the Union of Concerned Scientists.

"The extraction of shale gas by fracking consumes on average of 16 gallons of water per megawatt-hour, while conventional gas extraction uses 4. That is, fracking requires 4 times what conventional extraction requires, "said Rogers.

"That amount of water is less than what is involved in the extraction of coal, but the use of water is very localized and can be very important in the local scene, in terms of what would be available for other uses."

The Water-Smart Power study of the Union of Concerned Scientists points out that about half of the hydraulic fracturing operations in the United States occur in regions with high or extremely high water stress, including Texas and Colorado.

Melissa Stark, global director of new energies at the consultancy Accenture and author of the report "Shale gas water and exploitation", admits that the extraction of shale gas with hydraulic fracturing uses a lot of water (about 20 million litres per well), but notes that "it does not use more water than other industrial processes, such as irrigation for agriculture. The volumes required may seem large, but they are smaller compared to other water uses for agriculture, electric power generation and municipal use," she said.
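
As a quick sanity check on the figures quoted above, here is a minimal Python sketch that converts the per-well volumes to litres and compares the per-megawatt-hour water intensities. All the input numbers are the ones cited in this article, not new data.

```python
# Convert the quoted per-well water volumes and compare water-intensity figures.

GALLONS_TO_LITRES = 3.785

# Rogers: roughly 2-5 million gallons per fractured well, sometimes over 10 million.
for gallons in (2e6, 5e6, 10e6):
    litres = gallons * GALLONS_TO_LITRES
    print(f"{gallons / 1e6:.0f} million gallons ~= {litres / 1e6:.0f} million litres")

# Water intensity: ~16 gal/MWh for shale gas vs ~4 gal/MWh for conventional gas.
shale, conventional = 16, 4
print(f"Shale gas uses about {shale / conventional:.0f}x the water of conventional gas per MWh")
```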


Can there be methane leaks?
Anthony Ingraffea, professor of engineering at Cornell University in the United States, says that it is not about determining whether wells can leak; the question must be what percentage of them leak.

Ingraffea analyzed the situation of the new 2012 wells in the Marcellus formation in Pennsylvania, based on the comments of the inspectors, according to records of the Pennsylvania Department of Environmental Protection.

According to Ingraffea, the inspectors registered 120 leaky wells, that is, they detected faults and leaks in 8.9% of the gas and oil exploration wells drilled in 2012.

A study published in September 2013 by the University of Texas, sponsored among others by nine oil companies, found that while methane leaks from shale gas extraction operations are substantial - more than one million tons per year - they were lower than the estimates of the US Environmental Protection Agency.

However, the association Physicians, Scientists and Engineers for Healthy Energy in the USA, of which Anthony Ingraffea is president, questioned the scientific rigor of that study, noting that the sample of 489 wells represents only 0.14% of wells in the country, and that the wells analyzed were not selected at random "but in places and at times selected by the industry".

Some reported images of tap water that catches fire when a match is brought near could be explained by the pre-existing presence of methane.

"We must not forget that methane is a natural constituent of groundwater and in some places like Balcombe, where there were protests, the oil flows naturally to the surface," Richard Selley, professor emeritus of Imperial Petroleum Geology.

"We must remember that when a well is drilled and the aquifer area is crossed, three steel rings are placed, surrounded by cement, beneath the aquifer," added Selley.

How does it affect global warming?

Between 1981 and 2005, US carbon emissions increased 33%. But since 2005 they have dropped by 9%. The reduction is due in part to the recession, but according to the US Energy Information Administration (EIA), about half of that reduction is due to shale gas.

Globally, coal provides 40% of the world's electricity, according to the International Energy Agency (IEA). Advocates of shale gas extraction say it is cleaner than coal and can be a transition fuel while the use of renewable sources such as solar or wind energy expands.

In Spain, for example, renewable energies "are approaching 12%, and there is a European Union objective that by 2020, 20% of European energy should be renewable," said Luis Suarez, president of the Official College of Geologists of Spain (ICOG).

But others point out that the gas extracted in the process of hydraulic fracturing is methane, a gas much more potent than carbon dioxide as a greenhouse gas.

According to the Intergovernmental Panel on Climate Change (IPCC), over a 20-year horizon methane traps about 72 times as much heat as the same mass of carbon dioxide, and about 25 times as much over a 100-year horizon.

Robert Howarth and colleagues at Cornell University estimated that between 4 and 8% of the total methane production of a well escapes into the atmosphere, and added that there are also emissions from the reflux waters that flow back to the surface along with the gas after fracturing.

But this analysis is controversial. Lawrence Cathles, also of Cornell University, says methane's high warming potential over 20 years must be weighed against the fact that methane has a much shorter life in the atmosphere than CO2.

Robert Jackson of Duke University in North Carolina says that instead of worrying about fracking emissions themselves, we should concentrate on leaks in the distribution chain. "In the city of Boston alone we found 3,000 methane leaks in the pipes," Jackson told New Scientist magazine.
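
To see how the IPCC equivalence factors and Howarth's leak-rate estimate combine, here is a minimal arithmetic sketch. The per-well methane production figure is purely hypothetical, chosen only to illustrate the calculation, and charging losses of any kind are ignored.

```python
# CO2-equivalent of leaked methane, using the GWP factors quoted above.

GWP_20_YEARS = 72   # 1 kg of methane ~ 72 kg of CO2 over a 20-year horizon
GWP_100_YEARS = 25  # ~ 25 kg of CO2 over a 100-year horizon

def leak_co2_equivalent(methane_produced_tonnes, leak_fraction, gwp):
    """CO2-equivalent (tonnes) of the methane that escapes to the atmosphere."""
    return methane_produced_tonnes * leak_fraction * gwp

# Hypothetical well producing 1,000 tonnes of methane over its life,
# with Howarth's 4-8% leak range.
for leak in (0.04, 0.08):
    print(f"leak {leak:.0%}: "
          f"{leak_co2_equivalent(1000, leak, GWP_20_YEARS):,.0f} t CO2e (20 yr), "
          f"{leak_co2_equivalent(1000, leak, GWP_100_YEARS):,.0f} t CO2e (100 yr)")
```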

Friday 15 December 2017

Horrifying macOS Bug Lets Anyone Become Admin With No Password

macOS Bug

New bug found in macOS gives admin access to anyone without a password

All Mac users should note that a new bug has been discovered in the latest version of macOS High Sierra. This particular bug can jeopardise your security, as it allows anyone to get into the system, and as an administrator at that, by simply typing 'root' in the username field. What makes the bug so dangerous is that after entering 'root' as the name, users are not even required to enter a password.
 

Taking Twitter by storm

 
This dangerous bug was found by a software engineer named Lemi Orhan Ergin. He claimed that the bug can grant admin access to anyone on any Mac system within a few seconds. The most horrifying thing about the bug is that it even allows anyone to log in to the system right after a reboot. He described his finding in a series of tweets, which were picked up by a number of tech enthusiasts, and soon Twitter was flooded with users replicating the bug.

It quickly became clear to millions of macOS High Sierra users that simply typing 'root' as the username can bypass Apple's security in no time. Some experts noted that the bug is tied to Apple's own 'root user' account, which exists by default on macOS, and that the flaw effectively exploits it. If you are wondering whether your system is affected, check your macOS version by clicking the Apple logo in the top left corner of the screen and selecting "About This Mac".
 

Bringing updates at quick intervals

 
Apple has claimed that macOS is simply the most secure operating system in the world, but that doesn't mean it is free of bugs. Apple is known for offering patches and fixes as quickly as possible, which isn't the case with other operating systems, where users sometimes have to wait months for incremental updates.

Just a few weeks ago Apple shipped a sizeable supplemental update for macOS High Sierra that fixed a wide range of bugs, improved installer robustness and addressed other issues. Some of the major problems resolved with that update included a graphical glitch affecting Adobe InDesign and an issue related to Yahoo accounts in Mail.

Apple has been quick to come up with a fix this time around as well. Apple even issued a statement saying that security has always been a top priority for every Apple product. It clarified that Apple engineers found the issue on Tuesday afternoon and immediately started working on patching the security hole.

Now this bug has been squashed in macOS High Sierra for good, and the quick response shows why so many people are fans of Apple products.

Monday 11 December 2017

Samsung: Graphene Balls Boost Battery Charging Speed by 500 Percent

Graphene

Graphene Balls to Charge your phone Faster

Have you ever stepped out of your house and then realized that your phone is dead and charging could mean another hour or so when you don’t really have the time? Well, all that is going to change with Samsung’s 12 minute charge time. I don’t mean 12 minutes for just a bit of charge either but a full charge cycle.

In smartphones, much of the hardware has undergone changes, not only to make the phone more efficient but also to make it more capable. But one thing that has taken a back seat, or at least has not developed at the same rate, is the battery used in these smartphones.
 

How can a phone get a full charge in 12 minutes with Graphene balls?

 
Previously, and even now, smartphones have used lithium-ion batteries. With today's smartphones doing more than ever, they also use a lot of juice, which means they need a lot of charge, and the time it takes to charge also increases.

Researchers have been looking for alternatives to these lithium-ion batteries, but nothing seemed promising until now.

A new study by Samsung has produced "graphene balls", which are claimed to increase battery capacity by 45% and, best of all, to increase charging speed fivefold.
 

Why are Graphene Balls so great?

 
The Samsung Advanced Institute of Technology, or SAIT for short, discovered this novel approach to charging. But why are graphene balls so great?

The answer is simple: with graphene balls, the batteries not only have a higher capacity but also a faster charge time than ever before. Earlier solutions could offer either higher capacity or a faster charge time, but not both.

The ingredients for these graphene ball batteries are neither expensive nor difficult to find, and another major advantage is that Samsung can incorporate graphene balls into its batteries without majorly altering its manufacturing equipment, which otherwise could have been an expensive venture. This means the battery will still be able to give its best at, hopefully, a reasonable price for customers.
 

More about Graphene balls…

 
Graphene is a highly advanced material consisting of a hexagonal lattice of carbon. SAIT used this lattice structure to create a graphene ball with the help of silica. These graphene balls form a protective layer on the cathode and anode of the battery. The location is deliberate, as this is what allows for greater capacity and faster charging speeds.

With the graphene balls' high stability, their ability to act as a semiconductor and their good thermal conductivity, they are proving to be a really good upgrade for lithium-ion batteries.

Everyone remembers the Note 7 fiasco; this time around Samsung is taking no chances with the graphene balls, which allow the battery to charge without getting too hot.

When I talk about increased capacity with the graphene balls, I mean a tablet's worth of capacity, that is 5,000-6,000 mAh, at a charge time of a mere 12 minutes.
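
For a rough sense of what a 12-minute full charge implies, here is a small sketch of the charge rate (C-rate) and average charging current. The 5,000-6,000 mAh figures are the ones quoted above; the arithmetic ignores charging losses and the taper at the end of a real charge cycle.

```python
# Implied charge rate for a 12-minute full charge, ignoring losses and taper.

def charge_stats(capacity_mah, charge_minutes):
    hours = charge_minutes / 60
    c_rate = 1 / hours                      # full charges per hour
    avg_current_a = capacity_mah / 1000 / hours
    return c_rate, avg_current_a

for capacity in (5000, 6000):
    c_rate, current = charge_stats(capacity, 12)
    print(f"{capacity} mAh in 12 min -> {c_rate:.0f}C rate, ~{current:.0f} A average current")
```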

Saturday 9 December 2017

Santa’s Village is Back in Business

Santa’s Village

Santa-Tracker: Google's Interactive Advent Calendar is Back

Google has built Santa's Village, and now the Santa Tracker is back: as every year, Google has again gathered a series of entertaining games in Santa's Village, an interactive Advent calendar. This time kids can even do visual programming.
 

Santa-Tracker: From a misprint in 1955 into the Internet age

 
Santa's Village has a long tradition at Google. As early as 2004, the search engine showed children on Google Maps the supposed course of Santa's journey from Santa's Village on Christmas Eve. The tradition itself is much older and long predates Google and the Internet: in fact, the Santa Tracker was born from a misprint in 1955.

At that time, the US mail-order company Sears printed an advertisement that asked children to call Santa Claus in Santa's Village. However, instead of connecting to Sears, the printed number led to the North American Aerospace Defense Command (NORAD). In order not to disappoint the calling children, the soldiers on duty were ordered to give them the supposed position of Santa Claus. This is how a tradition emerged that, in cooperation with Google, spilled over onto the Internet from 2004 onwards and created Santa's Village.

In 2012, however, NORAD decided to track Santa Claus no longer with Google but with Microsoft's help. The search giant, after eight years, evidently did not want to break with tradition, so there are now two Santa trackers: Google's Santa's Village and the original from NORAD.
 
Google's Santa Tracker 2017: This year too with Coding Games
 
Google's Santa's Village has long been more than a preview of Santa's fictional itinerary. It is now an interactive Advent calendar featuring a range of Christmas games and informational materials. Kids learn, for example, what Christmas traditions look like in different countries of the world.

In this beautiful Santa's Village, we particularly like the coding game available since December 2nd. Here children have to program a virtual plotter so that a snowflake comes out of it, using a visual programming language modeled on Scratch (a rough text-based analogue is sketched below). As in previous years, Google's Santa's Village is available as a website and as an Android app.
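
Google's game uses a Scratch-like visual language, so there is no code to show from it directly. Purely as an illustrative analogue of the same idea of programming a plotter to draw a snowflake, here is a minimal Python turtle sketch of a six-armed snowflake.

```python
# A tiny "snowflake plotter" analogue of what the coding game asks children to do.
import turtle

def draw_arm(pen, length):
    """One arm: a stem with a pair of small twigs partway up."""
    pen.forward(length * 0.6)
    for angle in (40, -80, 40):        # left twig, right twig, back to the stem line
        pen.left(angle)
        pen.forward(length * 0.25)
        pen.backward(length * 0.25)
    pen.forward(length * 0.4)
    pen.backward(length)               # return to the centre of the snowflake

def draw_snowflake(arms=6, length=100):
    pen = turtle.Turtle()
    pen.speed(0)
    for _ in range(arms):
        draw_arm(pen, length)
        pen.left(360 / arms)           # rotate to the next arm
    turtle.done()

if __name__ == "__main__":
    draw_snowflake()
```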
 
The original: Santa Tracker by NORAD supports Alexa and Cortana
 
NORAD's original Santa Tracker is also available again in 2017. Here, too, children can learn about Christmas celebrations from all over the world, and there are various games. Unlike Google, however, the NORAD site makes no effort to deduce the desired language from the visitor's location, so children have to switch manually between the various languages via a drop-down menu.

Interestingly, the Santa Tracker makers have not slept through the current hype around voice-controlled assistants. On December 24th, according to the creators, you can ask Alexa or Microsoft's Cortana where Santa currently is and get an answer from the Santa Tracker.

Monday 4 December 2017

Beetle Backpacks Could Help Detect Humans Trapped by Earthquakes

Beetles Backpacks

New-age real beetles equipped with backpacks set to boost disaster management efforts

Scientists have been looking for new ways to boost disaster management efforts on a global scale, and have now built what could be the world's smallest disaster management squad. The beetles look very much as if they are wearing backpacks, but those packs have a number of unique features. The beetles can detect carbon dioxide within a collapsed building, and they are controlled remotely using the tiny packs fitted on their backs. The carbon dioxide released by people trapped inside buildings and elsewhere helps the beetles detect them, so rescue efforts can be centred on getting those people out quickly and safely.

 

Saving lives by detecting carbon dioxide

 
Scientists are immensely hopeful that these beetles can be sent into areas that have suffered earthquakes, hurricanes and other disasters to boost rescue efforts. The detectors placed on the beetles will help their handlers locate life in the rubble. The beetles will be able to find whoever or whatever is generating carbon dioxide, which could cut the time usually lost in finding victims after a disaster. It also removes the need to dig up areas in the mere hope of finding people alive, focusing instead on the spots where carbon dioxide is being generated and where trapped people are most likely to be found.
 

The team behind this new age technology

 
These new-age cyborg beetles have been developed by a group of scientists at Nanyang Technological University (NTU). The lead scientist on the research asserts that the beetles will help locate and detect survivors buried below debris and rubble more quickly than any other technology at hand. They will also help comb a wide area in the shortest possible time to reach survivors quickly. The small size and the simple tools used in building the cyborg beetles make them well suited to getting through small spaces with great agility.



The scientists used darkling beetles, fitting their backs with a tiny computer attached with simple beeswax. The handler uses the computer to send electrical pulses to the beetle's antennae, steering it along the right path. Besides carbon dioxide, the system also uses temperature and heartbeat vibrations to detect survivors.

A number of critics have emerged who are actively against using beetles for rescue efforts. They believe that putting a tiny computer on the back of a beetle and steering it to detect and find survivors is nothing less than animal torture, and they are asking the scientists to use mechanical robots rather than real beetles. The scientists have stated that they take good care of the beetles, and that when the insects are not being controlled they live their normal lives.

Monday 27 November 2017

Artificial Photosynthesis Gets Big Boost From New Catalyst Developed

A step Closer to Artificial Photosynthesis

 
We all know about the continuing threat of global warming. It is caused by many gases in the air, but one of the biggest concerns is carbon emissions. Since carbon emissions come from burning the fossil fuels that give us our energy, researchers worldwide are taking many steps to look for alternative sources of renewable energy.

One such attempt is being made by researchers at the University of Toronto. They are trying to replicate the photosynthesis process used by plants in order to create a renewable source of energy. While there are many other renewable sources of energy out there, such as wind, water and solar, all of these can be expensive, so the researchers are trying to use artificial photosynthesis to create an alternative.
 

What is meant by Artificial Photosynthesis?

 
We all know that plants get their energy by using the sun's rays to convert carbon dioxide and water into their food. The scientists at the University of Toronto are trying to do just that using artificial photosynthesis.

In this scheme, water is broken down into protons and oxygen gas, while carbon dioxide is converted into carbon monoxide. Carbon monoxide is what is actually needed: it is then turned into hydrocarbon fuels through an industrial process known as Fischer-Tropsch synthesis, giving us our energy source.
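
As a rough sketch of the chemistry being described (a simplified textbook scheme, not the Toronto group's exact reaction conditions), the water-splitting step, the CO2 reduction step and the Fischer-Tropsch step can be written as:

$$2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-$$

$$\mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{CO} + \mathrm{H_2O}$$

$$n\,\mathrm{CO} + (2n+1)\,\mathrm{H_2} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}$$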
 

Problems encountered in Artificial Photosynthesis:

 
While breaking down water and carbon dioxide into their respective components, researchers encountered problems with their catalysts. The process involves two reactions, and while the first reaction ran at high pH levels, the second reaction runs at neutral pH.

This inconsistency poses a problem because moving particles between the two reaction environments consumes a lot of energy, so the artificial photosynthesis process is not as efficient as it could be.

Overcoming problems in artificial photosynthesis:


In order to bring more efficiency to the artificial photosynthesis process, researchers have developed a new catalyst for the initial reaction. Initially, water was split into protons and oxygen in a reaction that required high pH levels, which made the overall process inefficient.

Now, researchers are using a new catalyst that works at a neutral pH level, just like the second reaction. This means that energy is no longer lost moving particles between the two reactions.
 

Benefits of the Catalyst in Artificial photosynthesis:

 
The new catalyst, made of nickel, cobalt, iron and phosphorus, consumes less energy in the artificial photosynthesis process, and when combined with the second reaction, the overall energy consumption is brought down.

The elements used in making the catalyst are not only cheap but also safe, and the catalyst can even be made at room temperature using inexpensive equipment. This makes the overall artificial photosynthesis process not only inexpensive to replicate but also more efficient.

In testing, the catalyst remained stable for all of the 100 hours it was tested.

Wednesday 22 November 2017

What Are Headphone Drivers and How They Affect Sound Quality

Headphone
For in-ear headphones, there is always talk of dynamic drivers and balanced armature drivers. Today we deal with the topic and show the differences. If an in-ear headphone sits properly in the ear canal and thus "seals" well, the space between the eardrum and the membrane is closed and very small. The whole thing then works like a kind of spring system (or "push-pull mechanism"), and the membrane can move the eardrum well with little deflection and little energy, resulting in a very good bass response. As soon as there is a leak in this system, it is immediately noticeable because low frequencies are lost (as is the case with loosely fitting earbuds). This is because the human ear is less sensitive to low frequencies (below about 150 Hz) than to higher frequencies.

So if we want to hear low frequencies better, a lot of energy has to be applied to amplify them. With loudspeakers, low frequencies are still physically noticeable; this is not the case with headphones. Also, speaker diaphragms are larger and more stable (thicker material), which allows much more air to be set in motion than with headphones. In order to make the best possible use of the small amount of energy a headphone system develops, care must be taken that the headphones or in-ear headphones seal optimally.

Which driver is used largely determines how well an in-ear headphone sounds, and in the production of in-ear headphones most of the money usually flows into the development of the drivers.
 

What are balanced armature drivers?

 
Balanced armature (BA) drivers are often used only in in-ear headphones in the higher price segment. Balanced armature drivers are made to sound particularly good in a certain frequency range, such as the treble, which is why in-ear headphones with balanced armature drivers often have several drivers installed. For example, the Sony XBA-3iP incorporates three balanced armature drivers, which ensure that the entire sound spectrum is covered.
 

Advantages of Balanced Armature Driver

 
  • They can make a particular frequency range sound great
  • The sound is more detailed
  • The sound is faster
  • The treble sounds clearer than with dynamic drivers
  • They are smaller than in-ear headphones with dynamic transducers and weigh less
  • They need less power than dynamic drivers
 

Disadvantages of Balanced Armature drivers

 
  • The bass is weaker than with dynamic drivers
  • In-ear headphones with balanced armature drivers are more expensive
  • Several drivers are often necessary to cover all frequency ranges
 

What are dynamic drivers?

 
Dynamic drivers make it possible for in-ear headphones to be offered at a good price. Unlike with balanced armature drivers, a single driver covers the entire sound spectrum. They work on the same principle as loudspeakers.
Advantages of dynamic drivers
  • Cheaper than balanced armature transducers
  • Better bass response
  • The sound signature is better coordinated
  • They are often more robust than balanced armature drivers
 

Disadvantages of dynamic drivers

 
  • The sound is less detailed than with balanced armature drivers
  • The treble is not as clear in comparison
  • They weigh more and are bigger too
 

Balanced Armature Drivers and Dynamic Drivers

 
Some in-ear headphones carry both types of drivers, such as the Sony XBA-H3. The advantage of having multiple types of drivers is that both the bass and the treble sound great, but the housing is usually larger and the headphones weigh more.

Moving Armature driver


Moving armature drivers are new drivers that combine the benefits of balanced armature drivers and dynamic drivers. They work like balanced armature drivers but with the advantage that the entire frequency range is covered: in contrast to balanced armature designs, where multiple drivers are needed, moving armature in-ear headphones need only one driver. However, very few models use this type of driver so far, and they are also quite expensive.

The classic and most commonly encountered headphone driver follows, as with loudspeakers, the electrodynamic principle.

However, in order to reproduce the entire frequency spectrum as accurately as possible, partially modified drivers are used, such as the Varimotion technology from AKG (depending on the frequency, a larger or smaller part of the diaphragm swings) or the ring driver of the Sennheiser HD 800.

Saturday 18 November 2017

Researchers Developed Flexible Photonic Devices

Flexible Photonic Devices
Photonics will have a direct impact on many areas of our daily life. Soon photonics will be fundamental, both for the improvement or replacement of existing processes and for the development of new solutions and new products.

On the other hand, society demands products with better and better features: new functionalities and improved properties, lightweight, flexible photonic devices, and capable of adapting to different materials and surfaces. Likewise, these developments must be competitive and not increase the price of the final product.

A team led by MIT associate professor Juejun Hu, together with researchers from the University of Central Florida and institutions in China and France, has developed a new method of making light-based photonic devices. These flexible photonic devices are made from a kind of glass called chalcogenide. This specialized glass material is highly flexible and can be bent and stretched to a large extent without any damage. Such flexible photonic devices could be used as biomedical sensors and as flexible connectors in electronics.

How about a device that can simultaneously detect blood oxygen level, heart rate and blood pressure? These flexible photonic devices, made from stretchy and bendable material, could be mounted on the skin to monitor such conditions.

By using these new light-based flexible photonic devices, we can avoid having to convert optical signals into electronic form; keeping the original data in light form has advantages for a lot of applications.

Current photonic devices used in the field are made of rigid materials on rigid substrates, which creates an intrinsic mismatch with flexible applications. Polymer-based soft materials have a lower refractive index, which translates into a poor ability to confine a light beam. To confront this issue, the MIT team developed a stiff material that can nevertheless stretch and bend almost like rubber. Configured like a spring on a polymer substrate, it shows no noticeable degradation in its optical performance.

Other flexible photonic devices, made by embedding nanorods of a rigid material in a polymer base, need extra fabrication steps and are therefore not compatible with current systems. Flexible photonic devices can also be used in applications where the device has to conform to the rippled surface of another material. And because optical technology is extremely sensitive to strain, such devices can detect deformations of less than one hundredth of one percent.

The team has also recently worked out a way of integrating photonic layers made of chalcogenide and graphene with conventional semiconductor photonic circuitry. Current methods of integrating such materials require them to be made on one surface, then peeled off and transferred to the semiconductor layer, which is very difficult. The new procedure allows the layers to be fabricated directly on the semiconductor surface, and since no special temperature conditions are needed, it permits much simpler fabrication and more precise alignment.

The MIT team is confident that it can soon develop this new flexible photonic technology to the point of commercial readiness.

Friday 3 November 2017

DeformWear: Deformation Input on Tiny Wearable Devices

DeformWear
We have all seen various smart-somethings entering the market, whether smart watches, virtual-reality glasses or wireless headphones, and they all have one thing in common: they require an app on a smartphone, tablet or some other device with not only WiFi connectivity but also a Bluetooth connection.
Using such devices can therefore be a handful at times, with all the input having to go through a phone or some other device. The world of science now brings us DeformWear, a tiny device the size of a pea that lets you control input on your gadget.

How was DeformWear made? 

Scientists felt the need for a tiny device that could be used quickly and discreetly to operate gadgets when devices such as the Apple Watch came out. Their screens were so small that using them was impractical for many people. So DeformWear was born: a gadget no bigger than a pea that can be moved in all directions, pressed, pinched and pushed right, left, down and up.

How DeformWear was thought of: 

Researchers at Saarland University tested smartphone gestures performed on a person's skin and found that many people pinched and pushed to the side to access smartphone apps. This study, combined with sensors originally conceived for robots, led to the creation of a body-worn device, DeformWear.

Functioning of DeformWear: 

DeformWear has a diameter of 10 millimeters and can be handled just like a balloon but minus the bursting part. DeformWear comes with a sensor that then allows an individual to maneuver it in all directions.

Testing of DeformWear: 

DeformWear was made into a charm, a bracelet and a ring. The device was then tested on multiple people for different applications, operating a smartwatch and virtual-reality glasses discreetly and quickly. Later it was used to control smart televisions and to play music, all without the need to look at a screen. The results of the testing were all found to be successful. DeformWear uses the fine motor control of the fingers, along with push and press input, to produce the desired results.

Researchers at Saarland University hope the device will let people operate gadgets quickly and discreetly when a device has no screen or too small a screen. With the introduction of DeformWear, the world can be ready for even smaller gadgets that are controlled through it. As DeformWear develops, there may also be other ways to wear the device; at present it has been tested only as a charm, bracelet and ring.

Friday 27 October 2017

Material Could Bring Optical Communication Onto Silicon Chips

Soon silicon chips will feature optical communication with the discovery of a new material

With each passing year computing performance has advanced significantly, and if we look across decades the rate of advancement is astonishing. The boost in computing performance has been achieved by squeezing ever more transistors into a relatively tight space on microchips. Now scientists have developed ultrathin films, placed on a semiconductor, that make optical communication possible on microchips.
 

The ‘interconnect bottleneck’ in optical communication

 
The downsizing of microchips over the years has led to signal leakage between different components, which eventually results in slower communication between them. This delay in communication has been termed the 'interconnect bottleneck', and it has emerged as a major issue in high-speed computing systems.

One of the best ways to eliminate the interconnect bottleneck in a microchip is to use light for communication between its different parts. Adding more wires is simply out of the question, but using light isn't simple or easy either, since the silicon used to make chips doesn't emit light easily.
 

Finding a new material to emit light

 
Researchers have now found a light emitter and detector that can enable optical communication by being integrated into silicon CMOS chips. The new device is built from a common semiconductor material, molybdenum ditelluride, which belongs to a revolutionary new group of materials called two-dimensional transition-metal dichalcogenides.

The best thing about this material is that it can be stacked right on top of silicon wafers, which wasn't possible with earlier candidates. This 2D molybdenum ditelluride is such a remarkably thin material that it can be attached to almost any surface without much hassle. A major difficulty faced by scientists looking for materials to integrate with silicon is that most of them emit light in the visible range, and silicon is notorious for absorbing light at those wavelengths. Molybdenum ditelluride, however, emits light in the infrared range, which isn't absorbed by silicon, and that is what enables optical communication on the microchip.

 

Future prospects of this new discovery in optical communication

 
Researchers have now stepped up their efforts to find other materials that can also be used for chip-based optical communication in the future. Most telecommunication systems currently operate using light with a wavelength of 1.3 or 1.5 micrometres. Molybdenum ditelluride emits light at 1.1 micrometres, which is suitable for use in the silicon chips found in computers but unsuitable for telecommunications systems.
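
A minimal sketch of why those wavelengths matter, assuming the usual textbook value of roughly 1.1 eV for the silicon bandgap: a photon is strongly absorbed by silicon only if its energy exceeds the bandgap, so emission at or below that energy can travel through silicon with little loss.

```python
# Photon energy vs. the silicon bandgap for the wavelengths mentioned above.
# Assumes the textbook room-temperature silicon bandgap of ~1.12 eV.

PLANCK_EV_S = 4.1357e-15      # Planck constant in eV*s
SPEED_OF_LIGHT = 2.998e8      # m/s
SILICON_BANDGAP_EV = 1.12

def photon_energy_ev(wavelength_um):
    """Photon energy E = h*c / lambda, in electronvolts."""
    return PLANCK_EV_S * SPEED_OF_LIGHT / (wavelength_um * 1e-6)

# 0.65 um is a typical visible (red) emission; 1.1, 1.3 and 1.5 um are the
# infrared wavelengths discussed in the article.
for wavelength in (0.65, 1.1, 1.3, 1.5):
    energy = photon_energy_ev(wavelength)
    print(f"{wavelength:4.2f} um -> {energy:.2f} eV "
          f"(silicon bandgap {SILICON_BANDGAP_EV} eV)")

# Visible photons (~1.9 eV) sit well above the gap and are strongly absorbed by
# silicon; photons at 1.1 um are right at the band edge, and those at 1.3-1.5 um
# fall below it, so silicon is essentially transparent to them.
```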

Therefore researchers are looking for yet another material that could bring optical communication to telecommunication systems. They are currently exploring another ultra-thin material known as black phosphorus, which can emit light at different wavelengths depending on the number of layers used. This research has been published in the journal Nature Nanotechnology.

Thursday 26 October 2017

Novel Circuit Design Boosts Wearable Thermoelectric Generators

Wearable Thermoelectric Generators
Wearable thermoelectric generators make continuous monitoring of vital data possible for athletes and patients. The difficulty is supplying such devices with power permanently. A wearable thermoelectric generator delivering 40 mW of continuous power, worn along with regular clothes, solves the problem.

Supported by the Air Force Office of Scientific Research (AFOSR) and by PepsiCo, Inc., this research has paved the way to a better understanding of the electronic and optical properties of polymer-based materials. A team of researchers from the Georgia Institute of Technology, under the leadership of Professor Shannon Yee, has developed a wearable thermoelectric generator that is both light and flexible and uses the body's heat to generate electrical energy. Thermoelectric generators have previously been made from both organic and inorganic materials; the polymer variant, however, achieves a significantly lower output power, while the inorganic variant performs satisfactorily but yields prototypes that are rigid, comparatively heavy and therefore not wearable.

The Georgia Tech team around Professor Shannon Yee has now developed a fabrication method in which p-type and n-type materials are each prepared as a paste and printed onto a fabric. The pastes penetrate the mesh of the fabric and form a thermoelectric layer about one hundred micrometres thick. As a result, several hundred thermoelectrically active points of alternating p- and n-type material are formed over a given area of the fabric.

The structure of this wearable thermoelectric generator is stable, and it does not require the additional ceramic substrates that absorb a large portion of the available thermal energy. Here the fabric itself serves as the upper and lower substrate of the generator, between which the inorganic thermoelectrically active materials are introduced, so the generator is also flexible. In particular, the weight in comparison with other systems could be substantially reduced, to about 0.13 g/cm². A 10 x 10 cm² generator designed to power a "smart fabric" produces an output of 40 mW from the temperature difference between the wearer's skin and the environment.
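
Working through the figures quoted above (a quick sketch, not additional measured data): power per area, the weight of the patch, and how much patch area a small sensor load might need. The 10 mW sensor load is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope figures for the 10 x 10 cm fabric generator described above.

AREA_CM2 = 10 * 10            # patch area
OUTPUT_MW = 40                # continuous output quoted in the article
WEIGHT_G_PER_CM2 = 0.13       # areal weight quoted in the article

power_density_mw_cm2 = OUTPUT_MW / AREA_CM2
patch_weight_g = WEIGHT_G_PER_CM2 * AREA_CM2

print(f"Power density: {power_density_mw_cm2:.2f} mW/cm^2")
print(f"Patch weight:  {patch_weight_g:.0f} g")

# Hypothetical load: a sensor node drawing 10 mW on average (illustrative value).
sensor_mw = 10
print(f"Patch area needed for a {sensor_mw} mW load: "
      f"{sensor_mw / power_density_mw_cm2:.0f} cm^2")
```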

Tuesday 17 October 2017

The Uses of Captured CO2

We've all heard of CO2: it is the gas we breathe out. We may also have heard that CO2 emissions are a major concern for the atmosphere and for us at large. So what do we do to control this progression towards our ultimate destruction?

Recently scientists have found ways by which CO2 can be captured and transformed into something useful rather than harmful. By this I mean that the CO2 emissions we so dread can now be mixed into other materials, making those materials stronger and, most importantly, reducing the CO2 in the atmosphere.

They say that we as humanity should never sleep on the problem of pollution; well, with this solution we can control CO2 emissions while actually sleeping. That may sound like a trip down the crazy lane, but by sleeping you really can help control CO2 emissions. Scientists have come up with a technology fitted into your pillow that allows the CO2 we breathe out to be captured in the pillow itself, thereby reducing the CO2 emitted into the air.

It is not only pillows that can capture CO2 emissions: other everyday items, such as the soles of our shoes, the spines of our books or even the concrete of our buildings and roads, can also be made with or contain our captured CO2.

So how can all this be done, we might ask ourselves? CO2, which is technically an unreactive gas, can be made to react with the petrochemical raw materials used in making many plastics. In this form, CO2 can account for up to 50% of the material used to make a plastic. Used in this way, CO2 emissions not only find a purpose, but the CO2 produced by the process is also absorbed back into it, and the resulting materials are found to be a lot stronger than if CO2 had not been used.

Other companies are now using captured CO2 emissions to make jet fuel and diesel through carbon engineering, while elsewhere CO2 emissions are being captured and used to make soda ash, an important ingredient in fertilisers, dyes and synthetic detergents.

Scientists claim that, by the end of the year, this process will have reduced CO2 emissions in the atmosphere by 3.5 million tonnes, which is like taking 2 million cars off the road.
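For readers wondering where the "2 million cars" comparison comes from, the equivalence can be reproduced with simple arithmetic. The per-car emissions figure below is an illustrative assumption chosen to match the article's numbers, not a value from the article itself.

```python
# Back-of-the-envelope check of the "2 million cars off the road" comparison.
# The per-car figure is an assumption for illustration only; real-world values
# vary widely with vehicle type and annual mileage.

co2_avoided_tonnes = 3.5e6         # claimed annual reduction from CO2 utilisation
tonnes_per_car_per_year = 1.75     # assumed average annual car emissions

equivalent_cars = co2_avoided_tonnes / tonnes_per_car_per_year
print(f"equivalent cars: {equivalent_cars:,.0f}")   # roughly 2,000,000
```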

But of course all the captured CO2 is very small compared to the amount that is actually in the air and that could, nay will, potentially harm us. Scientists estimate that CO2 emissions amount to 12 to 14 gigatonnes a year, which is roughly 12 to 14 billion tonnes.
We burn a lot of fossil fuel every year to provide gas, coal and oil power, and all of this adds to the CO2 in the air, among other undesirable gases. Although capturing CO2 emissions reduces the amount in the atmosphere, the process is used only on a very small scale and is therefore very expensive.

Wednesday 11 October 2017

Open Rotor Jet Engine That Could Revolutionize Air Travel

Open Rotor

A new design with an open rotor jet engine is expected to revolutionize air travel

 
Aerospace will see a new egg-shaped jet engine that has the potential to dramatically cut fuel consumption and change the landscape of modern air travel. The new 'open rotor' prototype has been designed by the French engine maker Safran, which hopes to see it in action by the year 2030. The engine was developed in association with the European Union and is currently being tested at a French military base near Marseille. The open rotor technology brings the usually hidden whirring parts out into the open in order to capture more air and provide more powerful thrust to the plane.
 

How it works

 
This new engine design is expected to cut fuel consumption, which will in turn help reduce air fares. The engine is shaped like an elongated egg with two rows of blades at the back end, and is designed with the aim of burning 15 percent less fuel than the current generation of turbofan engines. Even though traditional turbofan technology has improved significantly in recent years, the prospects and potential of the new egg-shaped engine are considerable.

The new engine will be placed at the back of the airplane rather than under the wings, which is common practice with today's technology. It also leaves room for two rows of scimitar-shaped carbon blades that spin in opposite directions.
 

Tinkering with the traditional engine technology & design

 
This isn't the first time people have decided to tinker with the traditional engine technology and design of our airplanes. In the 1980s a number of US engine makers worked on unducted fans, but the design was dropped as oil prices steeply declined; at the time, such engines were also considered extremely noisy. The current push to change the popular engine design is driven by the continuous rise in energy costs and the imposition of regulations demanding lower emissions.

One of the researchers has stated that if we want to turn this engine into reality by 2030, development has to begin today. Should oil prices ever jump back over $100 a barrel, the whole industry will become far more interested in an engine technology that cuts fuel consumption this much.

It shouldn't come as a surprise that Rolls Royce has shown interest in this peculiar new engine technology. Another question being thrown at the developers is how passengers will react to finding two naked engines right at the back of the airplane. The developers have no answer for that yet, and the technology still has to be certified by regulators before it debuts on modern planes.

Tuesday 10 October 2017

What Is the Importance of an IT Disaster Recovery Plan?

No matter what industry you work in, you very likely rely on some form of information technology. When something goes wrong with your IT system, it can bring operations to an immediate stop. The importance of IT disaster recovery for a business is most evident after a disaster: without a plan, a company in any industry risks losing massive amounts of money, taking hits to its reputation, and may even pass these risks on to customers or clients. Waiting until a disaster hits, however, is dangerous and risky. Many people believe that such things cannot happen to them, but in reality they can, and do, happen to everybody. Recent storms and cyberattacks should be enough to convince you that you need a disaster recovery plan.

Recent Examples

The climate on this planet is undergoing massive changes, which has led to many costly natural disasters. Hurricane Sandy, for example, did $65 billion worth of damage, and that was, at its height, a Category 2 hurricane. More recent hurricanes, like Harvey and Irma, were significantly stronger and cost even more. Only 30% of respondents to the "2013/2014 Information Governance Benchmarking Survey" believed that their organization had sufficient systems in place for disaster and crisis recovery, management and business continuity. The National Small Business Association conducted a study which found that a vast majority (83%) of organizations still had no business continuity plan a year after Hurricane Sandy wreaked its havoc.

Being Prepared

Even if your organization understands the importance of a contingency plan, it can be hard to understand how to start creating one. It is especially difficult for those who have never had to deal with disaster recovery. These people are susceptible to thinking that they will never need to utilize such a plan. They may also simply have no idea exactly what to prepare for. Either way, having a plan is essential for all organizations, as it is guaranteed to save costs in the long run.

If no one in your organization has any experience with planning for disaster recovery, it is important to consult with someone who has. There are so many things that need to be considered in a contingency plan, and it can be impossible to cover all of the bases without any experience in doing so. Many people also have difficulty recognizing the difference when it comes to disaster recovery vs. business continuity. While a business continuity plan is often used in conjunction with a disaster recovery plan, they are separate plans. The business continuity plan should indicate how business will continue after a disaster. The disaster recovery plan deals specifically with IT.

Once there is a well thought out plan in place, and any previous gaps in the plan have been fixed, any organization will be able to recover from a disaster quickly and efficiently. This helps prevent financial loss and loss of reputation. It can also help reduce the risk of damage to equipment during a disaster, and protect the privacy of client records.

World’s Longest Running Synchrotron Light Experiment Reveals Long Term Behaviour Of Nuclear Waste Materials

Nuclear waste experiment
The world's longest ongoing synchrotron light experiment unveils the long-term behaviour of nuclear waste materials

University of Sheffield researchers, in collaboration with the Diamond Light Source, are the forerunners in studying and understanding gradual transformations in nuclear waste materials. Their experiment has just reached a major milestone of 1,000 days, which makes it the world's longest running synchrotron light experiment.

Led by Dr Claire Corkhill from the University’s Department of Materials Science and Engineering, the research has utilised the world’s best facilities at the Diamond Light Source to study the long-term behaviour of cement materials used in nuclear waste disposal through the synchrotron experiment.

Dr Corkhill explained that these cements are used to securely lock away the radioactive elements present in nuclear waste over a period of more than 10,000 years, so it is vital that the properties of these materials can be accurately predicted far into the future. She added that the unique facilities at Diamond have enabled the team to follow this reaction in situ for 1,000 days, and that the data from the study is already helping them identify the exact phases that will safely and securely lock away the radioactive elements in 1,000 years' time, something they would not have been able to determine otherwise.

Dr Corkhill also stated that she has definite plans to return to the Diamond Light Source to investigate how these particular phases react with uranium, plutonium and technetium on one of the X-ray absorption spectroscopy beamlines.


The Director of the Immobilisation Science Laboratory and co-investigator of this research, Professor Neil Hyatt, said they were all very enthusiastic to have been chosen as the first ever users for this experiment at the world-leading facility. He thanked the people at the Diamond Light Source, in particular Dr Chiu Tang, Dr Sarah Day and Dr Claire Murray from I11, for all the support they provided in helping the experiments see the light of day and for being the perfect curators of the samples over a period of 1,000 days. He added that they were very pleased that this 1,000-day milestone firmly established the long-term association between the University of Sheffield and the scientists at the Diamond Light Source.

Dr Corkhill is currently monitoring the changes in eight nuclear waste cement materials by collecting high-resolution diffraction patterns on the I11-1 beamline at Diamond Light Source, the UK's national synchrotron science facility, which was financed as a joint venture by the UK Government through the Science & Technology Facilities Council (STFC) in collaboration with the Wellcome Trust. Her results are currently being used to support the ongoing safety case for the UK Government's policy of disposing of nuclear waste in a deep geological disposal facility.
Scientists at Diamond plan to construct five more beamlines for these synchrotron experiments by 2020. For now there are no plans to put a stop to this experiment, and it will in all probability continue to break records until the materials stop changing or the space is required by another experiment.

Sunday 1 October 2017

TU Delft Researcher Makes Alcohol Out Of Thin Air

Method of Producing Alcohol from Thin Air

Ming Ma, a PhD student at Delft University of Technology (TU Delft) in The Netherlands, has found a method of producing alcohol from thin air. He has worked out a way of efficiently and accurately controlling the electro-reduction of CO2 to produce a wide range of useful products, including alcohol.

The possibility of using CO2 as a resource in this manner could prove essential in dealing with climate change. His PhD defence took place on September 14th. As a means of managing atmospheric CO2 concentration, carbon capture and utilisation (CCU) can be a practicable alternative strategy to carbon capture and sequestration (CCS).

The electrochemical reduction of CO2 to fuels and value-added chemicals has drawn significant attention as a promising solution. In this process, the captured CO2 is used as a resource and transformed into carbon monoxide (CO), methane (CH4), ethylene and liquid products such as formic acid (HCOOH), methanol (CH3OH) and ethanol (C2H5OH). The high-energy-density hydrocarbons can be used directly and conveniently as fuels within the existing energy infrastructure.
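For reference, the overall reduction of CO2 to some of the products listed above can be written as the following textbook half-reactions (standard equations shown for illustration, not ones quoted from the thesis):

\begin{align}
\mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^- &\rightarrow \mathrm{CO} + \mathrm{H_2O} \\
\mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^- &\rightarrow \mathrm{HCOOH} \\
\mathrm{CO_2} + 8\,\mathrm{H^+} + 8\,e^- &\rightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
\end{align}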

Feedstock in Fischer-Tropsch

In addition, the production of CO is interesting because it can be used as a feedstock in the Fischer-Tropsch process, a mature technology used extensively in industry to convert syngas (CO and hydrogen, H2) into useful chemicals such as methanol and synthetic fuels such as diesel.
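The Fischer-Tropsch step mentioned here can be summarised by the standard overall reaction for alkane synthesis from syngas (again a textbook form, not one taken from the article):

\[
(2n+1)\,\mathrm{H_2} + n\,\mathrm{CO} \rightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O}
\]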

In his PhD thesis, written while working in the group of Dr Wilson A. Smith, Ming Ma described the processes that occur at the nanoscale when various metals are used for the electro-reduction of CO2. For instance, using copper nanowires in the electro-reduction process leads to the production of hydrocarbons, whereas nanoporous silver produces CO.

Moreover, as Ma discovered, the process can be regulated quite accurately by altering the length of the nanowires as well as the applied electrical potential. By adjusting these conditions, one can generate any of the carbon-based products, or combinations of them in any desired ratio, thereby producing the feedstock for the follow-up processes mentioned above. Using metal alloys in the process leads to even more striking results.

Formic Acid – Favourable usage in Fuel Cells

Although platinum on its own tends to produce hydrogen, and gold tends to generate CO, an alloy of these two metals unexpectedly produces formic acid (HCOOH) in relatively large quantities; formic acid is seen as a very promising fuel for fuel cells. The next step for the team at the Smith Lab for Solar Energy Conversion and Storage at TU Delft is to look for ways of enhancing the selectivity towards individual products and to start designing a scaled-up version of the process.

Smith has received an ERC Starting Grant to improve the understanding of the complicated reaction mechanism and thereby gain better control of the CO2 electrocatalytic process. The lab's other line of work is directed at solar-driven water splitting, where the aim is to make hydrogen production from solar water splitting more efficient and cheaper. A cheap, efficient and stable photoelectrode would help improve water splitting with solar energy.

New Machine Learning Algorithms of Google and MIT Retouch Your Photos Before You Take Them

Google Pixel

New machine learning algorithms by Google and MIT retouch your photos before they are captured


It is getting tougher, as time goes by, to extract more and better performance out of your phone's camera hardware. That is why companies like Google are turning to computational photography: using machine learning algorithms to improve the output. The most recent work from the search giant, conducted along with scientists from MIT, takes this a step further, creating machine learning algorithms that can retouch your pictures like a professional photographer, before you have even captured them.

The researchers used machine learning to build their software, training neural networks on a dataset of 5,000 images produced by Adobe and MIT. Every image in this collection has been retouched and improved by five different photographers, and Google and MIT's algorithms used this data to learn what kinds of improvements to make to different photos. This might involve increasing the brightness in certain places, reducing the saturation elsewhere, and so on.
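As a rough illustration of what supervised training on such (original, retouched) image pairs might look like, here is a minimal PyTorch-style sketch. The tiny model, loss and training step are purely illustrative assumptions; they are not the actual Google/MIT architecture.

```python
import torch
import torch.nn as nn

# Illustrative stand-in for a lightweight enhancement network: it predicts a
# residual adjustment (brightness/colour changes) that is added to the input.
class TinyEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.net(x), 0.0, 1.0)

model = TinyEnhancer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(raw, retouched):
    """One supervised step on a batch of (raw, photographer-retouched) images."""
    optimizer.zero_grad()
    predicted = model(raw)
    loss = loss_fn(predicted, retouched)  # push the output toward the expert retouch
    loss.backward()
    optimizer.step()
    return loss.item()
```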

Machine learning has been used before to improve photos, but the real progress in this particular research is the slimming-down of the algorithms so that they are compact and efficient enough to run seamlessly on a user's device. The software itself is about as big as a single digital image and, as a blog post from MIT describes, it is capable of processing images in an assortment of styles.

This means that the neural networks could be trained on new sets of images, for example to replicate a particular photographer's specific look. In a similar way, companies like Facebook and Prisma have produced artistic filters that imitate the style of famous painters. Although smartphones and cameras already process imaging data in real time, these new techniques are more subtle and selective, rather than applying general settings to the whole of each image.

To slim down the machine learning algorithms, the researchers used a few different techniques. These included converting the changes made to each photo into formulae and using grid-like coordinates to map out the pictures. All of this means that the information about how a photo should be retouched can be expressed mathematically, rather than stored as full-scale photos.
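One way to picture "expressing the retouch mathematically" is to store a small affine colour transform for each cell of a coarse grid and then apply those coefficients to the full-resolution pixels. The NumPy sketch below is a simplified illustration of that idea under those assumptions; it is not the paper's actual implementation, which uses a learned bilateral grid and smooth interpolation.

```python
import numpy as np

def apply_grid_affine(image, grid):
    """Apply per-cell affine colour transforms to a full-resolution image.

    image: (H, W, 3) float array with values in [0, 1]
    grid:  (gh, gw, 3, 4) array; each cell holds a 3x4 affine matrix
           (a 3x3 colour-mixing block plus a bias column).
    """
    h, w, _ = image.shape
    gh, gw = grid.shape[:2]
    out = np.empty_like(image)
    for y in range(h):
        for x in range(w):
            # Nearest-cell lookup; the real method interpolates smoothly.
            gy = min(y * gh // h, gh - 1)
            gx = min(x * gw // w, gw - 1)
            A = grid[gy, gx, :, :3]
            b = grid[gy, gx, :, 3]
            out[y, x] = A @ image[y, x] + b
    return np.clip(out, 0.0, 1.0)

# Example: an identity grid with zero bias leaves the photo unchanged.
grid = np.zeros((16, 16, 3, 4))
grid[..., :3] = np.eye(3)
photo = np.random.rand(64, 64, 3)
assert np.allclose(apply_grid_affine(photo, grid), photo)
```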

Google researcher Jon Barron told MIT that this technology has the potential to be very valuable for real-time image enhancement on a mobile phone. He added that using machine learning for computational photography is an exciting prospect, but one that is held back by the severe constraints on the computation and power of mobile phones. This paper may offer a way around those hindrances and create new, interesting, real-time photographic experiences without draining the battery or giving a slow viewfinder experience.

It's not unlikely that these machine learning algorithms will be seen in one of Google's future Pixel phones. The company has used its HDR+ algorithms to bring out more detail in light and shadow on its mobile phones ever since the Nexus 6, and Google's computational photography lead, Marc Levoy, told The Verge last year that they have "only just begun to scratch the surface" with their work.