
Monday 18 December 2017

Small Earthquakes at Fracking Sites May Be Early Indicators of Bigger Tremors

Fracking
7 fears about fracking: science or fiction?

The extraction of shale gas by fracking, or hydraulic fracturing, has revolutionized energy production in the United States, but this controversial technology, banned in France and New York State, continues to generate criticism and protests.

Detractors of the technique, which consists of injecting water and chemical additives at high pressure to fracture the rock containing the hydrocarbons, warn of possible water contamination, methane leaks and earthquakes, among other risks.

The Royal Society, the British academy of sciences, said in its 2012 report that the risks can be effectively managed in the UK "as long as best operational practices are implemented," according to Richard Selley, professor emeritus at Imperial College London and one of the authors of the report.

But those of the contrary opinion are equally firm. Regarding the risk of methane leaks, for example, William Ellsworth, a professor of geophysics at Stanford's School of Earth, Energy & Environmental Sciences, argues that it is not a matter of determining whether wells may leak; the question must be what percentage of them do.

In the midst of an intense and growing controversy over fracking, Stanford University researchers investigated what science says so far.

Can it cause earthquakes?

Two small earthquakes occurred in 2011 in England and led to the temporary suspension of fracking exploration.

The first, in April of that year near the city of Blackpool, reached magnitude 2.3 and was registered shortly after the company Cuadrilla used hydraulic fracturing in a well.

On May 27, after fracturing resumed in the same well, a magnitude 1.5 tremor was recorded.

The monitoring network of the British Geological Survey (BGS) captured both events, which were not felt by local inhabitants.

The company Cuadrilla and the government commissioned separate studies.

"Both reports attribute the seismic events to the fracturing operations of Cuadrilla," said the Royal Society, the British Academy of Sciences, in its joint report with the Royal Academy of Engineers on hydraulic fracturing, published in 2012.

Earthquakes can be unleashed mainly by high pressure injection of wastewater or when the fracturing process encounters a fault that was already under stress. However, the Royal Society said that activities such as coal mining also produce micro-organisms. The suspension of fracking in the United Kingdom was lifted in December 2012, following the report of the Royal Society, which ensured that fracking can be safe "provided that the best operational practices are implemented.

In the United States, a study published in March 2013 in the journal Geology linked the injection of wastewater with the 5.7 magnitude earthquake in 2011 in Prague, Oklahoma. The wastewater injection operations referred to in the study were conventional oil exploitation. However, seismologist Austin Holland of the Oklahoma Geological Survey said that while the study showed a potential link between earthquakes and wastewater injection, "it is still the opinion of the Oklahoma Geological Survey that those tremors could have occurred naturally."

Another study published in July 2013 in the journal Science and led by Nicholas van der Elst, a researcher at Columbia University, found that powerful earthquakes thousands of kilometers away can trigger minor seismic events near wastewater injection wells.

The study indicated that seismic waves unleashed by the 8.8 earthquake in Maule, Chile, in February 2010, moved across the planet causing tremors in Prague, Oklahoma, where the Wilzetta oilfield is located.

"The fluids in the injection of sewage into wells are bringing existing faults to their limit point," said Van der Elst.

Can fracking contaminate the water?

At the request of the US Congress, the Environmental Protection Agency (EPA) is conducting a study on the potential impacts of hydraulic fracturing on sources of drinking water.

A draft of the report will be released at the end of 2014 for comment and peer review. The final report "will probably be finalized in 2016," the EPA confirmed.

In 2011, Stephen Osborn and colleagues at Duke University published a study in the Proceedings of the National Academy of Sciences reporting methane contamination of water sources near fracking exploration sites in the Marcellus formation in Pennsylvania and New York.

The study did not find, however, evidence of contamination by chemical additives or the presence of high salinity wastewater in the fluid that returns to the surface along with the gas.

For its part, the Royal Society said that the risk of fractures created during fracking reaching aquifers is low, provided that gas extraction takes place at depths of hundreds of meters to several kilometers and that the wells, with their casing and cementing, are built to proper standards.

A case cited by the Royal Society in its 2012 report is that of the town of Pavillion, Wyoming, where fracking caused the contamination of water sources for consumption, according to an EPA study. Methane pollution was attributed in this case to poor construction standards and shallow depth of the well, at 372 meters. The study was the first of the EPA to publicly link hydraulic fracturing with water pollution.

However, as in the Duke University study, there were no cases of contamination by the chemical additives used in hydraulic fracturing.


How is the use of chemical additives controlled?

Trevor Penning, head of the toxicology center at the University of Pennsylvania, recently urged the creation of a working group on the impact of fracking with scientists from Columbia, Johns Hopkins and other universities.

Penning said that in the United States "it is decided at the level of each state whether companies have an obligation to publish the list of additives they use."

The industry has established a voluntary database of the additives used, on the FracFocus site. Penning explained that the additives used in fracking fluid can be many and varied, including surfactants, corrosion inhibitors and biocides.

In toxicology, the working assumption is that no chemical is safe; it is the dose that makes the poison. Additives that could cause concern if safe levels are exceeded include benzene substitutes, ethylene glycol and formaldehyde.

"The potential toxicity of wastewater is difficult to assess because many chemical additives used in hydraulic fracturing fluid are undisclosed commercial secrets," Penning added.

The scientist also noted that "the potential toxicity of wastewater is difficult to evaluate because it is a complex mixture (the additives can be antagonistic, synergistic or additive in their effects)".

Anthony Ingraffea, professor of engineering at Cornell University, warned of the impact of the September 2013 floods in Colorado, where some 20,000 wells are located in a single county. "A good part of the infrastructure was destroyed, which means that the ponds and tanks of wastewater with chemical additives are now in the water courses, and there are leaks from damaged gas pipelines. The clear lesson is that fracking infrastructure should never be built in floodplains."

What is done with wastewater?

These waters are what is known as flowback: the injected water, with its chemical additives and sand, which flows back to the surface when the gas starts to come out.

Approximately 25% to 75% of the injected fracturing fluid returns to the surface, according to the Royal Society. This wastewater is stored in lined pits dug into the ground, treated and reused, or injected at high pressure into rock formations. The danger of wastewater leaks is not unique to shale gas extraction but is common to many industrial processes, notes the Royal Society.

"The wastewater may contain naturally occurring radioactive materials (NORM), which are present in the shale rock in quantities significantly below exposure limits," says the Royal Society report.

Can it deplete water resources?

The use of large quantities of water in fracking operations is a cause of concern for some. "For natural gas, for example, fracking requires millions of gallons of water per well (around 2 to 5 million, or even more than 10 million; that is, from about 7 to 18, or up to 37, million liters), which is several times more than conventional extraction requires," said John Rogers, senior energy analyst and co-manager of the Energy and Water Initiative at the Union of Concerned Scientists.

"The extraction of shale gas by fracking consumes on average 16 gallons of water per megawatt-hour, while conventional gas extraction uses 4. That is, fracking requires four times what conventional extraction requires," said Rogers.

"That amount of water is less than what is involved in coal extraction, but the use is highly localized and can matter a great deal locally, in terms of what is available for other uses."

The Water-Smart Power study by the Union of Concerned Scientists points out that about half of hydraulic fracturing operations in the United States occur in regions with high or extremely high water stress, including Texas and Colorado.

Melissa Stark, global director of new energies at the consultancy Accenture and author of a report on water and shale gas exploitation, admits that shale gas extraction with hydraulic fracturing uses a lot of water (about 20 million liters per well), but notes that "it does not use more water than other industrial processes, such as irrigation for agriculture. The volumes required may seem large, but they are small compared to other water uses, for agriculture, electric power generation and municipal use," she said.


Can there be methane leaks?
Anthony Ingraffea, professor of engineering at Cornell University in the United States, says that it is not a matter of determining whether wells can leak; the question must be what percentage of them do.

Ingraffea analyzed the state of wells newly drilled in 2012 in the Marcellus formation in Pennsylvania, based on inspectors' comments in the records of the Pennsylvania Department of Environmental Protection.

According to Ingraffea, the inspectors registered 120 leaky wells; that is, they detected faults and leaks in 8.9% of the gas and oil wells drilled in 2012 (implying roughly 1,350 wells inspected).

A study published in September 2013 by the University of Texas, sponsored among others by nine oil companies, found that while methane leaks from shale gas extraction operations are substantial (more than one million tons per year), they were lower than the estimates of the US Environmental Protection Agency.

However, the association Physicians, Scientists and Engineers for Healthy Energy in the USA, of which Anthony Ingraffea is president, questioned the scientific rigor of that study, noting that its sample of 489 wells represents only 0.14% of the wells in the country, and that the wells analyzed were not selected at random "but at sites and times selected by the industry".

Some reported images of tap water that catches fire when a match is brought near could be explained by the pre-existing presence of methane.

"We must not forget that methane is a natural constituent of groundwater and in some places like Balcombe, where there were protests, the oil flows naturally to the surface," Richard Selley, professor emeritus of Imperial Petroleum Geology.

"We must remember that when a well is drilled and the aquifer area is crossed, three steel rings are placed, surrounded by cement, beneath the aquifer," added Selley.

How does it impact global warming?

Between 1981 and 2005, US carbon emissions increased 33%. But since 2005 they have dropped by 9%. The reduction is due in part to the recession, but according to the US Energy Information Administration (EIA), about half of it is due to shale gas.

Globally, coal provides 40% of the world's electricity, according to the International Energy Agency (IEA). Advocates of shale gas extraction say it is cleaner than coal and can serve as a transition fuel while the use of renewable sources such as solar or wind energy expands.

In Spain, for example, renewables "are approaching 12%, and there is a European Union target for 20% of Europe's energy to be renewable by 2020," said Luis Suárez, president of the Official College of Geologists of Spain (ICOG).

But others point out that the gas extracted by hydraulic fracturing is methane, a much more potent greenhouse gas than carbon dioxide.

According to the Intergovernmental Panel on Climate Change (IPCC), a molecule of methane has the warming effect of 72 molecules of carbon dioxide over the 20 years after emission, and of 25 molecules of carbon dioxide over 100 years.
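Under the article's IPCC factors, leaked methane can be converted into a CO2-equivalent with one multiplication. A minimal sketch follows; the well's lifetime output and the 5% leak rate are illustrative assumptions (the Howarth estimate cited below spans 4% to 8%), not figures from the article:

    # Warming potential of CH4 relative to CO2, per the article's IPCC figures.
    GWP = {20: 72, 100: 25}

    methane_tonnes = 1000.0   # assumed lifetime methane output of one well
    leak_rate = 0.05          # assumed leak fraction (Howarth range: 0.04-0.08)
    leaked = methane_tonnes * leak_rate

    for years, factor in GWP.items():
        print(f"{leaked:.0f} t of leaked CH4 ~ {leaked * factor:,.0f} t CO2-eq "
              f"over {years} years")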

Robert Howarth and colleagues at Cornell University estimated that between 4% and 8% of a well's total methane production escapes into the atmosphere, adding that there are also emissions from the flowback water that returns to the surface with the gas after fracturing.

But this analysis is controversial. Lawrence Cathles, also of Cornell University, says the high 20-year warming potential of methane must be weighed against the fact that methane has a much shorter lifetime in the atmosphere than CO2.

Robert Jackson of Duke University in North Carolina says that instead of worrying about fracking emissions themselves, we should concentrate on leaks in the distribution chain. "In the city of Boston alone we found 3,000 methane leaks in the pipes," Jackson told New Scientist magazine.

Wednesday 4 October 2017

Biological Clock Discoveries by 3 Americans Earn Nobel Prize

Nobel Prize
The discoverers of the 'internal clock' of the body, Nobel Medicine 2017

The winners are Jeffrey C. Hall, Michael Rosbash, and Michael W. Young

US scientists Jeffrey C. Hall, Michael Rosbash and Michael W. Young today won the 2017 Nobel Prize in Medicine, "for their discoveries of the molecular mechanisms that control the circadian rhythm," according to the jury of the Karolinska Institute in Stockholm, responsible for the award. The prize is endowed with nine million Swedish crowns, about 940,000 euros.

Thanks in part to their work, it is now known that living beings carry in their cells an internal clock, synchronized with the Earth's 24-hour rotation. Many biological phenomena, such as sleep, occur rhythmically around the same time of day thanks to this inner clock. Its existence was suggested centuries ago. In 1729, the French astronomer Jean-Jacques d'Ortous de Mairan observed the case of mimosas, plants whose leaves open to the sunlight during the day and close at dusk. He found that this cycle repeated even in a dark room, suggesting the existence of an internal mechanism.

In 1971, Seymour Benzer and his student Ronald Konopka of the California Institute of Technology took a momentous leap in the research. They took vinegar flies and induced mutations in their offspring with chemicals. Some of these new flies had alterations in their normal 24-hour cycle: in some it was shorter and in others longer, but in all of them these perturbations were associated with mutations in a single gene. The discovery could have earned the Nobel, but Benzer died in 2007, at age 86, of a stroke, and Konopka died in 2015, at age 68, of a heart attack.

The Nobel finally went to Hall (New York, 1945), Rosbash (Kansas City, 1944) and Young (Miami, 1949). In 1984 the three used more flies to isolate that gene, baptized "period" and associated with the control of the normal biological rhythm. Subsequently, they revealed that this gene and others self-regulate through their own products (different proteins), generating oscillations of about 24 hours. It was "a paradigm shift", in the words of the Argentine neuroscientist Carlos Ibáñez, of the Karolinska Institute. Each cell had a self-regulating internal clock.
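The logic of that discovery, a gene whose own protein product feeds back to shut off its transcription, can be illustrated with a toy model. The following is a minimal sketch of a generic Goodwin-type oscillator, not the laureates' actual model; all rate constants are illustrative, and the period is set by the chosen degradation rates rather than tuned to 24 hours:

    import numpy as np
    from scipy.integrate import odeint

    # Delayed negative feedback: mRNA (m) -> protein (p) -> repressor (r),
    # with r suppressing transcription of its own gene. Sharp repression
    # (a high Hill exponent) makes this loop oscillate indefinitely.
    def clock(y, t, hill=10.0):
        m, p, r = y
        dm = 1.0 / (1.0 + r**hill) - 0.1 * m  # transcription, repressed by r
        dp = m - 0.1 * p                      # translation
        dr = p - 0.1 * r                      # repressor accumulation
        return [dm, dp, dr]

    t = np.linspace(0.0, 400.0, 8000)
    m = odeint(clock, [0.1, 0.2, 0.3], t)[:, 0]

    # Count local maxima of mRNA level to confirm a sustained rhythm.
    peaks = int(((m[1:-1] > m[:-2]) & (m[1:-1] > m[2:])).sum())
    print("oscillation peaks observed:", peaks)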

The scientific community has since established the importance of this mechanism for human health. The inner clock is involved in the regulation of sleep, in hormone release, in eating behavior, and even in blood pressure and body temperature. If the pace of life does not follow this internal script, as happens in people who work night shifts, the risk of various diseases, such as cancer and some neurodegenerative disorders, can increase, says Ibáñez. The fast time-zone change syndrome, better known as jet lag, is a clear sign of the importance of this internal clock and of its mismatches.

The Karolinska researcher gives the example of a 24-hour cycle in which the internal clock anticipates and adapts the body's physiology to the different phases of the day. The day begins with deep sleep and a low body temperature; the release of cortisol at dawn then raises blood sugar, and the body readies its energies to face the day. When night falls, blood pressure peaks and melatonin, a hormone linked to sleep, is secreted.

These inner rhythms are known as circadian, from the Latin words circa, around, and dies, day. The scientific community now knows that these "around the day" molecular clocks emerged very early in living things and were preserved throughout their evolution. They exist both in single-cell life forms and in multicellular organisms such as fungi, plants, animals and humans.

At the time of the discovery, Hall and Rosbash were working at Brandeis University in Waltham, and Young was researching at Rockefeller University in New York. Their recognition follows the pattern of the Swedish awards: men have won 97% of the Nobel prizes in science since 1901. In the category of Medicine, the statistics improve slightly: 12 of the 214 laureates are women, or 5.6%.

Monday 18 September 2017

Engineers Developing Methods to Construct Blood Vessels Using 3D Printing Technology

3D Printing Technology
From time to time, new and interesting news arises about 3D printing technology in the field of health. In the near future, this technology may allow tissues to be created on demand to repair any organ affected by illness. Medical advances are emerging day by day, and 3D printing is among the most astonishing in medical science.

However, in spite of the promise of these and other advances, to date it has only been possible to create thin tissues of living cells in the laboratory using 3D printing. When thicker tissues of more than several cell layers were attempted, the cells in the intermediate layers died from lack of oxygen and the inability to eliminate their waste.

They had no network of blood vessels to carry oxygen and nutrients to each cell. The challenge, then, was clear: if a network of blood vessels could be created artificially using 3D printing, larger and more complex cell tissues could be developed.

To solve this problem, the team led by Professor Changxue Xu of the Department of Industrial, Manufacturing and Systems Engineering at the Edward E. Whitacre Jr. College of Engineering has used a 3D printing technology specially adapted for this purpose, with three different types of bio-inks. The first printhead extrudes a bio-ink of extracellular matrix, the biological material that binds cells into tissue. The second extrudes a bio-ink that contains both extracellular matrix and living cells.

An alternative to more complex installations

The creation of model blood vessels to aid in the study of diseases such as strokes can be complicated and costly, in addition to consuming a lot of time, and the results cannot always be truly representative of a human vessel. Changxue Xu's research has produced a new method of creating models of veins and arteries using 3D printing that is more efficient, less expensive and more accurate. Xu and his team have created vascular channels using 3D printing technology.

An important advance is the ability to establish multiple layers of cells in the channels. Normally, when these microfluidic vascular chips are made, they have only one layer of cells, but the blood vessels in the body are composed of three to four different types of cells. The innermost cells, the endothelial cells, are the ones that come into contact with blood, while the other cell layers support the inner ones. If there is an injury or a blood clot, an entire reaction takes place between these cells.

3D printing has now made a difference in manufacturing. "We can use 3D printing technology to create the mold and use that mold to inject any gel and cells in whatever shape we want," says Changxue Xu. The difficulty so far has been that much of this work must usually be done in "clean rooms", rooms that are environmentally controlled and ultra-disinfected to prevent contamination. Xu's lab lacks such a room, so that work has had to be done at other universities.

Tuesday 5 September 2017

Supercapacitive Performance of Porous Carbon Materials Derived from Tree Leaves

carbon

Converting Fallen Leaves – Porous Carbon Material

Researchers in China have found an innovative way of converting fallen dried tree leaves into a porous carbon material that could be used to produce high-tech electronics. In a study published in the Journal of Renewable and Sustainable Energy, they describe the procedure for converting tree leaves into a form that can be integrated into electrodes as an active material. First, the dried leaves are ground into a powder and heated to 220 degrees Celsius for about 12 hours, which yields a powder of small carbon microspheres.

The carbon microspheres are then treated with a solution of potassium hydroxide and heated, raising the temperature in steps from 450 to 800 degrees Celsius. The chemical treatment corrodes the surface of the carbon microspheres, making them tremendously porous.

The final product, a black carbon powder, has a very high surface area owing to the many small pores chemically etched into the surface of the microspheres. This high surface area gives the final product its unusual electrical properties.

Porous Microspheres



Led by Hongfang Ma of Qilu University of Technology in Shandong, the researchers ran a series of standard electrochemical tests on the porous carbon microspheres to quantify their potential for use in electronic devices.

The current-voltage curves for these materials showed that the material makes an exceptional capacitor. Further tests indicated that the materials were in fact supercapacitors, with specific capacitances of 367 farads per gram.

That is over three times the value seen in some graphene supercapacitors. A capacitor is a widely used electronic component that stores energy by holding a charge on two conductors separated by an insulator.

Supercapacitors can store 10 to 100 times as much energy as an ordinary capacitor, and can accept and deliver charge much faster than a typical rechargeable battery. Supercapacitive materials therefore hold promise for a wide range of energy storage needs, particularly in computer technology and hybrid or electric vehicles.
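To put the quoted specific capacitance in perspective, a capacitor's stored energy follows E = ½CV². The sketch below runs that arithmetic; the 2.7 V working voltage is an assumed, typical supercapacitor value (the article gives only capacitances), and the results are idealized electrode-material figures, well above what a packaged device would deliver:

    # Energy density implied by a specific capacitance, E = 1/2 * C * V^2.
    def energy_wh_per_kg(capacitance_f_per_g, voltage_v):
        joules_per_gram = 0.5 * capacitance_f_per_g * voltage_v**2
        return joules_per_gram * 1000.0 / 3600.0  # J/g -> Wh/kg

    leaf_carbon_f_per_g = 367.0      # from the study
    graphene_f_per_g = 367.0 / 3.0   # implied by "over thrice the value"

    for name, cap in [("leaf-derived carbon", leaf_carbon_f_per_g),
                      ("graphene (implied)", graphene_f_per_g)]:
        print(f"{name}: {energy_wh_per_kg(cap, 2.7):.0f} Wh/kg at 2.7 V")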

Enhancing the Electrochemical Properties



The roadsides of northern China are scattered with deciduous phoenix trees, which produce abundant fallen leaves in autumn. These leaves are usually burned in the colder months, aggravating the country's air pollution problem.

The investigators in Shandong, China, sought to resolve this issue by converting this waste biomass into porous carbon materials that can be used in energy storage technology. Besides tree leaves, the team and others have also succeeded in converting potato waste, corn straw, pine wood, rice straw and other agricultural wastes into carbon electrode materials.

Professor Ma and her colleagues hope to further improve the electrochemical properties of these porous carbon materials by refining the preparation procedure and allowing the raw materials to be tuned or adjusted.

Wednesday 12 July 2017

iPhone 8 to ditch fingerprint sensor for face scanner, reports say

iPhone 8

iPhone 8 – Revamped Security System

Apple's upcoming iPhone 8 is expected to feature a revamped security system in which users can unlock the device using their face instead of their fingerprints. The 10th-anniversary iPhone is expected to have a radical redesign that includes a security system that scans users' faces to check who is using the device.

According to Bloomberg, the 3D scanning system would replace Touch ID as a means of verifying payments, logging in to apps and unlocking the phone. It could function at various angles, so the iPhone could be unlocked by merely looking at it, whether it is lying flat on a table or held upright. The scanning system is reported to be designed for speed and precision, and can scan the user's face and unlock the device within a few hundred milliseconds.

Since it analyses 3D rather than 2D images, it is likely to be capable of differentiating between a person's face and an image of that person. Apple could also use eye-scanning technology to strengthen the device's security, as is presently available in Samsung's Galaxy S8.

Face Scanning Technology

Bloomberg reported that the face-scanning technology could be more secure than Touch ID, first released in 2013 on the iPhone 5S, since it draws on more identifiers. Apple has claimed that its fingerprint scanner has only a 1 in 50,000 chance of being unlocked by a stranger's fingerprint. According to Ming-Chi Kuo, an analyst with a reliable track record, the iPhone 8 is said to feature an edge-to-edge OLED screen with a higher screen-to-body ratio than any smartphone available at the moment.

Apple would probably remove the Home button and the Touch ID scanner to make room for the display. Kuo has also predicted that Apple will release three new phones in September: the iPhone 8, iPhone 7S and iPhone 7S Plus. The iPhone 8 would feature the most dramatic redesign of the three, fitting a 5.2-inch screen into a device the same size as the iPhone 7. It would also have fewer colour options and would come with a glass front and back with steel edges.

New Chip Dedicated to Processing Artificial Intelligence

The well-connected Apple blogger John Gruber has suggested that the top iPhone could be named 'iPhone Pro', and that the cost could be $1,500 or higher. The remaining two devices would feature LCD screens and be available in 4.7-inch and 5.5-inch sizes. Like the present iPhone 7, these devices would probably have a Home button with Touch ID.

If Kuo's predictions are accurate, the three phones would have a Lightning port with embedded USB-C and come with 64GB or 256GB of storage. They would also carry a new chip dedicated to processing artificial intelligence, which is presently being tested.

Monday 10 July 2017

Watching Cities Grow



High-Resolution Civilian Radar Satellite

The world's major cities keep growing, and according to United Nations estimates, half of the world's population now lives in cities. By 2050 the figure is expected to climb to two thirds of the world's population.

Xiaoxiang Zhu, Professor for Signal Processing in Earth Observation at TUM, notes that this growth places high demands on building and infrastructure safety, since destructive events can threaten thousands of human lives at once. Zhu and her team have established a method for the early detection of potential dangers: underground subsidence, for instance, can cause the collapse of buildings, bridges, tunnels or even dams.

The new system makes it possible to detect and visualize changes as small as one millimetre per year. Data for the latest urban images comes from the German TerraSAR-X satellite, one of the highest-resolution civilian radar satellites in the world. Since 2007 the satellite, circling the Earth at an altitude of approximately 500 kilometres, has been sending microwave pulses to the earth and collecting their echoes. Zhu explains that at first these measurements yielded only a two-dimensional image with a resolution of one metre.

Generate Highly Accurate Four-Dimensional City Model

The TUM professor worked in partnership with the German Aerospace Center (DLR), where she also leads her own working group. DLR is in charge of the operation and use of the satellite for scientific purposes.

The resolution of the images is limited by the fact that reflections from different objects at the same distance from the satellite lay over one another, an effect that reduces the three-dimensional world to a two-dimensional image. Zhu not only created her own algorithms that make it possible to reconstruct the third, and even a fourth, dimension, but set a world record at the same time.

Four-dimensional point clouds with a density of three million points per square kilometre have been reconstructed. This rich recovered information makes it possible to generate highly accurate four-dimensional city models.

Radar Measurements to Reconstruct Urban Infrastructure

The trick is that the scientists use images taken from slightly different viewpoints. Every eleven days the satellite flies over the region of interest, but its orbital position is never precisely the same. The researchers exploit these orbital variations of up to 250 metres in radar tomography to localize each point in three-dimensional space.

The system uses the same principle as computed tomography, which builds a three-dimensional view of the interior of the human body: several radar images taken from different viewpoints are combined to create a three-dimensional image. Zhu states that since this alone yields only poor resolution in the third dimension, an additional compressive sensing method is applied that improves the resolution by a factor of 15.
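Compressive sensing helps here because only a few scatterers occupy any given range cell, so the elevation profile being reconstructed is sparse and can be recovered from far fewer measurements than unknowns. The toy sketch below illustrates the idea with iterative soft-thresholding (ISTA); the random measurement matrix and all dimensions are illustrative stand-ins, not the actual TerraSAR-X processing chain:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 30, 300                                # orbital passes vs elevation bins
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in measurement matrix
    x_true = np.zeros(n)
    x_true[[60, 190]] = [1.0, 0.7]                # two scatterers in layover
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    # ISTA: gradient step on the least-squares data fit, then soft-threshold
    # to enforce sparsity (the L1 penalty).
    lam = 0.02
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(800):
        g = x - step * (A.T @ (A @ x - y))
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

    print("recovered scatterers at bins:", np.nonzero(np.abs(x) > 0.1)[0])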

Using the radar measurements from TerraSAR-X, the scientists can reconstruct urban infrastructure on the surface of the Earth with great accuracy, for instance the 3D shape of individual buildings. This system has already been used to generate highly precise 3D models of Berlin, Paris, Las Vegas and Washington DC.

Friday 7 July 2017

Hot Electrons Move Faster Than Expected

 Hot Electrons

Ultrafast Motion of Electrons


New research could give rise to solid-state devices that make use of excited electrons. Engineers and scientists at Caltech have, for the first time, directly observed the ultrafast motion of electrons immediately after they are excited by a laser, and found that these electrons diffuse into their surroundings faster and farther than previously anticipated.

This behaviour, known as "super-diffusion", had been hypothesized but never observed. A team headed by Marco Bernardi of Caltech and the late Ahmed Zewail documented the motion of the electrons using a microscope that captured images with a shutter speed of a trillionth of a second at nanometre-scale spatial resolution. Their findings appeared in a study published on May 11 in Nature Communications.

The excited electrons displayed a diffusion rate 1,000 times higher than before excitation. Though the phenomenon lasted only a few hundred trillionths of a second, it opens the possibility of manipulating hot electrons in this fast regime to transport energy and charge in novel devices.

Bernardi, assistant professor of applied physics and materials science in Caltech's Division of Engineering and Applied Science, said their work shows the existence of a fast transient, lasting a few hundred picoseconds, during which electrons move much faster than at their room-temperature speed, indicating that they can cover longer distances in a given period of time when manipulated with lasers.

Ultrafast Imaging Technology


He further added that this non-equilibrium behaviour could be employed in novel electronic, optoelectronic and renewable energy devices, as well as for uncovering new fundamental physics. Bernardi's colleague, Nobel Laureate Ahmed Zewail, the Linus Pauling Professor of Chemistry, professor of physics and director of the Physical Biology Center for Ultrafast Science and Technology at Caltech, passed away on 2 August 2016.

The research was made possible by scanning ultrafast electron microscopy, an ultrafast imaging technology pioneered by Zewail, capable of creating images with picosecond time resolution and nanometre spatial resolution. Bernardi developed the theory and computer models that interpreted the experimental results as an indicator of super-diffusion.

Bernardi plans to continue the research by trying to answer fundamental questions about excited electrons, such as how they equilibrate among themselves and with atomic vibrations in materials, together with applied ones, such as how hot electrons could increase the efficiency of energy-conversion devices like solar cells and LEDs.

Super Diffusion of Excited Carriers in Semiconductors


The paper is entitled "Super-diffusion of Excited Carriers in Semiconductors". Co-authors include former Caltech postdoc Ebrahim Najafi, the lead author of the paper, and former graduate student Vsevolod Ivanov. The research was supported by the National Science Foundation, the Air Force Office of Scientific Research, the Gordon and Betty Moore Foundation, and the Caltech-Gwangju Institute of Science and Technology (GIST) program.

Saturday 1 July 2017

Sensor Solution: Sensor Boutique for Early Adopters

Sensor Boutique
Every chemical substance absorbs a very individual fraction of infrared light. Almost like a human fingerprint, this absorption can be used to recognise substances with the help of optical methods.

To elaborate on this concept: when infrared radiation within a certain wavelength range is absorbed by molecules, they are excited to a higher level of vibration, in which they rotate and vibrate in a typical and distinctive "fingerprint" pattern. These patterns can be used to identify specific chemical species. Such methods are used, for example, in the chemical industry, but they also have uses in the health sector and in criminal investigation. A company planning a new project often needs an individually tailored sensor solution.

The EU-funded pilot line MIRPHAB (Mid InfraRed Photonics devices fABrication for chemical sensing and spectroscopic applications) supports companies that are in search of a suitable system, and helps in the development of sensor and measurement technology in the mid-infrared (MIR). The Fraunhofer Institute for Applied Solid State Physics IAF is participating in this project.

Pilot line for ideal spectroscopy solutions


A company looking for a sensor solution has very individual needs, for example if it has to identify a particular substance in a production process. These range from the substances to be detected, to the number of sensors required, to the speed of the production process. In most cases a one-size-fits-all solution does not suffice, and several suppliers are required to develop the optimal individual solution. This is where MIRPHAB comes into the picture and proves very useful.

Leading European research institutes and companies in the MIR field have collaborated to provide customers with custom-made, best-suited offers from a single source. Interested parties can get in touch with a central contact person, who then compiles the best possible solution from the MIRPHAB members' component portfolios according to a modular principle.

Within this framework, EU funding supports MIRPHAB in the development of individual MIR sensor solutions, in order to strengthen European industry in the long run and expand its leading position in chemical analysis and sensor technology. This considerably lessens the investment costs and, as a result, lowers the entry barrier for companies into the MIR area.

Companies that previously faced high costs and development effort now find a high-quality MIR sensor solution an attractive prospect thanks to its combination with the virtual infrastructure developed by MIRPHAB. MIRPHAB also provides companies with access to the latest technologies, giving them an added advantage as early adopters compared to the competition.

Custom-made source for MIR lasers


The Freiburg-based Fraunhofer Institute for Applied Solid State Physics IAF, along with the Fraunhofer Institute for Photonic Microsystems IPMS in Dresden, is providing a central component of the MIRPHAB sensor solution. The Fraunhofer IAF is contributing the new technology of quantum cascade lasers, which emit laser light in the MIR range. In this type of laser, the wavelength range of the emitted light is spectrally broad and can be adapted as required during manufacturing. To select a particular wavelength within the broad spectral range, an optical diffraction grating is used to pick out that wavelength and couple it back into the laser chip. The wavelength can be adjusted continuously by turning the grating. This grating is created at the Fraunhofer IPMS in miniaturized form using so-called Micro-Electro-Mechanical-System (MEMS) technology, which makes it possible to oscillate the grating at frequencies of up to one kilohertz. This in turn enables the wavelength of the laser source to be tuned up to a thousand times per second over a large spectral range.

The Fraunhofer Institute for Production Technology IPT in Aachen is also involved in MIRPHAB, in order to make the manufacturing of the lasers and gratings more efficient and to optimize them for pilot-series fabrication. With its expertise, it turns the production of the rapidly tunable MIR laser into industrially applicable manufacturing processes.

Process monitoring in real time

Currently, many applications in the field of spectroscopy still operate in the visible or near-infrared range and use comparatively weak light sources. The solutions offered by MIRPHAB are based on infrared semiconductor lasers, which have a comparatively high light intensity and thus open the scope for completely new applications. This allows the recording of up to 1,000 spectra per second with the MIR laser source, which, for example, provides for the real-time automated monitoring and control of biotechnological processes and chemical reactions. MIRPHAB's contribution is thus considered important and vital to the factory of the future.

Tuesday 27 June 2017

Space Robot Technology Helps Self-Driving Cars and Drones on Earth

Support Robots to Navigate Independently
 
The key to making fleets of self-driving cars and grocery delivery by drone a reality could come from an unlikely source: autonomous space robots.

Marco Pavone, an assistant professor of aeronautics and astronautics, creates technologies to help robots adapt to unknown and changing environments. Before coming to Stanford, Pavone worked in robotics at NASA's Jet Propulsion Laboratory, and he has maintained relationships with NASA centres and collaborations with other departments at Stanford. He views his work in space and Earth technologies as complementary.

In a sense, he comments, some robotics techniques that were designed for autonomous cars could be very useful for spacecraft control. Likewise, the algorithms that he and his students devise to help robots make decisions and assessments on their own within a fraction of a second could aid space exploration as well as improve the driving of cars and drones on Earth.

One of Pavone's projects centres on helping robots navigate independently to bring space debris out of orbit, deliver tools to astronauts, and grasp spinning, speeding objects in the vacuum of space.
 
Gecko-Inspired Adhesives
 
There is no margin for error when grabbing objects in space. Pavone explains that when you approach an object in space, if you are not very careful to grasp it at the moment of contact, the object will float away from you. Bumping an object in space makes recovering it very difficult.

Pavone teamed up with Mark Cutkosky, a professor of mechanical engineering who has spent the last decade perfecting gecko-inspired adhesives, in order to solve the grasping problem.

The gecko grippers allow a gentle approach and a simple touch to "grasp" an object, enabling easy capture and release of spinning, unwieldy space debris. However, the delicate manoeuvring needed for grasping in space is not an easy job. As Pavone puts it, one has to operate in close proximity to other objects, spacecraft or debris, which demands advanced decision-making capabilities.

Pavone and his co-workers developed systems that enable a space robot to respond independently to such fluid situations and competently grab space objects with its gecko grippers.
 
Perception-Aware Planning
 
The resulting robot can move and grasp in real time, updating its decisions at a rate of several thousand times a second. This kind of decision-making technology is also beneficial for solving navigation problems with Earth-bound drones.

Graduate student Benoit Landry notes that for these types of vehicles, navigating at high speed in proximity to buildings, people and other flying objects is difficult to do well. He stresses that there is a delicate interplay between making decisions and perceiving the environment, adding that in this respect several aspects of decision-making for autonomous spacecraft are directly relevant to drone control.

Landry and Pavone have been working on "perception-aware planning", which lets drones weigh fast routes against the need to "see" their surroundings and better estimate where they are. The work is presently being extended to handle interactions with humans, a key component in deploying autonomous systems such as drones and self-driving cars.

 



Reduced-Gravity Environments
 
Landry also mentions that Pavone's background at NASA is a good complement to the academic work. When a robot lands on a small solar system body such as an asteroid, additional challenges arise.

These environments have totally different gravity from the Earth's. Pavone notes that if one were to drop an object from waist height there, it would take a couple of minutes to settle to the ground. Ben Hockman, a graduate student in Pavone's lab, works on a cubic robot called Hedgehog, designed to deal with low-gravity environments such as asteroids.

The robot traverses uneven, rugged, low-gravity terrain by hopping rather than driving like traditional rovers. Ultimately, Pavone and Hockman want Hedgehog to be able to navigate and carry out tasks without being explicitly told how to perform them by a human located millions of miles away. Hockman mentions that the current Hedgehog robot is designed for reduced-gravity environments, though it could be adapted for Earth.

It would not hop quite as far, since Earth's gravity is stronger, but it could be used to cross more rugged terrain where wheeled robots are unable to go. Hockman views the research he has been doing with Pavone as core scientific exploration, adding that science attempts to answer the difficult questions we don't know the answers to, while exploration seeks to find whole new questions we don't even know yet how to ask.

Monday 26 June 2017

Sony Unveils New 'Spider-man' Game at E3 Expo

Sony

Sony's New Game – 'Spider-Man'

Sony recently unveiled a new game, "Spider-Man", for the PlayStation console at the Electronic Entertainment Expo (E3) in Los Angeles. "Spider-Man" is expected to be released in 2018 and is being developed by Insomniac Games, the studio behind PlayStation titles such as "Resistance" and "Ratchet & Clank".

Unveiling the game, Shawn Layden, president and CEO of Sony Interactive Entertainment America, commented that the future is here and now with PlayStation 4 Pro and PS VR. Virtual reality (VR) is rapidly becoming the new battleground in gaming, with developers seeking to win over fans with immersive headsets and accessories.

Sony Corp said last week that it had sold over one million units of its virtual reality headset worldwide and was ramping up production. Also at E3, Sony announced that the cult game "Shadow of the Colossus" will be getting a high-definition remake for PlayStation 4; this game, as well as the next "God of War" edition, is likely to be released next year. Spidey games have mostly not been good since Spider-Man 2, but we live in hope.

 

Reclaiming Earlier Glory

 
The reason Spider-Man 2 remains the standard for Spidey games comes down to the feel of its sandbox. The new game, which follows an original story rather than a film tie-in, looks set on reclaiming some of that earlier glory. Though there has been no firm announcement, it certainly looks as if it is using the Spider-Man 2 model.

Perhaps the biggest E3 2017 news so far is the launch of the Xbox One X: after several months of speculation, Microsoft unveiled the game console formerly known as "Project Scorpio". One of the most striking features of the Xbox One X is its design.

A formidable amount of hardware has been crowded into what Microsoft claims is the smallest Xbox yet. The new high-end console, Microsoft's answer to the PS4 Pro, will hit the shelves on 7 November and will cost £449.99. At its global E3 showcase, Sony may not have revealed a brand-new console, but there was no lack of best-selling games on offer. Ubisoft's contributions covered everything from action shooters like "Far Cry 5" to sports, piracy, dance, space monkeys and virtual reality.

 

Prime Announcement – New Game - `Far Cry’ Series

 
However, the prime announcement was the new game in the tremendously popular "Far Cry" series. The forthcoming edition of the first-person shooter action-adventure is the 11th instalment in the award-winning series and is scheduled for release on 27 February 2018.

The next game in the long-running "Assassin's Creed" franchise is called "Assassin's Creed: Origins" and is one of the most anticipated games of the year. Assassin's Creed is an action-adventure video game franchise created by Ubisoft. Plenty of rumours and speculation circulated about "Assassin's Creed: Origins" well ahead of E3 2017, and the new video game heads to Egypt, taking the story back to an ancient world. On 27 October, versions of Origins tailored for the Xbox One, PlayStation 4 and Windows-powered personal computers will be released.

Thursday 22 June 2017

Cyber Firms Warn of Malware That Could Cause Power Outages

Malware

Malicious Software – Easily Modified to Harm Critical Infrastructure

Two cyber security firms have uncovered malicious software presumed to have caused a December 2016 power outage in Ukraine, and they caution that the malware could be modified with ease to harm critical infrastructure operations all over the world.

ESET, a Slovakian maker of anti-virus software, together with Dragos Inc., a U.S. critical-infrastructure security firm, released detailed analyses of the malware, called Industroyer or Crash Override, and dispensed private alerts to governments and infrastructure operators to assist them in defending against the threat.

The U.S. Department of Homeland Security said it was investigating the malware but had not seen any evidence to suggest it had infected U.S. critical infrastructure. The two firms stated that they did not know who was behind the cyber-attack. Ukraine has put the blame on Russia, but officials in Moscow have constantly denied it.

The firms cautioned that there could be further attacks using the same method, whether by the group that built the malware or by imitators who alter the malicious software. ESET malware researcher Robert Lipovsky said in a telephone interview that the malware is easy to repurpose and use against other targets, which is certainly alarming, as it could cause wide-scale damage to vital infrastructure systems.

System Compromised by Crash Override

That warning was echoed by the Department of Homeland Security, which said it was working to better understand the threat posed by Crash Override. The agency mentioned in an alert posted on its website that "the tactics, techniques and procedures described as part of the Crash Override malware could be modified to target U.S. critical information networks and systems".

The alert posted around three dozen technical indicators that a system had been compromised by Crash Override and requested that firms contact the agency if they suspected their systems had been compromised by the malware. Robert M. Lee, founder of Dragos, stated that the malware is capable of attacking power systems all over Europe and could be leveraged against the United States with small modifications.

Risk to Power Distribution Organizations

Lee further mentioned by phone that "it is able to cause outages of up to a few days in portions of a nation's grid but is not strong enough to bring down an entire grid of a country". Lipovsky stated that, through modifications, the malware could attack other kinds of infrastructure, including local transportation providers and gas and water providers.

Alan Brill, a leader of Kroll's cyber security practice, mentioned in a telephone interview that power firms are concerned there will be more attacks. He added that they have been dealing with very smart people who came up with something and deployed it, and that it represents a risk to power distribution organizations everywhere.

Industroyer is only the second piece of malware uncovered to date that has the potential to disrupt industrial processes without the need for hackers to intervene manually. Stuxnet, first discovered in 2010, is generally believed by security researchers to have been used by the United States and Israel to attack Iran's nuclear program. The Kremlin and Russia's Federal Security Service did not reply to requests for comment.

Deep Learning With Coherent Nanophotonic Circuits

 Nanophotonic Circuits
Light processor recognizes vowels

Nanophotonic module forms the basis for artificial neural networks with extreme computing power and low energy requirements

Supercomputers are approaching enormous computing powers of up to 200 petaflops, i.e., 200 million billion operations per second. Nevertheless, they lag far behind the efficiency of human brains, mainly because of their high energy requirements.

A processor based on nanophotonic modules now provides the basis for extremely fast and economical artificial neural networks. As the American developers report in the journal Nature Photonics, their prototype was able to carry out computing operations at a rate of more than 100 gigahertz with light pulses alone.

"We have created the essential building block for an optical neural network, but not yet a complete system," says Yichen Shen of the Massachusetts Institute of Technology in Cambridge. The nanophotonic processor developed by Shen and his colleagues consists of 56 interferometers, in which light waves interact and form interference patterns.

These modules are suitable for measuring the phase of a light wave between wave peak and wave trough, but can also be used to change this phase in a targeted way. In the prototype processor, these interferometers, each of which corresponds in principle to a neuron in a neural network, were arranged in a cascade.

After the researchers had simulated their concept in advance with elaborate models, they also tested it in practice with an algorithm for recognizing vowels. The principle of the photonic processor: a spoken vowel unknown to the system is assigned to the light signal of a laser with a specific wavelength and amplitude. When fed into the interferometer cascade, this light signal interacts with additional laser pulses fed in alongside it, and different interference patterns are produced in each interferometer.

At the end of these extremely fast processes, the resulting light signal is detected with a sensitive photodetector and assigned to a vowel by an analysis program. The purely optical system correctly identified the sound in 138 of 180 test runs, a hit rate of about 77%. For comparison, the researchers also carried out the recognition with a conventional electronic computer, which achieved a slightly higher hit rate.
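The computational heart of such a chip is easy to state: each interferometer applies a programmable 2x2 unitary transform to a pair of optical modes, and a cascade of interferometers realizes a larger matrix multiplication, i.e. one neural-network layer, as the light propagates. Here is a toy numerical sketch of that principle; the mesh layout and phase values are illustrative, not the published 56-interferometer design:

    import numpy as np

    # 2x2 transfer matrix of a Mach-Zehnder interferometer: two 50:50
    # couplers around an internal phase shift, preceded by an external
    # phase shifter (one common convention).
    def mzi(theta, phi):
        phase = np.diag([np.exp(1j * phi), 1.0])
        coupler = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
        inner = np.diag([np.exp(1j * theta), 1.0])
        return coupler @ inner @ coupler @ phase

    # Embed a 2x2 block acting on modes (k, k+1) of an n-mode circuit.
    def embed(u2, k, n):
        U = np.eye(n, dtype=complex)
        U[k:k + 2, k:k + 2] = u2
        return U

    # A small mesh; training would choose the phases so that the overall
    # unitary implements the desired weight matrix of the network layer.
    n = 4
    U = np.eye(n, dtype=complex)
    for k, theta, phi in [(0, 0.3, 0.1), (1, 1.2, 0.4), (2, 0.7, 0.9),
                          (0, 0.5, 0.2), (1, 0.9, 0.6)]:
        U = embed(mzi(theta, phi), k, n) @ U

    x = np.array([1.0, 0.5, 0.2, 0.0], dtype=complex)  # input amplitudes
    y = U @ x                  # the "matrix multiply", done by propagation
    print(np.round(np.abs(y) ** 2, 3))                 # detected intensities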

This system is still a long way from a photonic computer that could perform extremely fast speech recognition or solve even more complex problems. But Shen and colleagues believe it is possible to build artificial neural networks with about 1,000 neurons from their nanophotonic building blocks.

In contrast to the electronic circuits of conventional computers, the energy requirement could be reduced by up to two orders of magnitude. This approach is one of the most promising ways to compete, in the future, with the efficiency of living brains.

Monday 19 June 2017

Solar Paint Offers Endless Energy From Water Vapor

Solar Paint and Its Capability to Produce Fuels from Water Vapor


Researchers always tend to turn the whirlwind with their innovative research and invention. This time they have decided to bewilder the world with the most innovative research in terms of paint. We have heard about the use of solar energy to generate electricity, but this time the impact of solar power will be located in paints as well. The researchers have unveiled this new development (Solar Paint) which can be used as a measure to generate water vapor which would further split to provide hydrogen. This has left all the science Nazis with utmost eagerness to follow up this research as soon as possible.

The paint is appealing because it contains a moisture-absorbing compound that acts like silica gel, the desiccant commonly packed in sachets with food, medicine and other products to keep them dry and fresh. Alongside this gel-like material, the paint contains synthetic molybdenum sulphide, which acts as a semiconductor and catalyses the splitting of water molecules into hydrogen and oxygen.

Dr. Torben Daeneke, a researcher at RMIT University in Melbourne, Australia, explained that the team observed that adding titanium oxide particles to the compound produced a paint that absorbs sunlight and generates hydrogen from solar energy and moist air. Hence the name solar paint.

Titanium oxide is the white pigment already common in wall paints, which means that simply adding the new material can turn an ordinary brick wall into energy-harvesting, fuel-producing real estate.

Daeneke added that the invention has several advantages. Because the paint draws moisture directly from the atmosphere, the need for supplied water is greatly reduced. A colleague seconded him, noting that hydrogen is one of the cleanest forms of energy and could serve as a fuel both in fuel cells and in conventional combustion engines, as an alternative to fossil fuels.
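For a rough sense of scale (our own back-of-envelope arithmetic, not figures from the RMIT team), splitting water takes at least about 237 kJ per mole of hydrogen produced, so one can estimate the daily yield of a painted, sunlit surface under an assumed conversion efficiency:

```python
# Back-of-envelope estimate (our own assumptions, not RMIT figures): hydrogen
# yield of one square metre of painted, sunlit wall per day.

SOLAR_KWH_PER_M2_DAY = 5.0    # typical daily insolation, kWh/m^2 (assumed)
EFFICIENCY = 0.01             # assumed 1% sunlight-to-hydrogen conversion
DELTA_G_KJ_PER_MOL = 237.1    # minimum energy to split water, per mole of H2
H2_GRAMS_PER_MOL = 2.016

energy_kj = SOLAR_KWH_PER_M2_DAY * 3600 * EFFICIENCY   # kJ captured per m^2
mol_h2 = energy_kj / DELTA_G_KJ_PER_MOL
print(f"~{mol_h2 * H2_GRAMS_PER_MOL:.1f} g of H2 per m^2 per day")  # ~1.5 g
```

Even at a modest assumed efficiency, large painted surfaces could add up, which is why the team talks about walls as fuel-producing real estate.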

The invention can be used in all sorts of places, regardless of weather conditions: hot or cold climates, or locations near the ocean. The principle is simple: seawater evaporates under sunlight, and the resulting vapor can be captured and used to produce fuel. Given how useful solar paint could be in everyday life, its impact may soon be felt globally.

Thursday 15 June 2017

Novel Innovation Could Allow Bullets to Disintegrate After Designated Distance


Purdue University – Bullet to Be Non-Lethal

Bullets today are made from various materials chosen for the intended application, and they retain a significant portion of their energy after travelling hundreds or even thousands of meters. If the target is missed, this can lead to unwanted consequences such as unintended death or injury of bystanders, as well as collateral damage.

Stray-bullet shootings are an often-overlooked consequence of gunfire that can cause severe injury or even death to bystanders, or collateral-damage casualties in the military. Hence there is a need in the law-enforcement, military and civilian sectors for a safer bullet that would considerably reduce collateral damage and injury.

Technology that could prevent these occurrences has been created at Purdue University. A research group headed by Ernesto Marinero, a professor of materials engineering and of electrical and computer engineering, has designed novel materials and a fabrication process that make a bullet non-lethal by causing it to disintegrate after a selected distance.

This technology grew out of that need for a safer bullet across the law-enforcement, civilian and military sectors. Conventional bullets retain a substantial percentage of their energy after travelling hundreds or even thousands of meters.

Combination of Stopping Power of Standard Bullets/Restriction of Range

The newly developed Purdue innovation causes the bullet to break apart after a predetermined interval, owing to the heat generated at firing combined with air drag and an internal heating component. The heat conducts through the entire bullet, melting a low-temperature binder material; under drag forces, the weakened bullet then breaks down.
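To make the mechanism concrete, here is a toy physics sketch (our own illustration, not Purdue's actual design): drag work heats the bullet in flight until an assumed binder melting point is reached. Every number below is a placeholder assumption, not a published specification.

```python
# Toy model: distance at which drag heating melts a low-temperature binder.
# All parameters are assumed placeholders, not Purdue specifications.

m = 0.008        # bullet mass, kg (assumed)
v, x, dt = 900.0, 0.0, 1e-4       # muzzle velocity (assumed), distance, step
c = 0.5 * 1.225 * 0.3 * 5.0e-5    # 0.5 * air density * drag coeff * area
c_heat = 500.0   # effective specific heat of the bullet, J/(kg*K) (assumed)
frac = 0.15      # fraction of drag work absorbed as bullet heat (assumed)
T, T_melt = 400.0, 450.0          # temp after firing / binder melt, K (assumed)

while T < T_melt and v > 50.0:
    drag = c * v * v                          # drag force, N
    T += frac * drag * v * dt / (m * c_heat)  # heating from drag power
    v -= (drag / m) * dt                      # deceleration from drag
    x += v * dt

if T >= T_melt:
    print(f"binder melts after ~{x:.0f} m (v = {v:.0f} m/s)")  # ~230 m here
else:
    print("binder never melts under these assumptions")
```

Tuning the binder's melting point or the internal heating component would shift the breakup distance, which is presumably how a designer would select the bullet's effective range.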

The technology is said to combine the stopping power of standard bullets and the shrapnel-eliminating benefits of frangible bullets with a restriction of range that reduces injury or death among bystanders. The Office of Technology Commercialization of the Purdue Research Foundation has patented the technology, which is available for license.

Garen Wintemute, a professor of emergency medicine and director of the Violence Prevention Research Program at the UC Davis School of Medicine and Medical Center, commented that stray-bullet shootings give rise to fear and insecurity: people remain indoors and stop their children from playing out in the open, changing their daily routines to avoid being struck by a bullet intended for someone else.

No Research Exploring the Epidemiology of These Shootings

However, no research had been done at the national level to explore the epidemiology of these shootings, and such information is essential for identifying preventive measures. He further added that stray-bullet shootings are mostly a side effect of intentional violence, what is commonly known as collateral damage.

Those who get shot have little or no warning, and opportunities to take protective measures once shooting starts are limited. Unless we intend to bulletproof entire communities and their residents, we will only be able to prevent these shootings to the extent that we can prevent firearm violence.

Tuesday 13 June 2017

Fast and Direct Vehicle Charging with Solar Energy

High Speed Charger – Cars Charged From Solar Panels

A solar vehicle is an electric vehicle driven entirely or substantially by direct solar energy: the photovoltaic (PV) cells of a solar panel convert the sun's energy directly into electric energy. Solar power can also supply communications, controls and other auxiliary functions.

At the moment, solar vehicles are not sold as regular everyday transport but mostly as demonstration vehicles and engineering exercises, frequently sponsored by government agencies. Indirectly solar-charged vehicles, however, are widespread, and solar boats are now commercially available. Solar cars rely on PV cells to convert sunlight into the electricity that drives their motors.

Whereas solar thermal systems convert solar energy into heat, PV cells convert sunlight directly into electricity. Electric cars can only be considered sustainable if they are charged with sustainably generated power. TU Delft, in association with the company Power Research Electronics, has developed a high-speed charger that allows cars to be charged directly from solar panels.

Vehicle-to-Grid System

This means the electricity grid is not needed as an intermediate stage for charging. The system also works in both directions: the electric car can be charged with solar power, and the energy in the car battery can in turn supply the house with electricity.

A system for charging electric vehicles directly with power from solar panels was demonstrated at the TU Delft Research Exhibition on 6 and 7 June 2017. Prof. Pavol Bauer of TU Delft explained that solar panels generate direct current, which normally has to be converted to alternating current before it can be used to charge an electric car.

With the 10 kW charger, that conversion is no longer necessary, which marks a major step forward. He added that in the Vehicle-to-Grid system, power can flow in either direction, depending on what is preferred at the time.

Possibility of Predicting Supply of Solar Energy

This means a home can be supplied with power from the car battery. It is also possible to deliver electricity back to the grid, though that does require conversion to alternating current.

The system can be scaled up by connecting multiple chargers and solar panels, and in the near future it could supply business parks and residential districts with electricity in a new way, partially independent of the electricity grid.

According to Bauer, another major benefit is that the system is smart: it takes into account information and forecasts about the current state of the electricity market, and the supply of solar energy can itself be predicted to some extent.

When solar power is oversupplied, the price falls, and vice versa; based on this, the smartest energy consumption plan can be chosen. Twenty high-speed chargers of this kind have been produced by Power Research Electronics, and the company expects the system to be available towards the end of this year.
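A minimal sketch of what such price-aware charging logic might look like (our own hypothetical controller, not the actual TU Delft control strategy; the thresholds and decision rule are illustrative assumptions):

```python
# Hypothetical sketch of a price-aware Vehicle-to-Grid controller.
# Thresholds and the decision rule are illustrative assumptions,
# not the actual TU Delft control strategy.

def decide_power_flow(solar_kw: float, price_eur_kwh: float,
                      battery_soc: float) -> str:
    """Return 'charge', 'discharge' or 'idle' for the next interval."""
    CHEAP, EXPENSIVE = 0.10, 0.25   # assumed price thresholds, EUR/kWh
    if solar_kw > 5.0 or price_eur_kwh < CHEAP:
        # Plenty of sun or cheap grid power: store energy in the car.
        return "charge" if battery_soc < 0.95 else "idle"
    if price_eur_kwh > EXPENSIVE and battery_soc > 0.40:
        # Expensive power and a comfortable state of charge: feed the house.
        return "discharge"
    return "idle"

# Sunny midday with cheap power: the controller chooses to charge.
print(decide_power_flow(solar_kw=8.2, price_eur_kwh=0.08, battery_soc=0.6))
```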

Thursday 8 June 2017

3D Printed Turbine Blades a 'Breakthrough', Says Siemens

SIEMENS MADE THE HEADLINES WITH 3D-PRINTED TURBINE BLADES

The tech world is full of competitors, and the big companies have taken rivalry to another level. We constantly witness a tussle among the top-rated firms, and as ordinary consumers we are the ones who benefit. Thanks to the advanced technologies prevailing in the world, we lead more comfortable lives, and once things come to us easily, we rarely ask where they come from or how they are made.

We simply enjoy the flow without knowing the nitty-gritty. Tech companies are always racing to develop new gadgets and designs for the betterment of our lives. Recently, Siemens made a breakthrough in printing technology by successfully testing 3D-printed turbine blades.

THE BLADES

The pioneering German engineering group Siemens achieved the breakthrough by testing 3D-printed turbine blades under full engine conditions. The 3D-printing capability is new to Siemens, acquired through its purchase of the firm Materials Solutions. The blades rotate at 13,000 revolutions per minute at temperatures above 1,250 degrees Celsius, which is certainly incredible. Although each blade weighs around 180 g, at full rotational speed it bears a load equivalent to more than 11 tonnes.
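A quick back-of-envelope check (our own arithmetic, not a Siemens figure) shows how a 180 g blade can carry an 11-tonne load at 13,000 rpm; the effective radius of the blade's centre of mass is an assumed value chosen for illustration.

```python
# Centrifugal load on a 180 g blade at 13,000 rpm. The effective radius
# of the blade's centre of mass is an assumption for illustration.

import math

m = 0.180                       # blade mass, kg
rpm = 13_000
omega = rpm * 2 * math.pi / 60  # angular velocity, rad/s
r = 0.33                        # assumed effective radius, m

force_n = m * omega**2 * r      # centripetal force, newtons
print(f"load ~ {force_n / 9.81 / 1000:.1f} tonnes-force")  # ~11 tonnes
```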

THE FACTS

Additive manufacturing is another term for 3D printing: the part is built up by adding very thin layers of material one after another. The turbine blades were made with a special 3D-printing technique that allows an improved internal cooling geometry. A fine metal powder is spread in a layer, and a laser does the rest, melting it selectively before the next layer is added.

The parts can be designed using CAD software such as AutoCAD, which is in high demand for design work. What is still unclear is how long the 3D-printed turbine blades will take to reach the market and become available widely; the testing period is expected to be fairly long.

Once available, the blades are likely to prove very popular. Engineers are working to bring them to the commercial market: tests are ongoing, and the durability of the material must be verified so that the blades perform reliably once deployed. The 3D-printed turbine blades will need to be hard and durable to make an impact against established turbine-blade suppliers.

THE MATERIAL

The material for such blades is conventionally processed by casting, one of the best-established technologies for these alloys. The drawback is the time the process takes. Additive manufacturing shortens it considerably, cutting the lead time down to about three months.

Tuesday 30 May 2017

Researchers Engineer Concocted Shape Shifting Noodles

Even your grannie could have fun with the noodles and spaghetti that MIT researchers have invented. These noodles offer far more entertainment than ordinary pasta ever could. What is so special about these edible films? The MIT team has made the dining experience more interactive and a lot of fun: just add water, and the films transform their shapes.

MIT's Tangible Media Group has created something akin to edible origami: flat sheets of starch and gelatin. Immersed in water, they immediately spring into 3D formations, including regular pasta shapes such as macaroni and rotini. These edible three-dimensional structures can also be engineered to fold into flowers and other irregular designs. To explore the culinary possibilities, the MIT researchers made flat discs that wrap around caviar, akin to cannoli, and spaghetti that spontaneously divides into thinner noodles when steeped in hot broth. They presented their work at the Association for Computing Machinery's 2017 Conference on Human Factors in Computing Systems.

The MIT team notes that these edible 3D structures are not only culinary showpieces but also a practical way to cut food-shipping costs.

The flat films could be stacked, shipped to customers and transformed into their final shape later, when immersed in water. Even in a perfectly packed box of conventional pasta, 67 percent of the volume remains empty space, says the paper's co-author.

Programmable pasta

MIT researchers Wang and Yao had been studying how various materials respond to moisture. They worked mainly with a bacterium that changes form, shrinking and swelling in response to humidity; coincidentally, the same bacterium is used to ferment soybeans for the traditional Japanese dish natto. They then turned to gelatin, which naturally expands as it absorbs water.

Gelatin expands to different degrees depending on its density. The team then worked out how to control the bending of the material to create a variety of 3D shape-changing gelatin sheets. The sheets were overlaid with strips of cellulose, which acts as a water barrier and limits how much water the gelatin beneath can absorb. By printing cellulose onto the gelatin sheets in chosen patterns, the team could predictably control the shape's response to water and the final form it takes.
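A toy illustration of the idea (our own first-order model, not the MIT group's code): where cellulose is printed on one face, that face cannot swell, so the strain mismatch across the sheet's thickness makes the cell curl toward the printed side. The swelling strain and thickness below are assumed values.

```python
# Toy bilayer model: cells with cellulose printed on top curl toward the
# printed side; bare cells swell uniformly and stay flat. The swelling
# strain and sheet thickness are assumed illustrative values.

import numpy as np

swell_strain = 0.6      # assumed in-plane swelling strain of bare gelatin
thickness = 1.0e-3      # sheet thickness, m (assumed)

# 1 = cellulose printed on the top face of that cell, 0 = bare gelatin
pattern = np.array([0, 1, 1, 0, 1, 0])

# First-order bilayer estimate: curvature ~ strain mismatch / thickness.
curvature = pattern * swell_strain / thickness   # 1/m, per cell
print("curvature per cell (1/m):", curvature)
print("bend radius where printed (mm):",
      1000 * thickness / swell_strain)           # ~1.7 mm here
```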

Designing for a noodle democracy

Wang and Yao created a range of structures from the gelatin sheets, from macaroni-like designs to shapes resembling flowers and horse saddles. The team showed their newly invented edibles to the head chef of a high-end Boston restaurant, and together the two parties designed several culinary creations.

They catalogued the cellulose patterns and the properties of the structures they were able to make, tested characteristics such as strength, and compiled it all into a database. The team used a lab 3D printer to deposit cellulose onto the gelatin films, but they have also outlined ways to produce similar effects with more common methods such as screen printing. In the future, an online tool could generate design instructions, and a startup could ship the materials to you. They want to change the way noodles are designed.

Sunday 28 May 2017

Construction Begins on Google’s Massive Seattle campus in Amazon’s Backyard

Google’s New Campus – Developed by Paul Allen’s Vulcan Real Estate

Work has begun on Google’s new campus in Seattle’s South Lake Union neighbourhood, on the doorstep of Amazon’s global headquarters. The project is being developed by Paul Allen’s Vulcan Real Estate; it spans two whole blocks and will comprise around 600,000 square feet of office space, together with a residential tower on each block holding a combined 149 units, according to Lori Mason Curran, investment strategy director at Vulcan Real Estate.

Work started earlier this month on the project, which was announced last year and is due for completion in early 2019. Vulcan also owns an adjacent block that is not part of the Google project; according to Mason Curran, that site is slated for 216 residential units and 161,000 square feet of office space, and Vulcan will not begin building there until it lands a tenant.

Amazon has long been the main player in South Lake Union. However, Google’s entry into the neighbourhood, together with Facebook’s rapidly growing presence down the road, should provide plenty of competition for top talent.

Construction – In Phases from 2017 to 2019

Google’s present Seattle-area footprint comprises offices in the Fremont neighbourhood and the Eastside suburb of Kirkland, Wash. It is unclear what Google intends to do with its Fremont space once the new Seattle campus is ready. For now, though, it is still investing in the neighbourhood: earlier this year Google signed a deal to sublease the former Sound Mind & Body Gym, a space Tableau converted in 2014 into around 50,000 square feet of tech offices, for its engineering team.

Construction will proceed in phases from 2017 to 2019. According to a Vulcan official, Google has agreed to leases lasting 14 to 16 years, a fresh sign of its long-term commitment to the Seattle region. Combined with Google’s recently expanded offices in Kirkland, Wash., the latest space will bring the technology giant close to 1 million square feet in the area. Google currently has more than 1,900 employees in Washington State, in offices in Seattle and Kirkland.

South Lake Union – Thriving Hub

Google will be moving from its present location in Seattle’s Fremont neighbourhood, where space has grown constrained in recent years amid the rapid expansion of Tableau Software and other technology companies. Clyde McQueen, site lead for Google Seattle, said in a news release that South Lake Union is a thriving hub and that the company is excited about the new space: Google loved being in the Fremont neighbourhood but needs more breathing room.

The company also looks forward to catching a view of Lake Union from the new location. Beyond giving Google a bigger footprint in the city, the news is notable because Amazon is not the one taking the buildings, even though Vulcan developed the online retailer’s existing campus in South Lake Union. Amazon has focused its expansion further south in the neighbourhood, where it is rolling out its own three-block campus on the northern edge of downtown Seattle.

Saturday 27 May 2017

What Is Spectrum, and Why Is It a Big Deal for Cell Phone Companies?

Spectrum Best Opportunities for Wireless Companies

The most recent government action on radio airwaves will affect the future of the wireless industry and its services, so the updates are worth following. The Federal Communications Commission’s Incentive Auction recently began, taking valuable spectrum away from TV broadcasters to sell to companies that will offer wireless service.

This particular band of spectrum is extremely highly valued; industry insiders call it `beachfront property’ because it operates at lower frequencies, meaning signals can travel greater distances and pass through walls for better coverage. That is why the auction deserves attention: it represents one of the best opportunities for wireless companies to acquire more spectrum, the medium that carries everything from videos to work emails to your phone.

Moreover, it could reshape the wireless industry, giving smaller carriers an opportunity to offer the same robust nationwide coverage that Verizon Wireless and AT&T provide; between them, those two control over 70% of the US wireless market. Beyond the big two, T-Mobile, satellite TV provider Dish Network and cable giant Comcast are other large players intending to join the auction.

Spectrum – Range of Radio Frequencies

Spectrum is the range of radio frequencies used to transmit sound, data and video to TVs and smartphones. These are the airwaves that ignited Beatlemania in December 1963, when radio stations started playing `I Want to Hold Your Hand’.

Today it delivers `Game of Thrones’ to your phone through the HBO Go app. Extra spectrum means faster, more reliable wireless service. Spectrum is a limited resource, controlled for the most part by the US government. Companies can obtain it by participating in auctions, acquiring a company with spectrum holdings, or purchasing licenses from one another on a secondary market.

Additional spectrum is essential to manage the ever-increasing volume of data traffic created by phones, tablets, cars and other gadgets and machines. The 600 MHz band, the spectrum on the block in this auction, has traditionally been used to transmit TV signals, and this is probably the last time the government will be able to auction off such spectrum.

Lack of Significant Low-Band Spectrum

Low-band spectrum works consistently indoors and over great distances, and can help carriers keep up with customers’ growing demands for coverage.
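A rough illustration of why lower frequencies carry farther (standard radio physics, our own example rather than FCC data): free-space path loss grows with frequency, so a 600 MHz signal arrives stronger than a mid-band signal over the same distance.

```python
# Free-space path loss comparison: 600 MHz low-band vs. a 2.5 GHz mid-band
# signal over the same distance. The 5 km distance is an example value.

import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (d in km, f in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

d = 5.0  # example distance, km
for f in (600, 2500):
    print(f"{f} MHz over {d} km: {fspl_db(d, f):.1f} dB loss")
# The ~12 dB difference means the 600 MHz signal arrives roughly 17 times
# stronger, before even counting its better penetration through walls.
```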

The two big winners of the last low-band auction in 2008, AT&T and Verizon, built the foundation of their 4G LTE networks on low-band 700 MHz spectrum, another sliver of airwaves once used for TV broadcasting. T-Mobile, the nation’s third-largest wireless provider, has been trying to assemble a similar set of assets for its own network, but for the most part it lacks a significant volume of low-band spectrum.

The Incentive Auction is actually two auctions in one. A reverse auction lets TV broadcasters sell their airwaves back to the government; a forward auction follows in the next few months, in which the government sells those same airwaves to wireless companies. Broadcasters get a cut of the proceeds in exchange for giving up their spectrum.

Largest & Most Complex Auction

This is said to be the largest and most complex auction the FCC has ever run. Dish Network, the satellite TV provider, is expected to bid, though it is not certain whether the company will be as aggressive as it was in last year’s AWS-3 auction, when it paid $10 billion.

Estimates suggested the auction could generate around $60 billion, though several in the industry considered that figure unrealistic given the carriers’ constrained budgets. Eight years earlier, the 700 MHz auction raised $19.6 billion, and last year’s AWS-3 auction collected a record-setting $45 billion. AT&T, Verizon and T-Mobile will probably purchase the bulk of the available spectrum, and the FCC is aware of how critical low-band spectrum is to building a company that could challenge AT&T and Verizon.

The agency has set aside a sliver of spectrum specifically for smaller players, such as T-Mobile and some rural operators, to bid on without facing the deep pockets of AT&T and Verizon. Dish, which has been steadily building up a war chest of spectrum, is another wild card in this auction. It is not certain what Dish intends to do with the spectrum it already owns or the licenses it will probably pick up here.

T-Mobile – A Tougher Alternative to AT&T

The company may look for a partner, or it may develop a fixed wireless service that would serve as an alternative broadband connection to DSL or cable. Comcast is also expected to bid, competing with T-Mobile for the spectrum reserved for smaller operators. The provider has built an extensive wireless network using Wi-Fi technology and could ultimately partner with a cellular operator to build a network rivalling the traditional wireless carriers.

Comcast has given no indication of how it intends to use the spectrum. It could use the licenses to build its own network, or it could hold them as a bargaining chip when dealing with traditional carriers. Notably absent from the auction is Sprint, the fourth-ranked carrier. Depending on how the auction turns out, T-Mobile will probably emerge as a big winner.

For the average consumer, that means T-Mobile could become a tougher alternative to AT&T and Verizon. T-Mobile’s signal has traditionally been weak beyond major cities, but spectrum from this auction would help it cover additional regions of the US, particularly suburbs and rural areas.