
Thursday 10 February 2022

What is a Condenser Microphone?

Want to know what a condenser microphone is? For studio recording applications, condenser mics are mostly chosen for their sensitivity and fidelity. Compared to dynamic mics, a condenser microphone generally offers a wider frequency response range and higher sensitivity, meaning it picks up quiet, detailed signals more readily. In this article, we will tell you about them in detail.

What is a Microphone?

A microphone, colloquially called a mic, is a transducer that converts sound into an electrical signal. People use mics in PCs for voice recording, speech recognition, VoIP, and more. They are also used for non-acoustic purposes such as ultrasonic or knock sensors.

What are the different types of microphones?

The various types of mic are as follows:

  • Liquid Microphone 
  • Carbon Microphone 
  • Fiber Optic Microphone 
  • Dynamic Microphone 
  • Electret Microphone 
  • Ribbon Microphone 
  • Laser Microphone 
  • Condenser Microphone 
  • Microelectromechanical Microphone 
  • Crystal Microphone 

What is a condenser microphone?

The name comes from "condenser," an older term for "capacitor." A condenser microphone converts acoustic energy into an electrical signal electrostatically: the diaphragm forms one plate of a fixed-charge capacitor inside the capsule, and sound moves it.
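As a rough numerical sketch of that electrostatic principle (the diaphragm area, gap, and charge below are invented for illustration, not real capsule specs):

```python
# Illustrative sketch (not real mic specs): a condenser capsule modelled
# as a parallel-plate capacitor carrying a fixed charge. As sound moves
# the diaphragm, the plate gap d changes; with Q fixed, the output
# voltage V = Q/C = Q*d / (eps0*A) tracks the gap directly.

EPS0 = 8.854e-12   # vacuum permittivity, F/m
AREA = 2.0e-4      # assumed diaphragm area, m^2
Q = 1.0e-9         # assumed fixed charge on the capsule, C

def output_voltage(gap_m):
    """Voltage across a fixed-charge capsule for a given plate gap."""
    capacitance = EPS0 * AREA / gap_m
    return Q / capacitance

rest = output_voltage(25e-6)     # diaphragm at rest, 25 um gap
pushed = output_voltage(24e-6)   # sound pressure narrows the gap
print(f"at rest: {rest:.2f} V")
print(f"pushed:  {pushed:.2f} V")  # smaller gap -> lower voltage
```

The key point: with the charge fixed, the voltage tracks the gap, so the diaphragm's motion becomes the audio signal directly.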

How it differs from other types:

Before choosing, you should know what you want to use the mic for. Whether you are looking for a live performance mic or a studio mic, take your time selecting. First determine whether you need an instrument mic, a drum mic, or a vocal mic.

Condenser Microphones:

These are especially useful for capturing vocals and high frequencies, and in most cases people prefer them for studio applications. These capacitor mics are used in studios for their detail and accuracy. They consist of a lightweight diaphragm suspended in front of a fixed backplate; sound waves create pressure against the diaphragm, causing it to move.

Because they have a thin diaphragm and increased sensitivity, people prefer them for picking up delicate sounds. They require a power source, usually 48 V phantom power from a mixer or interface, though some models can run on a 9 V battery instead.

They are ideal for capturing acoustic guitars, drum overheads, or vocals. People avoid them for louder sources like a guitar or bass amplifier; in such cases, a dynamic microphone is the better choice.

Dynamic Microphones:

This type of microphone is perfect for booming sounds and louder environments. People prefer dynamic microphones for live use because they can handle loud sounds and have decreased sensitivity.

In a dynamic mic, a wire coil attached to the diaphragm generates the signal as it moves in a magnetic field. This passive design is the reason the output of a condenser microphone is greater than that of a dynamic mic.

Dynamic ones are ideal for live sound because they are incredibly tough. If you drop a dynamic mic by mistake, the probability of damaging it is lower than with a condenser mic.

This type of mic doesn't require batteries or phantom power. Besides, a dynamic mic costs less than a condenser. It also needs very little maintenance; if you practice a reasonable level of care, it will maintain its performance.

Pros and Cons of Condenser microphone:

Benefits:

The advantages of a condenser microphone are:

  • It is available in small sizes. 
  • It has a flat frequency response. 
  • It weighs less than a dynamic mic thanks to its lighter diaphragm assembly. 
  • Its quick-moving diaphragm handles a wide range of frequencies. 
  • It is perfect for capturing instruments and vocals. 
  • It provides high sensitivity.

Drawbacks:

The disadvantages of a condenser microphone are:

  • It needs a voltage supply to operate. 
  • It can only handle a limited maximum input signal level. 
  • It is harder to use than a dynamic microphone. 
  • It is more affected by extreme temperature and humidity than a dynamic mic. 
  • It costs much more than a dynamic microphone. 
  • Cheaper models add a small amount of self-noise.

Applications:

The applications of the condenser mic are as follows:

  • Studio and voiceover vocal mics, e.g. large-diaphragm FET and tube mics 
  • Instrument mics 
  • Wireless lavalier mics, mainly miniature pre-polarized FET mics 
  • Consumer devices requiring mics 
  • Shotgun mics in film and video

Side Address vs. End Address:

Many large-diaphragm mics are designed to pick up sound from one or both sides of the body rather than from the end of the mic.

There are also many small-diaphragm mics with a pencil shape. These pick up sound from the end rather than the side, so they are "end-address" rather than "side-address." Both types are handy. Knowing the location of your mic's focal point helps you visualize the polar pattern better and aim at the audio source more precisely.

Polar Patterns:

A polar pattern describes the three-dimensional region around the mic capsule in which the capsule is sensitive to sound. Cardioid is the most commonly used microphone polar pattern.

There are a few polar patterns available in condenser microphones:

Hypercardioid:

This type of microphone is more directional than a cardioid because it is less sensitive at the sides of the pickup pattern, though it picks up a little audio from the rear.

Supercardioid:

These mics are slightly less directional than the hypercardioids above, but they provide a slightly smaller rear lobe.

Omnidirectional:

This type of microphone picks up audio equally in all directions, which makes the pattern perfect for measurement mics.

Bidirectional: 

These mics are also called "figure-eight." They pick up audio from both the front and the back.
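The patterns above are usually modelled with the first-order formula g(theta) = a + (1 - a) * cos(theta). A quick sketch using the standard textbook coefficients (approximate values, not any particular mic's spec):

```python
import math

# First-order polar patterns: g(theta) = a + (1 - a) * cos(theta),
# where a is the omnidirectional share of the pattern. Coefficients
# are standard textbook approximations, not real product specs.
PATTERNS = {
    "omnidirectional": 1.0,
    "cardioid": 0.5,
    "supercardioid": 0.37,
    "hypercardioid": 0.25,
    "figure-eight": 0.0,
}

def gain(pattern, theta_deg):
    """Relative sensitivity at an angle theta (degrees) off-axis."""
    a = PATTERNS[pattern]
    return a + (1.0 - a) * math.cos(math.radians(theta_deg))

for name in PATTERNS:
    front, side, rear = (gain(name, t) for t in (0, 90, 180))
    print(f"{name:16s} front={front:+.2f} side={side:+.2f} rear={rear:+.2f}")
```

Note how the cardioid's rear gain is exactly zero, while the hypercardioid's rear lobe is larger than the supercardioid's, matching the descriptions above.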

Conclusion:

In this article, we have explained what a condenser microphone is. Condenser mics are available at a wide range of prices; for top-quality models you may need to invest thousands of dollars. The essential thing to remember is to know your recording needs. Used properly, a condenser mic offers high-quality results with very little effort.

Saturday 29 January 2022

Bose Quietcomfort Earbuds: The Best ANC Earbuds

The Bose Quietcomfort Earbuds are equipped with Acoustic Noise Canceling™ technology. Hidden microphones monitor the environmental sounds, and the earbuds generate the opposite signal to cancel that noise. It has taken the manufacturer decades of work to deliver this in a product you can benefit from instantly. The earbuds are designed around what Bose calls the world's most effective noise canceling and high-fidelity audio, and StayHear™ Max tips provide additional comfort. As soon as you remove distractions, the music takes center stage. From roller skating and street art to woodworking, you can hear every sound very clearly.
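The cancel-with-the-opposite-signal idea can be sketched in a few lines. The noise waveform below is made up, and real ANC must also handle latency and ear acoustics, which this toy model ignores:

```python
import math

# Toy model of active noise cancellation: the mics sample the ambient
# noise and the driver plays the same waveform inverted; the two sum
# to (near) silence at the ear.

def ambient_noise(t):
    """Assumed ambient noise: a 100 Hz hum plus a 300 Hz overtone."""
    return (0.8 * math.sin(2 * math.pi * 100 * t)
            + 0.3 * math.sin(2 * math.pi * 300 * t))

def anti_noise(t):
    """The cancelling signal is the noise with inverted phase."""
    return -ambient_noise(t)

# Sum noise and anti-noise over 10 ms of 48 kHz samples.
samples = [ambient_noise(t / 48000) + anti_noise(t / 48000)
           for t in range(480)]
residual = max(abs(s) for s in samples)
print(f"peak residual at the ear: {residual:.6f}")
```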

Bose Quietcomfort Earbuds Noise-Canceling Technology

Bose claims the experience is not available in any other wireless earbuds. The product comes in two new fashion-forward colors, Stone Blue and Sandstone, that take the earbuds to another level.

With controllable noise-canceling technology, it removes distractions as required. The sound can make you feel as if you are not wearing earbuds at all. These wireless earbuds feature exclusive Volume-optimized Active EQ that offers high-fidelity audio. 

The sound stays full and balanced at any volume. Every surface of the product consists of soft silicone, letting you wear the earbuds throughout the day. Instead of buttons, the product has a capacitive touch interface: you just tap the surface to control content, as you do on your phone.

As soon as you take an earbud out of your ear, the music, podcast, or video running on your mobile stops automatically, so you won't miss a second. The product is IPX4 rated, protecting it from sweat, water, and the weather. The QuietComfort Earbuds support iOS and Android devices and use Bluetooth 5.1, which offers a strong and reliable wireless signal. 

The signal is available within 30 feet of the paired device. Charge the earbuds once and you get up to six hours of playback time, and the wireless charging case offers a maximum of 12 more hours of listening time. You can use the charging case with any Qi-certified wireless charging mat. The product also comes in Triple Black or Soapstone, in a compact design.

Bose Quietcomfort Earbuds Features:

Play, store, charge:

You can listen for a maximum of 360 minutes on one charge. When the earbuds run down, pop them back in the case, which holds an additional two full charges. Recharge the charging case over USB-C or with any Qi-certified wireless charging mat.

IPX4 Rating:

The device carries an IPX4 rating, making it very durable. Its special design includes an acoustic mesh that keeps moisture and debris out of the product.

Seamless, And Reliable Connectivity:

The earbuds begin connecting to your device the moment you take them out of the case. Besides, Bluetooth® 5.1 lets you enjoy a consistent signal within thirty feet.

Clearer Conversations:

The mics are small and hidden in the earbuds. They work together to reject noise, ensuring clear sound whether you are outside or taking calls in a busy home. Their main function is to focus on the sound of your voice while filtering out most of the sounds and wind around you.

Tap, Touch, Swipe Controls:

Seamless capacitive gestures let you control the product with ease: adjust the volume, flip between noise canceling and Aware Mode, and answer calls, all with easy swipes and taps. When you finish listening, just take the earbuds out; they sense the movement and pause the music automatically.

Stay Connected:

The Bose Quietcomfort Earbuds remember the previous seven devices they were connected to, so you can swap between them easily without reconnecting. Use the Bluetooth button to scroll through your paired devices.

Comfortable:

The surface is soft silicone, not hard plastic. StayHear™ Max tips, available in three sizes, keep the product in place while you move. An umbrella-shaped nozzle spreads pressure across the whole ear, offering stability, comfort, and passive noise blocking, and an extended fin tucks into the ridge of your ear for extra security.

High-Fidelity Audio:

The Bose Quietcomfort Earbuds come with impressive innovations and exclusive acoustic architecture that let you hear your favorite things in high-fidelity audio, no matter how soft or loud you play them. Normally, bass drops away as you decrease the volume, leaving the music sounding small. The Volume-optimized Active EQ prevents this by boosting highs and lows to keep the balance consistent.

From Quiet to Aware:

When you switch the device to Quiet Mode, the Acoustic Noise Canceling™ technology blocks out the sound around you, letting you hear more precisely and feel every detail of the music. Sometimes, though, it is vital to hear your surroundings. Then you can switch quickly to Aware Mode by tapping twice on the left earbud.

Activesense™ Technology:

The product's Aware Mode feature uses ActiveSense technology, which lets you listen to your music and your surroundings clearly at the same time. When the surrounding sound gets very loud, it adjusts the amount of noise cancellation so you hear everything at a pleasant, balanced volume.

Support iOS and Android:

It offers easy set-up as well as custom settings via the Bose Music app.

Pros:

  • Best noise cancellation 
  • Awesome sound quality 
  • Easy to use 
  • Comfortable 

Cons:

  • Big carrying case

Friday 14 January 2022

What is Wi-Fi 6E?


Nowadays, Wi-Fi 6 hardware is commonplace, so many users already have a Wi-Fi 6 network. But people now want something new: Wi-Fi 6E, which can decrease congestion.

The Federal Communications Commission authorized unlicensed use of the 6 GHz band on April 23, 2020, and some 6E hardware devices reached the market in 2021. Not all countries have made the same decision, though, which is why 6E faces regulatory barriers in some nations.

What is Wi-Fi 6E?

Wi-Fi 6E is a standard that operates on the 6 GHz band, whereas earlier generations use the 2.4 GHz and 5 GHz radio bands.

Wi-Fi 6E over the 6 GHz spectrum works much like Wi-Fi 6 over 5 GHz. The difference is that the newer standard provides extra non-overlapping channels: per the Wi-Fi Alliance, "14 additional 80 MHz channels and seven additional 160 MHz channels." Because the channels don't overlap with each other, congestion decreases.
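As a back-of-the-envelope check on those figures (assuming the roughly 1200 MHz US allocation from 5925 to 7125 MHz; exact band edges and guard spacing are regulatory details this sketch skips):

```python
# Sanity-check the Wi-Fi Alliance channel counts quoted above against
# the assumed 6 GHz band span. Non-overlapping channels must fit
# inside the band.
BAND_MHZ = 7125 - 5925  # assumed US 6 GHz allocation, MHz

for width, count in ((80, 14), (160, 7)):
    used = width * count
    print(f"{count} x {width} MHz = {used} MHz of {BAND_MHZ} MHz")
    assert used <= BAND_MHZ  # the channels fit without overlapping
```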

Only Wi-Fi 6 devices can use the 6 GHz spectrum; no legacy devices using Wi-Fi 5 (802.11ac) or older standards are allowed on the band. Every device on the 6 GHz channels therefore speaks the same language and can use Wi-Fi 6's new congestion-busting features.

What are the benefits of Wi-Fi 6E?

The 6 GHz band provides more than 1 Gbps internet speed. This increased spectrum also means lower latency: when you play online games, make video calls, or run virtual computing sessions, you want keyboard presses, voices, and mouse clicks to get a response with almost no perceptible delay.

If you have quicker connectivity, you can experience the benefits, especially for home network capacity. Whole-home gigabit coverage and multi-gigabit capacity indicate that normal users get the chance finally to enjoy next-generation computing experiences. This network allows you to enjoy virtual reality gaming in your home. Besides, you can even participate in augmented reality business presentations while other family members watch Netflix or YouTube. You won't see any lack of bandwidth here.

How to Get Wi-Fi 6E:

It may soon feel like must-have technology. Some nations, like the US, Brazil, and Korea, have already opened up the spectrum, but many countries haven't opened it for commercial use yet.

6E devices support the previous versions, but to use the new 6 GHz channels you need both a 6E router and 6E client devices: computers, phones, smart home devices, and so on. Even if you have a new Wi-Fi 6 router, you would still have to upgrade to a 6E model.

Should You Upgrade to Wi-Fi 6E?

For most people, the answer is no. 6E routers have only just hit the market, and prices will decrease in the forthcoming year. So far, only a limited range of 6E-compatible devices can be attached to such a router.

It also isn't much help if you want to extend work-from-home coverage to your backyard, since the 6 GHz band has a shorter range. Still, if you are searching for a new router, you may want something future-proof.

Over 6 GHz Needs New Devices:

Suppose you have a 6E-enabled router but are using multiple Wi-Fi 6 devices. It doesn't matter: none of those devices will communicate over 6E. They will use Wi-Fi 6 on the 5 GHz or 2.4 GHz channels instead.

The Wi-Fi Alliance's pre-CES 2020 announcement acknowledged 6E, calling the 6 GHz band "an important portion of unlicensed spectrum that may soon be made available by regulators worldwide."

When Will the Wi-Fi 6E Hardware be Available?

The hardware became more common at the beginning of this year. By the last quarter of 2021, you could already purchase Asus-brand 6E routers and mesh networking systems.

Netgear, TP-Link, and other manufacturers have announced more 6E routers. The hardware is supported by Android devices such as the Samsung Galaxy S21 Ultra and Google Pixel 6; however, no Apple device is compatible with it yet.

Intel is now promoting its 6E hardware, known as "GIG+," and including the feature across its devices. As a result, most Intel-powered laptops can use 6E.

When it comes to technology, there is always something new, and 6E is the latest thing in Wi-Fi. Keep in mind that it is not primarily about raw network speed.

Its real benefits are less wireless congestion and longer battery life. The latest hardware is not widespread yet, either. You can buy a 6E router to future-proof your network, but most wireless devices cannot take advantage of it yet.

Tuesday 29 January 2019

Tips to Convert Videos to the Best Format

Do you want to convert your video to the ‘best’ format for a particular device or platform? On the surface that may sound easy, but how do you identify the best format – and what should you look for?
While there are several ways that you could find the best format and convert your videos to it, these tips should help make it a lot easier:

  • Make sure the format has hardware support

    For a video to be played, the format needs to be supported so that the device or platform it is viewed on is able to decode it. However that decoding can take place either using software or hardware.

    The problem with software decoding is that it requires a lot of processing power – especially for high quality videos. That is why as a rule the ‘best’ format should always have hardware support.

  • Factor in the compression

    Part of the video format (i.e. the video codec) will dictate the type of compression that is used to encode and store the video. Newer formats normally have more efficient types of compression, meaning that they can compress the same quality of video to a smaller file size than older formats.

    As you can imagine this is an important factor because the ‘best’ format should compress the video to the smallest file size possible while maintaining its quality. However it is complicated by the fact that it takes time before devices have hardware support for newer formats built-in.

  • Try working backwards based on how the video will be used

    Instead of trying to identify the best format based on its hardware support and compression, you could work backwards based on how the video will be used. For distribution, formats such as MP4 with H.264 are the best option, and the same goes for online videos.

    In general MP4 with H.264 is a ‘safe’ format for most devices, but you could check if HEVC is supported seeing as it has better compression rates.

See how these tips can help you to convert your video to the best format? Once you’ve identified what it is, all you need to do is use a video converter to switch your video to that format. For example you could use Movavi Video Converter to convert QuickTime to MP4, AVI to FLV, MPG to MKV, and so on.
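That working-backwards logic can be sketched as a tiny format chooser. The codec preference list and the device capability sets below are hypothetical examples, not a real compatibility database:

```python
# Hypothetical sketch of the "work backwards" tip: pick the newest
# codec the playback device decodes in hardware, falling back to the
# safe H.264/MP4 default when nothing better is supported.
PREFERENCE = ["hevc", "h264"]  # newer (better compression) first

def best_format(hardware_decoders):
    """Return the preferred codec the device can decode in hardware."""
    for codec in PREFERENCE:
        if codec in hardware_decoders:
            return codec
    return "h264"  # the 'safe' default for most devices

print(best_format({"h264", "hevc"}))  # modern phone
print(best_format({"h264"}))          # older laptop
```

The design choice mirrors the tips: compression efficiency decides the preference order, but hardware support decides what actually gets used.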

Regardless of how you approach it, following these tips should help you to end up with a format that has the best possible compression while still supported by the hardware of the device it will be played on. That is as good as it gets, and should allow you to enjoy high quality videos without taxing your processor (or storage) too much.

Saturday 1 September 2018

Memory Processing Unit Could Bring Memristors to the Masses

Memristors

Memristors: Computers of the Future

Today’s world is all about doing things fast. We want our phones to work faster, our computers, and even our toasters. So scientists are continually on the lookout for the next big thing that will make computers and the like run faster. One such thing is the memristor. If you’ve not heard of it, it’s no wonder, as the term only entered wider use with the recent discovery itself.

Memristors not only make your computer or phone work faster but also cut energy consumption like you wouldn’t believe. A memristor-based design arranges advanced computer parts on a chip so that it performs faster with less energy consumption.

Where will Memristors be used? 


Memristors, according to scientists, will improve performance especially in those low power environments such as your smartphone. It can also be used to make an already efficient thing even more efficient like in the case of supercomputers.

How do Memristors work? 


Semiconductor processors make things fast by processing quickly, but receiving and sending data is the part that takes time, because the processor has to work with other parts, such as separate memory, to do it.

Memristors are a solution to this problem. Named as a combination of "memory" and "resistor," as you may have already figured, memristors can process data and save it in memory all in the same place, which significantly speeds up calculations.

How are Memristors different from Traditional means: 


Traditional means use bits of 1s and 0s, but memristors work on a continuum.
The team behind memristors breaks a large mathematical problem into smaller blocks, which improves the flexibility and efficiency of the system. These smaller blocks, called "memory processing units," can be useful in implementing machine learning and artificial intelligence algorithms.

They can also be used in areas of simulation such as in predicting the weather. Mathematical problems in the form of rows and columns are directly imposed on the grid of memristors. Operations that multiply and sum the rows and columns in the table are then done simultaneously.

A traditional processor, by contrast, would perform the sums and multiplications of rows and columns individually, taking a lot of time and energy, while with memristors it all happens in one step.
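The row-and-column multiply-and-sum can be sketched in a few lines; the conductance and voltage values below are illustrative only:

```python
# Sketch of the in-place multiply-and-sum a memristor crossbar
# performs. Each memristor at row i, column j stores a conductance
# G[i][j]; applying voltages V[i] to the rows yields column currents
# I[j] = sum_i V[i] * G[i][j] (Ohm's law plus Kirchhoff's current
# law), i.e. a whole matrix-vector product at once.
G = [
    [0.1, 0.4],
    [0.3, 0.2],
    [0.5, 0.1],
]
V = [1.0, 0.5, 2.0]

def crossbar_output(conductances, voltages):
    """Column currents produced by the crossbar for the row voltages."""
    cols = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j]
                for i in range(len(voltages)))
            for j in range(cols)]

print(crossbar_output(G, V))  # the matrix-vector product
```

In hardware, physics does this summation for free, which is why the whole matrix-vector product really does happen in one analog step rather than element by element.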

Using Memristors in Practical problems: 


Many science and engineering problems are very difficult to solve because of their complex forms and the numerous variables their models need. Memristors can be used to simplify such problems and model them correctly, getting the right answer in much less time and with less energy.

When it comes to partial differential equations, solving them exactly is near impossible; even getting an approximate value is a job for supercomputers. These problems involve loads of data, so putting memristors in a supercomputer to perform these calculations will save a lot of time and deliver results much faster.

Tuesday 17 April 2018

This Fire Detecting Wallpaper Can Turn an Entire Room into an Alarm

Fire Detecting Wallpaper

Fire Detecting Wallpaper: The wallpaper of the future!

Have you ever heard of wallpaper that can not only detect fire in the area it is in but also help prevent its spread? Well, neither had I, until now that is. Researchers have come up with wallpaper that can detect fire and is also fire resistant. Made of a material found in bones and teeth (yes, you read right), this fire detecting wallpaper may actually stop the spread of flames and also alert you to the fact that your house is on fire.

Those colorful, beautiful wallpapers you find in stores today are actually highly flammable. Made of materials such as plant cellulose fibers and synthetic polymers, they will help a fire spread in no time, making recovery of anything near impossible. The researchers behind the fire detecting wallpaper have swapped out those flammable materials for something strange yet environmentally friendly.


The strange material in fire detecting wallpaper:

Fire detecting wallpaper is made of a material commonly found in bones and teeth (God knows where they got the idea from). This material, known as hydroxyapatite, is fashioned into long, and by that I mean really long, nanowires to give it high flexibility.

This hydroxyapatite is what helps the fire detecting wallpaper prevent the spread of flames.

Making the fire detecting wallpaper “smart”:

Researchers didn’t just make the fire detecting wallpaper fire resistant; they also wanted to make it “smart”. To do this, they added sensors made from drops of graphene oxide in an inky mixture.

This graphene oxide acts in two ways. At room temperature it acts as an insulator, blocking the flow of electricity. Under high temperature, say when there is a fire, it makes the wallpaper conductive, and this completes a circuit that sounds an alarm.
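The insulator-to-conductor switch amounts to a simple temperature threshold closing a circuit. A toy sketch, where the switching temperature is an assumption for illustration, not a figure from the research:

```python
# Toy model of the wallpaper's sensor: graphene oxide insulates at
# room temperature and becomes conductive when heated, closing the
# alarm circuit. The 126 C threshold is an invented placeholder.
SWITCH_TEMP_C = 126.0

def sensor_conducts(temp_c):
    """Does the graphene oxide ink conduct at this temperature?"""
    return temp_c >= SWITCH_TEMP_C

def alarm(temp_c):
    """The circuit only completes (and sounds) when the ink conducts."""
    return "ALARM" if sensor_conducts(temp_c) else "quiet"

print(alarm(22.0))   # room temperature
print(alarm(300.0))  # fire
```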

Researchers also boast that the alarm on the fire detecting wallpaper can sound for a prolonged period of more than five minutes.

So, to fit it all in one nice package: the fire detecting wallpaper is not only non-flammable but also highly temperature resistant, and it has an automatic fire alarm.

The wallpaper you find in today’s stores is highly flammable and won’t do anything when there is a fire in the house. The fire detecting wallpaper, by contrast, is highly flexible: it can be processed into various shapes, made in different colors, and printed with a commercial printer.

But all this won’t come cheap. Because of its materials, the fire detecting wallpaper carries a very steep price tag. It may be environmentally friendly, but it is not really pocket friendly, making you think you’d rather take your chances with normal wallpaper if and when there is a fire.

The next thing on the agenda for scientists, therefore, is finding more cost-effective ways of making fire detecting wallpaper that will be easy on a person’s wallet.

Tuesday 20 February 2018

The Next Generation of Cameras Might see Behind Walls




Single Pixel Camera/Multi-Sensor Imaging/Quantum Technology

 

Users are very much taken with camera technology, which has given an enhanced look to the images they click. However, these technological achievements have more in store: single-pixel cameras, multi-sensor imaging, and quantum technologies will bring about great changes in the way we take images.

Camera research has been moving away from increasing the number of megapixels and towards merging camera data with computational processing. In this radically new approach the incoming data may not look like an image; it becomes an image only after a sequence of computational steps involving complex mathematics and modelling of how light travels through the scene and the camera.

The extra layer of computational processing eliminates the constraints of conventional imaging systems, and there may be instances where we no longer need a camera in the conventional sense. Instead we would use light detectors that a few years ago would never have been considered for imaging.

Yet they will be capable of incredible results, like viewing through fog, inside the human body, and behind walls.

Illuminations Spots/Patterns

 

The single-pixel camera is one example, and it relies on a simple principle. Usual cameras use plenty of pixels, tiny sensor elements, to capture a scene that is typically illuminated by a single source.

However, one can also manage things the other way around: capturing information from several light sources with a single pixel. To achieve this, one needs a controlled light source, such as a simple data projector, that illuminates the scene a single spot at a time or with a sequence of various patterns.

For every individual illumination spot or pattern, one measures the quantity of light reflected, then adds it all together to create the ultimate image. Evidently, the drawback of taking a photo this way is that one must send plenty of illumination spots or patterns to obtain an image that would take only one snapshot with a regular camera.
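The measure-and-add procedure can be sketched with the simplest possible patterns, one illuminated spot at a time (the 3x3 scene below is made up):

```python
# Minimal single-pixel camera sketch: illuminate the scene one spot
# at a time and record the total reflected light with one detector.
# With these raster (one-spot) patterns, the sequence of readings
# *is* the image, reconstructed one measurement per pixel.
SCENE = [  # made-up 3x3 reflectance map the camera will recover
    [0.0, 0.5, 0.0],
    [0.5, 1.0, 0.5],
    [0.0, 0.5, 0.0],
]
H, W = len(SCENE), len(SCENE[0])

def detector_reading(pattern):
    """Total light the single pixel sees for one illumination pattern."""
    return sum(pattern[y][x] * SCENE[y][x]
               for y in range(H) for x in range(W))

def capture():
    image = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            # Illuminate exactly one spot, measure, store.
            spot = [[1.0 if (r, c) == (y, x) else 0.0
                     for c in range(W)] for r in range(H)]
            image[y][x] = detector_reading(spot)
    return image

print(capture())
```

Real systems often use structured patterns (e.g. Hadamard masks) instead of single spots to need fewer measurements, but the principle of summing one detector's readings is the same.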

However, this type of imaging enables otherwise impossible cameras, for instance ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

Quantum Entanglement 

 

These types of cameras could be used to take images through fog or thick snowfall. They could also imitate the eyes of some animals and automatically increase the resolution of an image based on what is portrayed. There is even the possibility of capturing images from light particles that have never interacted with the object being photographed.

This would exploit the idea of `quantum entanglement’, in which two particles can be connected in such a way that whatever happens to one also happens to the other, even when they are far apart.

Single-pixel imaging is one of the simplest innovations in future camera technology and still depends on the traditional concept of what forms an image. We are now seeing a surge of interest in methods that use lots of information where outdated techniques gathered only a small portion of it.

It is here that multi-sensor approaches, involving a number of detectors pointed at the same scene, could be used. One ground-breaking example of this was the Hubble telescope, which produced images made from a mixture of several different images taken at various wavelengths.

Photon & Quantum Imaging


However, one can now purchase a commercial version of this type of technology, like the Lytro camera, which accumulates information about light intensity and direction on the same sensor, producing images that can be refocused after they have been taken. The next-generation camera may look like the Light L16 camera, featuring ground-breaking technology based on over 10 different sensors.

Their data are combined by a computer into a 50-megapixel, refocusable, re-zoomable, professional-quality image. The camera looks like a very thrilling Picasso interpretation of a crazy cellphone camera. Researchers have also been working hard on seeing through fog, beyond walls, and imaging deep within the human body and brain. All these techniques depend on linking images with models explaining how light travels through or around various substances.

Another remarkable method that has been gaining ground uses artificial intelligence to `learn’ to recognise objects from data. These methods, inspired by the learning processes of the human brain, are likely to play a major role in forthcoming imaging systems.

Single-photon and quantum imaging technologies have matured to the point where they can take images at extremely low light levels, as well as videos at exceptionally fast speeds, attaining a trillion frames per second. This is fast enough to capture images of light travelling across a scene.
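To get a feel for that trillion-frames-per-second figure, here is a quick back-of-the-envelope check (the frame rate is the one cited above; the speed of light is a physical constant) of how far light moves between consecutive frames:

```python
# How far does light travel between frames at 10^12 frames per second?
C = 299_792_458          # speed of light in m/s (physical constant)
FPS = 1e12               # frame rate cited in the article

frame_interval_s = 1 / FPS              # one picosecond per frame
distance_m = C * frame_interval_s       # metres light covers per frame

print(f"{frame_interval_s * 1e12:.0f} ps per frame")
print(f"{distance_m * 1000:.3f} mm of light travel per frame")
```

Light covers only about a third of a millimetre per frame at this rate, which is why such cameras can photograph a pulse of light part-way across a scene.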

Monday 12 February 2018

The Signs Your Child Might Have Screen Addiction, Revealed

Children Engaged in Devices – Screen Addiction

With the progress of technology it is not surprising to see children engaged with devices, a habit that has led them into screen addiction. Though paediatric experts and adolescent researchers have criticised the approach of starting them at such an early age, the number of apps available for download in the Apple app store for children below five years of age shows that many parents, as well as app developers, have been ignoring the warnings about these devices.

This exposure of children to screens, spanning video games, televisions, computers and tablets, could be behind the growing trend of screen addiction. While parents have considered this possibility in the past by asking "how much screen time is too much?", it seems they had phrased the question wrongly.

According to recent research published in the journal Psychology of Popular Media Culture, how children use their devices, not how much time they spend on them, is the strongest predictor of the emotional and social issues linked with screen addiction. In other words, whether a child spends one hour or five gazing at a screen matters less than how that use affects them (which is not to suggest that five hours is advisable).

An All-Consuming Activity

According to the new study, there is more to it than the number of hours spent with the screen. What really matters is whether screen use causes problems in other areas of life, or has become an all-consuming activity.

The question, then, is how precisely one can tell whether a child is addicted to screens. One needs to look for the warning signs: does screen time interfere with daily activities, does it cause conflict for the child or within the family, and does it seem to be the only activity that brings the child any happiness?

If these signs are displayed by the child, it could be essential to take action, since screen addiction is connected to problems with relationships, emotions and conduct. On the positive side, though, it is most likely fine to keep them entertained with games on an iPad for a little while.

Tablets and phones have replaced the television in soothing children and keeping them busy. It has been revealed, for instance, that one out of three kids uses these gadgets well before they can even speak.

Use of these devices at such a tender age could have a substantial effect on toddlers' mental health.

Technology Addiction – Influence Behaviour/Sleeping Pattern


Dr Richard Graham, a London-based consultant adolescent psychiatrist, and clinical psychologist Dr Jay Watts have stated that technology addiction can influence a child's behaviour and sleeping pattern. In an interview with MailOnline they highlighted five signs to watch for to tell whether a child is hooked.

They also emphasized the importance of taking a digital detox in order to resolve the obsession. Dr Graham, of the Capio Nightingale Hospital, a mental health hospital in central London, commented that when people feel an uncomfortable sense of withdrawal while offline, it is a sure sign that their relationship with technology is not being handled properly.

Dr Watts added that parents presently struggle to understand how crucial social media is to the current generation: the modern-day playground is virtual. When electronic devices begin to have more influence over behaviour than anyone or anything else, he said, and when children get upset whenever they are deprived of the technology, that is the point at which one needs to begin changing things.

In the case of children, the main issue is the way they become addicted to technology and the way they feel when using it.

Unhealthy Independence 

Kids who show any indication of severe distress and agitation when deprived of technology can be considered to have an unhealthy dependency. The condition is somewhat similar to that of a drug user: the unhealthy dependency means the child becomes agitated whenever the use of technology is taken away.

Dr Graham clarifies that the addiction can manifest itself in other behaviour patterns. Technology can affect the child's sleeping pattern, interfere with meal times and eating habits, and make the youngster act up during play time. Addicted children, he added, also tend to be secretive and defensive about their devices and their usage, and to argue with their parents on a regular basis.

Moreover, children addicted to technology may withdraw from or ignore real-life activities, refusing to go to places where they would not be able to use their devices, such as the cinema. Dr Watts mentioned that parents can almost be guaranteed to worry that their kid is spending too much time on a smartphone or online.

Restrict Time Spent on Usage of Technology 


A good first step is to talk to other parents at the school, or to observe whether the child is more preoccupied than others. If there is a real difference, one needs to speak to the child about cybersafety, but also to explore what might make the technology so addictive for the child, and what in the real world it might be helping them avoid.

According to Dr Graham, it is essential to restrict the time children spend using technology in order to prevent an unhealthy dependence from forming. Techniques include ensuring prolonged periods in which the youngsters are absorbed in the `real world' and in play time with other kids.

Establishing a firm routine of time allowances can be an excellent place to start. It can also be important for adults to switch off their phones, or keep them on silent, during meals and while spending quality time with family and friends, since the example set by parents can be fruitful and meaningful.

Monday 18 December 2017

Small Earthquakes at Fracking Sites May Be Early Indicators of Bigger Tremors

Fracking
7 fears about fracking: science or fiction?

The extraction of shale gas with fracking or hydraulic fracturing has revolutionized the production of energy in the United States, but this controversial technology, banned in France and New York State, continues to generate criticism and protests.

The detractors of the technique, which consists of injecting water and chemical additives at high pressure to fracture the rock containing the hydrocarbons, warn about the possible contamination of water, methane leaks and earthquakes, among other risks.

The Royal Society, the British academy of sciences, said in its 2012 report that the risks can be effectively managed in the UK "as long as the best operational practices are implemented," according to Richard Selley, professor emeritus at Imperial College London and one of the authors of the report.

But others with contrary opinions are equally firm. For example, regarding the possibility that fracking poses a risk of methane leakage, William Ellsworth, a professor of geophysics at Stanford's School of Earth, Energy & Environmental Sciences, argues that it is not a matter of determining whether the wells may leak; the question must be: what percentage of them leak?

In the middle of an intense and growing controversy about fracking, Stanford University researchers investigated what the science says so far.

Can it cause earthquakes?

Two earthquakes occurred in 2011 in England and led to the temporary suspension of exploration with fracking.

The first, which occurred in April of that year, near the city of Blackpool, reached 2.3 on the Richter scale and was registered shortly after the company Cuadrilla used hydraulic fracturing in a well.

On May 27, after fracturing resumed in the same well, a seismic event of magnitude 1.5 was recorded.

The monitoring network of the British Geological Survey (BGS) captured both events, neither of which was felt by local inhabitants.

The company Cuadrilla and the government commissioned separate studies.

"Both reports attribute the seismic events to the fracturing operations of Cuadrilla," said the Royal Society, the British Academy of Sciences, in its joint report with the Royal Academy of Engineers on hydraulic fracturing, published in 2012.

Earthquakes can be unleashed mainly by the high-pressure injection of wastewater, or when the fracturing process encounters a fault that was already under stress. However, the Royal Society noted that activities such as coal mining also produce micro-earthquakes. The suspension of fracking in the United Kingdom was lifted in December 2012, following the report of the Royal Society, which maintained that fracking can be safe "provided that the best operational practices are implemented."

In the United States, a study published in March 2013 in the journal Geology linked the injection of wastewater with the 5.7-magnitude earthquake of 2011 in Prague, Oklahoma. The wastewater injection operations referred to in the study came from conventional oil exploitation. However, seismologist Austin Holland of the Oklahoma Geological Survey said that while the study showed a potential link between earthquakes and wastewater injection, "it is still the opinion of the Oklahoma Geological Survey that those tremors could have occurred naturally."

Another study published in July 2013 in the journal Science and led by Nicholas van der Elst, a researcher at Columbia University, found that powerful earthquakes thousands of kilometers away can trigger minor seismic events near wastewater injection wells.

The study indicated that seismic waves unleashed by the 8.8 earthquake in Maule, Chile, in February 2010, moved across the planet causing tremors in Prague, Oklahoma, where the Wilzetta oilfield is located.

"The fluids injected with the wastewater into the wells are bringing existing faults to their tipping point," said Van der Elst.

Can fracking contaminate the water?

At the request of the US Congress, the Environmental Protection Agency (EPA) is conducting a study on the potential impacts of hydraulic fracturing on sources of drinking water.

A final draft of the report will be released at the end of 2014 to receive comments and peer review. The final report "will probably be finalized in 2016," the EPA confirmed.

In 2011, Stephen Osborn and colleagues at Duke University published a study in the journal of the US National Academy of Sciences reporting methane contamination of water sources near fracking exploration sites in the Marcellus formation in Pennsylvania and New York.

The study did not find, however, evidence of contamination by chemical additives or the presence of high salinity wastewater in the fluid that returns to the surface along with the gas.

For its part, the Royal Society said that the risk of fractures caused during fracking reaching the aquifers is low, as long as gas extraction takes place at depths of hundreds of meters or several kilometers, and the wells, together with their tubing and cementing, are built to certain standards.

A case cited by the Royal Society in its 2012 report is that of the town of Pavillion, Wyoming, where fracking caused the contamination of water sources for consumption, according to an EPA study. Methane pollution was attributed in this case to poor construction standards and shallow depth of the well, at 372 meters. The study was the first of the EPA to publicly link hydraulic fracturing with water pollution.

However, as in the Duke University study, there were no cases of contamination by the chemical additives used in hydraulic fracturing.

We must remember that when a well is drilled and the aquifer area is crossed, three steel rings are placed, surrounded by cement, below the aquifer.

How to control the use of chemical additives?

Trevor Penning, head of the toxicology center at the University of Pennsylvania, recently urged the creation of a working group on the impact of fracking with scientists from Columbia, Johns Hopkins and other universities.

Penning said that in the United States "it is decided at the level of each state whether companies have an obligation to publicize the list of additives they use."

The industry has established a voluntary database of the additives used, on the FracFocus site. Penning explained that the additives used in fracking fluid can be quite varied, including surfactants, corrosion inhibitors, biocides and others.

Toxicologists work on the principle that no chemical is safe; rather, it is the dose that makes the poison. Additives that could cause concern if they exceed safe levels include substitutes for benzene, ethylene glycol and formaldehyde.

"The potential toxicity of wastewater is difficult to assess because many chemical additives used in hydraulic fracturing fluid are undisclosed commercial secrets," Penning added.

The scientist also noted that "the potential toxicity of wastewater is difficult to evaluate because it is a complex mixture (the additives can be antagonistic, synergistic or additive in their effects)".

Anthony Ingraffea, professor of engineering at Cornell University, warned of the impact of the September 2013 floods in Colorado, where some 20,000 wells are concentrated in a single county. "A good part of the infrastructure was destroyed, which means that the ponds and tanks holding wastewater with chemical additives are now in the watercourses, and there are leaks from damaged gas pipelines." The clear lesson, he said, is that fracking infrastructure should never be built in floodplains.

What is done with wastewater?

These waters are known as flowback, or reflux water: the injected water, with its chemical additives and sand, that flows back when the gas starts to come out.

Approximately 25% to 75% of the injected fracturing fluid returns to the surface, according to the Royal Society. This wastewater is stored in covered open-pit tanks dug into the ground, treated and reused, or injected at high pressure into rock formations. The danger of wastewater leakage is not unique to shale gas extraction; it is common to many industrial processes, the Royal Society notes.

"The wastewater may contain naturally occurring radioactive materials (NORM), which are present in the shale rock in quantities significantly lower than the exposure limits," says the Royal Society report.

Can it exhaust water resources?

The use of large quantities of water in fracking operations is a cause of concern for some. "For natural gas, for example, fracking requires millions of gallons of water for fracturing (around 2 to 5 million, or even more than 10 million; that is, from 7 to 18 or up to 37 million liters), which is several times more than conventional extraction requires," said John Rogers, senior energy analyst and co-manager of the Energy and Water Initiative of the Union of Concerned Scientists.

"The extraction of shale gas by fracking consumes on average 16 gallons of water per megawatt-hour, while conventional gas extraction uses 4. That is, fracking requires four times what conventional extraction requires," said Rogers.

"That amount of water is less than what is involved in the extraction of coal, but the use of water is very localized and can be very important in the local scene, in terms of what would be available for other uses."

The Water-Smart Power study by the Union of Concerned Scientists points out that about half of the hydraulic fracturing operations in the United States occur in regions with high or extremely high water stress, including Texas and Colorado.

Melissa Stark, global director of new energies at the consultancy Accenture and author of the report "Shale gas water and exploitation", admits that shale gas extraction with hydraulic fracturing uses a lot of water (about 20 million liters per well), but notes that "it does not use more water than other industrial processes, such as irrigation for agriculture. The volumes required may seem large, but they are small compared with other water uses for agriculture, electric power generation and municipal use," she said.
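The figures quoted in this section can be cross-checked with a small calculation (all inputs are the article's own numbers; the gallon-to-liter factor is the standard US conversion):

```python
# Sanity-check the water figures quoted above.
L_PER_GAL = 3.785411784   # liters per US gallon (standard conversion)

# Per-well volumes quoted by Rogers: 2 to 5 million gallons
low_l  = 2e6 * L_PER_GAL   # lower bound in liters
high_l = 5e6 * L_PER_GAL   # upper bound in liters

# Water intensity quoted: 16 gal/MWh for fracked gas vs 4 for conventional
ratio = 16 / 4

print(f"2-5 million gallons ≈ {low_l/1e6:.1f}-{high_l/1e6:.1f} million liters")
print(f"fracking uses {ratio:.0f}x the water of conventional gas per MWh")
```

The conversion reproduces the "7 to 18 million liters" range given by Rogers, and the per-megawatt-hour figures give the fourfold difference he describes.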


Can there be methane leaks?
Anthony Ingraffea, professor of engineering at Cornell University in the United States, says that it is not a matter of determining whether wells can leak; the question must be: what percentage of them leak?

Ingraffea analyzed the situation of the new 2012 wells in the Marcellus formation in Pennsylvania, based on the comments of the inspectors, according to records of the Pennsylvania Department of Environmental Protection.

According to Ingraffea, the inspectors registered 120 leaking wells; that is, they detected faults and leaks in 8.9% of the gas and oil wells drilled in 2012.

A study published in September 2013 by the University of Texas, sponsored by nine oil companies among others, found that while methane leaks from shale gas extraction operations are substantial (more than one million tons per year), they were lower than the estimates of the US Environmental Protection Agency.

However, the association Physicians, Scientists and Engineers for Healthy Energy in the USA, of which Anthony Ingraffea is president, questioned the scientific rigor of that study, noting that the sample of 489 wells represents only 0.14% of the wells in the country, and that the wells analyzed were not selected at random "but at sites and times selected by the industry".

Some reported images of tap water that catches fire when a match is brought near could be explained by the pre-existing presence of methane.

"We must not forget that methane is a natural constituent of groundwater, and in some places like Balcombe, where there were protests, the oil flows naturally to the surface," said Richard Selley, professor emeritus of petroleum geology at Imperial College.

"We must remember that when a well is drilled and the aquifer area is crossed, three steel rings are placed, surrounded by cement, beneath the aquifer," added Selley.

How does it impact global warming?

Between 1981 and 2005, US carbon emissions increased by 33%. But since 2005 they have dropped by 9%. The reduction is due in part to the recession, but according to the US Energy Information Administration (EIA), about half of it is due to shale gas.

Globally, coal provides 40% of the world's electricity, according to the International Energy Agency (IEA). Advocates of shale gas extraction say it is cleaner than coal and can serve as a transition fuel while the use of renewable sources such as solar and wind energy expands.

In Spain, for example, renewable energies "are bordering on 12%, and there is a European Union objective that by 2020, 20% of European energy should be renewable," said Luis Suárez, president of the Official College of Geologists of Spain (ICOG).

But others point out that the gas extracted in the process of hydraulic fracturing is methane, a gas much more potent than carbon dioxide as a greenhouse gas.

According to the Intergovernmental Panel on Climate Change (IPCC), a molecule of methane is equivalent to 72 molecules of carbon dioxide 20 years after emission, and to 25 molecules at 100 years.

Robert Howarth and colleagues at Cornell University estimated that between 4% and 8% of a well's total methane production escapes into the atmosphere, and added that there are also emissions from the reflux waters that flow back to the surface along with the gas after fracturing.
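To see why that leakage rate matters so much, here is a rough sketch of the CO2-equivalent per kilogram of methane produced. The 4-8% leakage range and the GWP values of 72 (20 years) and 25 (100 years) are the figures quoted above; the 44/16 factor is the basic stoichiometry of burning methane (molar masses of CO2 and CH4). This is an illustrative calculation, not Howarth's actual accounting.

```python
# Rough CO2-equivalent per kg of methane produced: leaked methane counts
# at its global warming potential (GWP); the rest is burned to CO2.
# Burning 1 kg of CH4 yields 44/16 = 2.75 kg of CO2.
CO2_PER_KG_CH4_BURNED = 44 / 16

def co2_equivalent(leak_fraction, gwp):
    """kg of CO2-equivalent per kg of methane produced."""
    leaked = leak_fraction * gwp                          # unburned leaks
    burned = (1 - leak_fraction) * CO2_PER_KG_CH4_BURNED  # combustion CO2
    return leaked + burned

for leak in (0.04, 0.08):
    print(f"leak {leak:.0%}: "
          f"{co2_equivalent(leak, 72):.1f} kg CO2e (20 yr), "
          f"{co2_equivalent(leak, 25):.1f} kg CO2e (100 yr)")
```

On the 20-year horizon, leaks of 4-8% roughly double or triple the climate impact relative to combustion alone, which is why the leakage percentage is so fiercely contested.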

But this analysis is controversial. Lawrence Cathles, also of Cornell University, says the high potential for methane heating in 20 years must be counteracted by the fact that methane has a much shorter life in the atmosphere than CO2.

Robert Jackson of Duke University in North Carolina says that instead of worrying about fracking emissions themselves, we should concentrate on leaks in the distribution chain. "In the city of Boston alone we found 3,000 methane leaks in the pipes," Jackson told New Scientist magazine.

Wednesday 4 October 2017

Biological Clock Discoveries by 3 Americans Earn Nobel Prize

Nobel Prize
The discoverers of the 'internal clock' of the body, Nobel Medicine 2017

The winners are Jeffrey C. Hall, Michael Rosbash, and Michael W. Young

US scientists Jeffrey C. Hall, Michael Rosbash and Michael W. Young today won the 2017 Nobel Prize in Medicine, "for their discoveries of the molecular mechanisms that control the circadian rhythm," according to the jury of the Karolinska Institute in Stockholm, responsible for the award. The prize is endowed with nine million Swedish crowns, about 940,000 euros.

Thanks in part to their work, it is known today that living beings carry in their cells an internal clock, synchronized with the 24-hour rotation of planet Earth. Many biological phenomena, such as sleep, occur rhythmically around the same time of day thanks to this inner clock. Its existence was suggested centuries ago: in 1729, the French astronomer Jean-Jacques d'Ortous de Mairan observed the case of mimosas, plants whose leaves open in the sunlight during the day and close at dusk. He found that this cycle was repeated even in a dark room, suggesting the existence of an internal mechanism.

In 1971, Seymour Benzer and his student Ronald Konopka of the California Institute of Technology took a momentous leap in the research. They took vinegar flies and induced mutations in their offspring with chemicals. Some of these new flies showed alterations in their normal 24-hour cycle: in some it was shorter and in others longer, but in all of them the perturbations were associated with mutations in a single gene. The discovery could have earned a Nobel, but Benzer died in 2007, at age 86, of a stroke, and Konopka died in 2015, at age 68, of a heart attack.

The Nobel finally went to Hall (New York, 1945), Rosbash (Kansas City, 1944) and Young (Miami, 1949). In 1984 the three used more flies to isolate that gene, named "period" and associated with the control of the normal biological rhythm. Subsequently, they revealed that this gene and others regulate themselves through their own products (different proteins), generating oscillations of about 24 hours. It was "a paradigm shift," in the words of the Argentine neuroscientist Carlos Ibáñez of the Karolinska Institute: each cell carries a self-regulating internal clock.
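The principle the laureates uncovered, a gene whose protein product ends up repressing its own transcription, can be illustrated with a toy negative-feedback model. This is a textbook-style Goodwin sketch, not the laureates' actual model, and every parameter value here is illustrative:

```python
# Toy Goodwin-style negative-feedback loop: mRNA (m) is translated into
# protein (p), which matures into a repressor (r) that shuts down its own
# transcription. Delayed negative feedback of this kind can sustain
# oscillations, the core principle behind the ~24-hour cycle.
def simulate(hours=96, dt=0.01):
    m, p, r = 1.0, 0.0, 0.0   # mRNA, protein, nuclear repressor
    trace = []
    for _ in range(round(hours / dt)):
        dm = 1.0 / (1.0 + r**10) - 0.2 * m   # repressible transcription
        dp = 0.5 * m - 0.2 * p               # translation
        dr = 0.5 * p - 0.2 * r               # maturation into repressor
        m, p, r = m + dm * dt, p + dp * dt, r + dr * dt
        trace.append(m)
    return trace

trace = simulate()
print(f"mRNA level ranges from {min(trace):.2f} to {max(trace):.2f}")
```

Whether such a loop oscillates or settles to a steady level depends on the delay and the steepness of the repression; the real circadian network uses several interlocking genes (period among them) to hold the cycle near 24 hours.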

The scientific community has since established the importance of this mechanism in human health. The inner clock is involved in the regulation of sleep, hormone release, eating behavior, and even blood pressure and body temperature. If, as occurs in people working night shifts, the pace of life does not follow this internal script, the risk of various diseases, such as cancer and some neurodegenerative disorders, can increase, says Ibáñez. Rapid time-zone change syndrome, better known as jet lag, is a clear sign of the importance of this internal clock and its mismatches.

The Karolinska researcher gives the example of a 24-hour cycle in which the internal clock anticipates and adapts the body's physiology to the different phases of the day. The day begins with deep sleep and a low body temperature; the release of cortisol at dawn increases blood sugar, and the body readies its energies to face the day. When night falls, with blood pressure at its peak, melatonin, a hormone linked to sleep, is secreted.

These inner rhythms are known as circadian, from the Latin words circa ("around") and dies ("day"). The scientific community now knows that these molecular clocks emerged very early in living things and have been preserved throughout evolution. They exist both in single-celled life forms and in multicellular organisms such as fungi, plants, animals and humans.

At the time of the discovery, Hall and Rosbash were working at Brandeis University in Waltham, and Young was researching at Rockefeller University in New York. The recognition follows the usual pattern of the Swedish awards: men have won 97% of the Nobel prizes in science since 1901. In the Medicine category the statistics improve slightly: 12 of the 214 laureates, or 5.6%, are women.

Monday 18 September 2017

Engineers Developing Methods to Construct Blood Vessels Using 3D Printing Technology

3D Printing Technology
From time to time, new and interesting news about 3D printing technology in the field of health emerges. In the near future, this technology will allow tissues to be created on demand to repair any organ affected by illness. Many medical advances appear day by day, and 3D printing is among the most astonishing in medical science.

However, in spite of the promise of these and other advances, to date it has only been possible to create thin tissues of living cells in the laboratory using 3D printing. When researchers tried to create tissues thicker than a few cell layers, the cells in the intermediate layers died from lack of oxygen and the impossibility of eliminating their waste.

Those tissues had no network of blood vessels to carry oxygen and nutrients to each cell. The challenge was therefore clear: if a network of blood vessels could be created artificially using 3D printing, larger and more complex cell tissues could be developed.

To solve this problem, the team led by Professor Changxue Xu, of Industrial, Manufacturing and Systems Engineering at the Edward E. Whitacre Jr. College of Engineering, has used a 3D printing setup specially adapted for this purpose, with three different types of bio-inks. The first printhead extrudes a bio-ink of extracellular matrix, the biological material that binds the cells in the tissue; the second extrudes a bio-ink containing both extracellular matrix and living cells.

An alternative to more complex installations

Creating model blood vessels to aid the study of diseases such as strokes can be complicated and costly, as well as time-consuming, and the results are not always truly representative of a human vessel. Changxue Xu's research has produced a new method of creating models of veins and arteries with 3D printing that is more efficient, less expensive and more accurate. Xu and his team have created vascular channels using 3D printing technology.

An important advance is the ability to establish multiple layers of cells in the channels. Normally, when these microfluidic vascular chips are made, they have only one layer of cells. But the blood vessels in the body are composed of three to four different types of cells: the innermost cells, the endothelial cells, are the ones in contact with the blood, while the other cell layers support the inner ones. If there is an injury or a blood clot, an entire reaction takes place between these cells.

3D printing has now made a difference in manufacturing. "We can use 3D printing technology to create the mold and use that mold to inject any gel and cells in whatever shape we want," says Changxue Xu. The difficulty is that much of this work has usually had to be done in "clean rooms": environmentally controlled, ultra-disinfected rooms that prevent contamination. Xu's lab does not have such a room, so that part of the work has to be done at other universities.

Tuesday 5 September 2017

Supercapacitive Performance of Porous Carbon Materials Derived from Tree Leaves

carbon

Converting Fallen Leave – Porous Carbon Material

An innovative method of converting fallen, dried tree leaves into a porous carbon material that could be utilised in producing high-tech electronics has been found by researchers in China. In a study published in the Journal of Renewable and Sustainable Energy, the researchers describe the procedure for converting tree leaves into a form that can be integrated into electrodes as the active material. First, the dried leaves are ground into a powder and heated to 220 degrees Celsius for about 12 hours, which produces a powder composed of small carbon microspheres.

The carbon microspheres are then treated with a solution of potassium hydroxide and heated, with the temperature raised in steps from 450 to 800 degrees Celsius. The chemical treatment corrodes the surface of the carbon microspheres, making them tremendously porous.

The final product, a black carbon powder, has a very large surface area owing to the many small holes chemically etched into the surface of the microspheres. This large surface area gives the final product its unusual electrical properties.

Permeable Microspheres



Led by Hongfang Ma of Qilu University of Technology in Shandong, the investigators ran a series of standard electrochemical tests on the porous carbon microspheres to quantify their potential for use in electronic devices.

The current-voltage curves for these materials showed that the material makes an excellent capacitor. Further tests indicated that the materials were in fact supercapacitors, with specific capacitances of 367 farads per gram.

That is over three times the value seen in some graphene supercapacitors. A capacitor is a widely used component that stores energy by holding a charge on two conductors separated from each other by an insulator.

A supercapacitor can store 10 to 100 times as much energy as an ordinary capacitor, and can accept and deliver charge much faster than a typical rechargeable battery. Supercapacitive materials therefore hold potential for a wide range of energy storage needs, particularly in computer technology and in hybrid or electric vehicles.
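The 367 F/g figure can be turned into an energy density with the standard capacitor formula E = ½CV². The capacitance is the study's value; the 1.0 V cell voltage is an assumed typical figure for aqueous-electrolyte supercapacitors, not one reported in the article:

```python
# Energy stored per gram of the leaf-derived material, E = 1/2 * C * V^2.
SPECIFIC_CAPACITANCE = 367.0   # farads per gram (from the study)
VOLTAGE = 1.0                  # volts (assumed typical aqueous cell)

energy_j_per_g = 0.5 * SPECIFIC_CAPACITANCE * VOLTAGE**2
energy_wh_per_kg = energy_j_per_g * 1000 / 3600   # J/g -> Wh/kg

print(f"{energy_j_per_g:.0f} J/g ≈ {energy_wh_per_kg:.0f} Wh/kg of electrode")
```

That works out to tens of watt-hours per kilogram of electrode material: below lithium-ion batteries but far above ordinary capacitors, which matches the framing above.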

Enhance – Electrochemical Properties



The roadsides of northern China are scattered with deciduous phoenix trees, which produce abundant fallen leaves in autumn. These leaves are usually burned in the colder months, aggravating the country's air pollution problem.

The investigators in Shandong have found a new way of addressing this problem by converting the waste biomass into porous carbon materials that can be used in energy storage technology. Besides tree leaves, the team and others have also succeeded in converting potato waste, corn straw, pine wood, rice straw and other agricultural wastes into carbon electrode materials.

Professor Ma and her colleagues hope to further enhance the electrochemical properties of these porous carbon materials by refining the preparation procedure and allowing the raw materials to be adjusted or modified.

Wednesday 12 July 2017

iPhone 8 to ditch fingerprint sensor for face scanner, reports say

iPhone 8

iPhone 8 – Refurbished Security System

Apple's upcoming iPhone 8 is expected to feature a revamped security system in which users unlock the device with their face instead of their fingerprint. The 10th-anniversary iPhone is expected to have a radical redesign that includes a security system that scans users' faces to verify who is using the device.

According to Bloomberg, the 3D scanning system would replace Touch ID as a means of verifying payments, logging in to apps and unlocking the phone. It could work at various angles, so the iPhone could be unlocked by merely looking at it, whether it is lying flat on a table or held upright. The scanning system is reportedly designed for speed and accuracy, able to scan the user's face and unlock the device within a few hundred milliseconds.

Since it analyses 3D rather than 2D images, it should be able to distinguish a person's face from a photograph of that person. Apple could also add eye-scanning technology, currently available in Samsung's Galaxy S8, to further strengthen the device's security.

Face Scanning Technology

Bloomberg reported that the face-scanning technology could be more secure than Touch ID, first released in 2013 on the iPhone 5S, since it draws on more identifiers. Apple has claimed its fingerprint scanner has only a 1 in 50,000 chance of being unlocked by a stranger's fingerprint. According to Ming-Chi Kuo, an analyst with a reliable track record, the iPhone 8 will feature an edge-to-edge OLED screen with a higher screen-to-body ratio than any smartphone currently on the market.

Apple will probably remove the Home button and the Touch ID scanner to make room for the display. Kuo also predicts that Apple will release three new phones in September: the iPhone 8, iPhone 7S and iPhone 7S Plus. The iPhone 8 would feature the most dramatic redesign of the three, with a 5.2-inch screen fitted into a device the same size as the iPhone 7. It would also come in fewer colour options, with a glass front and back and steel edges.

New Chip Dedicated to Processing Artificial Intelligence

Well-connected Apple blogger John Gruber has suggested that the top iPhone could be named 'iPhone Pro' and cost $1,500 or more. The other two devices would feature LCD screens in 4.7-inch and 5.5-inch sizes. Like the current iPhone 7, they would probably retain a Home button with Touch ID.

If Kuo's predictions are accurate, all three phones would have a Lightning port with embedded USB-C and come with 64 GB or 256 GB of storage. They would also carry a new chip dedicated to processing artificial intelligence, which is currently being tested.

Monday 10 July 2017

Watching Cities Grow



High-Resolution Civilian Radar Satellite

The world's major cities keep growing: according to United Nations estimates, half of the world's population now lives in cities, and by 2050 the figure is expected to rise to two thirds.

Xiaoxiang Zhu, Professor for Signal Processing in Earth Observation at TUM, notes that this growth places high demands on building and infrastructure safety, since a single destructive event can threaten thousands of lives at once. Zhu and her team have developed a method for early detection of potential dangers: subterranean subsidence, for instance, can cause the collapse of buildings, bridges, tunnels or even dams.

The new system makes it possible to detect and visualise changes as small as one millimetre per year. Data for the latest urban images comes from the German TerraSAR-X satellite, one of the highest-resolution civilian radar satellites in the world. Since 2007 the satellite, orbiting the Earth at an altitude of approximately 500 kilometres, has been sending microwave pulses to the ground and collecting their echoes. Zhu explains that these measurements initially yield only a two-dimensional image with a resolution of one metre.

Generate Highly Accurate Four-Dimensional City Model

The TUM professor worked in partnership with the German Aerospace Centre (DLR), where she also leads her own working group. DLR is responsible for operating the satellite and for its scientific use.

The quality of the images is limited by the fact that reflections from different objects at the same distance from the satellite lay over each other, an effect that reduces the three-dimensional world to a two-dimensional image. Zhu not only developed her own algorithm that makes it possible to reconstruct the third and even a fourth dimension, but also set a world record in the process.

Four-dimensional point clouds with a density of three million points per square kilometre were reconstructed. This rich recovered information made it possible to generate highly accurate four-dimensional city models.

Radar Measurements to Reconstruct Urban Infrastructure

The trick is that the scientists use images taken from slightly different viewpoints. Every eleven days the satellite flies over the region of interest, but its orbital position is never exactly the same. The researchers exploit these orbital variations of up to 250 metres in radar tomography to localise each point in three-dimensional space.

The method uses the same principle as computed tomography, which builds a three-dimensional view of the interior of the human body. Multiple radar images taken from different viewpoints are combined to create a three-dimensional image. Since this alone delivers only poor resolution in the third dimension, Zhu explains, an additional compressive-sensing step is applied that improves the resolution by a factor of 15.
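The compressive-sensing idea can be illustrated with a toy sparse-recovery sketch: a handful of scatterers stacked along the elevation direction are recovered from far fewer measurements than elevation bins. The random measurement matrix, the problem sizes and the simple orthogonal-matching-pursuit solver below are illustrative stand-ins, not the team's actual tomographic algorithm:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n = 200          # elevation bins along the third dimension
m = 60           # number of satellite passes (far fewer than n)
x_true = np.zeros(n)
x_true[[40, 120]] = [1.0, 0.8]   # two scatterers that lay over in 2-D

A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in for the steering matrix
y = A @ x_true                                # the m tomographic measurements

x_rec = omp(A, y, k=2)
print(np.nonzero(x_rec)[0])      # recovered scatterer positions
```

Because the reflectivity profile along elevation is sparse (only a few buildings or surfaces per resolution cell), far fewer passes than elevation bins suffice to separate the overlaid reflections.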

Using the radar measurements from TerraSAR-X, the scientists can reconstruct urban infrastructure on the Earth's surface with great accuracy, for instance the 3D shape of individual buildings. The method has already been used to generate highly precise 3D models of Berlin, Paris, Las Vegas and Washington DC.

Friday 7 July 2017

Hot Electrons Move Faster Than Expected

 Hot Electrons

Ultrafast Motion of Electrons


New research could lead to solid-state devices that make use of excited electrons. Engineers and scientists at Caltech have, for the first time, directly observed the ultrafast motion of electrons immediately after they are excited by a laser, and found that these electrons diffuse through their surroundings faster and farther than previously anticipated.

This behaviour, called 'super-diffusion', had been hypothesised but never observed before. A team led by Marco Bernardi of Caltech and the late Ahmed Zewail documented the electrons' motion using a microscope that captured images with a shutter speed of a trillionth of a second at nanometre-scale spatial resolution. Their findings appeared in a study published on May 11 in Nature Communications.

The excited electrons displayed a diffusion rate 1,000 times higher than before excitation. Although the phenomenon lasts only a few hundred trillionths of a second, it raises the possibility of harnessing hot electrons in this fast regime to transport energy and charge in novel devices.

Bernardi, assistant professor of applied physics and materials science in Caltech's Division of Engineering and Applied Science, said their work shows the existence of a fast transient, lasting a few hundred picoseconds, during which electrons move much faster than at room temperature, meaning they can cover longer distances in a given period of time when driven by lasers.
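The distinction between ordinary and super-diffusion is usually expressed through the scaling of the mean squared displacement, MSD ∝ t^α: α = 1 for normal diffusion, α > 1 for super-diffusion. The data below are synthetic, purely to illustrate how the exponent is read off a log-log fit, and are not the Caltech measurements:

```python
import numpy as np

t = np.linspace(1.0, 100.0, 50)    # time in picoseconds (synthetic)
msd_normal = 2.0 * t               # MSD = 2*D*t: normal diffusion, alpha = 1
msd_super = 0.5 * t ** 1.8         # super-diffusive transient, alpha = 1.8

def diffusion_exponent(t, msd):
    """Estimate alpha from the slope of log(MSD) versus log(t)."""
    slope, _ = np.polyfit(np.log(t), np.log(msd), 1)
    return slope

print(round(diffusion_exponent(t, msd_normal), 2))  # 1.0
print(round(diffusion_exponent(t, msd_super), 2))   # 1.8
```

An exponent fitted well above 1 on real MSD data is the kind of signature that distinguishes the transient hot-electron regime from ordinary room-temperature diffusion.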

Ultrafast Imaging Technology


He added that this non-equilibrium behaviour could be employed in novel electronic, optoelectronic and renewable-energy devices, as well as to uncover new fundamental physics. Bernardi's colleague, Nobel Laureate Ahmed Zewail, the Linus Pauling Professor of Chemistry, professor of physics and director of the Physical Biology Centre for Ultrafast Science and Technology at Caltech, passed away on 2 August 2016.

The research was made possible by scanning ultrafast electron microscopy, an ultrafast imaging technology pioneered by Zewail that can create images with picosecond temporal and nanometre spatial resolution. Bernardi developed the theory and computer models that interpreted the experimental results as a signature of super-diffusion.

Bernardi plans to continue the research by tackling fundamental questions about excited electrons, such as how they equilibrate among themselves and with atomic vibrations in materials, as well as applied ones, such as how hot electrons could increase the efficiency of energy-conversion devices such as solar cells and LEDs.

Super Diffusion of Excited Carriers in Semiconductors


The paper is entitled 'Super Diffusion of Excited Carriers in Semiconductors'. Co-authors include former Caltech postdoc Ebrahim Najafi, the lead author of the paper, and former graduate student Vsevolod Ivanov. The research was supported by the National Science Foundation, the Air Force Office of Scientific Research, the Gordon and Betty Moore Foundation, and the Caltech-Gwangju Institute of Science and Technology (GIST) program.

Saturday 1 July 2017

Sensor Solution: Sensor Boutique for Early Adopters

Sensor Boutique
Every chemical substance absorbs a highly individual fraction of infrared light. Much like a human fingerprint, this absorption can be used to identify substances by optical methods.

To elaborate: when molecules absorb infrared radiation within a certain wavelength range, they are excited to higher vibrational levels, rotating and vibrating in a distinctive "fingerprint" pattern. These patterns can be used to identify specific chemical species. The method is used in the chemical industry, for example, but also in the health sector and in criminal investigation. A company planning a new project often needs an individually tailored sensor solution.
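As a rough sketch of the fingerprint idea, a measured absorption spectrum can be matched against a library of reference spectra by normalised correlation. The substances, band positions and noise level below are made up for illustration and are not real spectral data:

```python
import numpy as np

wavenumbers = np.linspace(1000, 3000, 200)   # mid-IR range, in cm^-1

def gaussian_band(center, width=40.0, height=1.0):
    """A single absorption band modelled as a Gaussian (illustrative)."""
    return height * np.exp(-((wavenumbers - center) / width) ** 2)

# Hypothetical reference "fingerprints" for two made-up substances
references = {
    "substance_A": gaussian_band(1700) + gaussian_band(2900, height=0.5),
    "substance_B": gaussian_band(1250) + gaussian_band(2100, height=0.8),
}

def identify(spectrum, references):
    """Return the reference with the highest normalised cross-correlation."""
    def norm(v):
        v = v - v.mean()
        return v / np.linalg.norm(v)
    scores = {name: float(norm(spectrum) @ norm(ref))
              for name, ref in references.items()}
    return max(scores, key=scores.get)

# A noisy measurement of substance A
rng = np.random.default_rng(1)
measured = references["substance_A"] + 0.05 * rng.standard_normal(wavenumbers.size)
print(identify(measured, references))   # → substance_A
```

Real MIR sensing works on the same principle, just with measured absorption lines instead of toy Gaussian bands.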

The EU-funded pilot line MIRPHAB (Mid-InfraRed Photonics devices fABrication for chemical sensing and spectroscopic applications) supports companies searching for a suitable system and helps them develop sensor and measurement technology in the mid-infrared (MIR). The Fraunhofer Institute for Applied Solid State Physics IAF is a participant in this project.

Pilot line for ideal spectroscopy solutions


A company looking for a sensor solution has very individual needs, for example when it has to identify a particular substance in a production process. These range from the substances to be detected, to the number of sensors required, to the speed of the production process. In most cases an off-the-shelf solution does not suffice, and several suppliers are needed to develop the optimal individual solution. This is where MIRPHAB comes in.

Leading European research institutes and companies from the MIR field have joined forces to provide customers with custom-made, best-suited offers from a single source. Interested parties can contact a central contact person, who then compiles the best possible solution from the MIRPHAB members' component portfolios according to a modular principle.

The development of an individual MIR sensor solution within the MIRPHAB framework is subsidised by EU funding, intended to strengthen European industry in the long run and expand its leading position in chemical analysis and sensor technology. This considerably lowers investment costs and thus reduces the barrier to entry for companies in the MIR field.

Combined with the virtual infrastructure created by MIRPHAB, this makes a high-quality MIR sensor solution attractive to companies that were previously deterred by high costs and development effort. MIRPHAB also gives companies access to the latest technologies, providing them with an early-adopter advantage over the competition.

Custom-made source for MIR lasers


The Freiburg-based Fraunhofer Institute for Applied Solid State Physics IAF, together with the Fraunhofer Institute for Photonic Microsystems IPMS in Dresden, is providing a central component of the MIRPHAB sensor solution. The Fraunhofer IAF contributes the new technology of quantum cascade lasers, which emit laser light in the MIR range. In this type of laser, the wavelength range of the emitted light is spectrally broad and can be adapted as required during manufacturing. To select a particular wavelength within this broad spectral range, an optical diffraction grating is used to pick out the wavelength and couple it back into the laser chip. The wavelength can be tuned continuously by turning the grating. This grating is created at the Fraunhofer IPMS in miniaturised form using so-called Micro-Electro-Mechanical-System (MEMS) technology, which allows the grating to oscillate at frequencies of up to one kilohertz. This in turn enables the wavelength of the laser source to be tuned up to a thousand times per second over a large spectral range.
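As a rough illustration of how turning the grating tunes the wavelength, the first-order Littrow condition λ = 2d·sin θ relates the wavelength fed back into the laser chip to the grating angle. The groove spacing and angles below are assumed values for illustration, not MIRPHAB specifications:

```python
import math

def littrow_wavelength_um(groove_spacing_um: float, angle_deg: float) -> float:
    """First-order Littrow condition: lambda = 2 * d * sin(theta)."""
    return 2.0 * groove_spacing_um * math.sin(math.radians(angle_deg))

d = 3.3  # assumed groove spacing in micrometres (~300 lines/mm)
for angle in (40.0, 45.0, 50.0):
    wl = littrow_wavelength_um(d, angle)
    print(f"{angle:.0f} deg -> {wl:.2f} um")
```

Sweeping the angle by a few degrees tunes the feedback wavelength across a sizeable slice of the mid-infrared, which is why oscillating the MEMS grating yields such a rapidly tunable source.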
The Fraunhofer Institute for Production Technology IPT in Aachen is also involved in MIRPHAB, making the manufacturing of lasers and gratings more efficient and optimising them for pilot-series fabrication. With its expertise, it turns the production of the rapidly tunable MIR laser into industrially viable manufacturing processes.

Real-time process monitoring

Many current applications in spectroscopy still operate in the visible or near-infrared range and use comparatively weak light sources. The solutions provided by MIRPHAB are based on mid-infrared semiconductor lasers, whose much higher light intensity opens the way to completely new applications. The MIR laser source can record up to 1,000 spectra per second, which, for example, allows automated real-time monitoring and control of biotechnological processes and chemical reactions. MIRPHAB's contribution is thus considered vital to the factory of the future.

Tuesday 27 June 2017

Space Robot Technology Helps Self-Driving Cars and Drones on Earth

Support Robots to Navigate Independently
 
Advances in self-driving cars and grocery delivery by drone could come from an unlikely source: autonomous space robots.

Marco Pavone, an assistant professor of aeronautics and astronautics, has been creating technologies to help robots adapt to unknown and changing environments. Before coming to Stanford, Pavone worked in robotics at NASA's Jet Propulsion Laboratory, and he maintains relationships with NASA centres and collaborations with other Stanford departments. He views his work on space and Earth technologies as complementary.

In a sense, he notes, some robotics techniques designed for autonomous cars can be very useful for spacecraft control. Likewise, the algorithms he and his students devise to help robots make split-second decisions and assessments on their own could aid space exploration as well as improve cars and drones on Earth.

One of Pavone's projects centres on helping robots navigate independently to bring space debris out of orbit, deliver tools to astronauts, and grab spinning, speeding objects out of the vacuum of space.
 
Gecko-Inspired Adhesives
 
There is no margin for error when grabbing objects in space. Pavone explains that when you approach an object in space, if you are not very careful to grasp it the moment you make contact, it will float away from you. Bumping an object in space makes recovering it very difficult.

To solve the grasping problem, Pavone teamed up with Mark Cutkosky, a professor of mechanical engineering who has spent the last decade perfecting gecko-inspired adhesives.

The gecko grippers need only a gentle approach and a simple touch to 'grasp' an object, enabling easy capture and release of spinning, unwieldy space debris. However, the delicate navigation needed for grasping in space is no easy job. Operating in close proximity to other objects, Pavone notes, whether spacecraft, debris or anything else in space, requires advanced decision-making capabilities.

Pavone and his co-workers developed systems that enable a space robot to respond independently to such fluid situations and reliably grab space objects with its gecko grippers.
 
Perception-Aware Planning
 
The resulting robot can move and grab in real time, updating its decisions at a rate of several thousand times a second. This kind of decision-making technology could also help solve navigation problems for Earth-bound drones.

Graduate student Benoit Landry notes that for these types of vehicles, navigating at high speed in proximity to buildings, people and other flying objects is difficult. He stresses that there is a delicate interplay between making decisions and perceiving the environment, adding that in this respect several aspects of decision making for autonomous spacecraft are directly relevant to drone control.

Landry and Pavone have been working on 'perception-aware planning', which lets drones weigh fast routes against their ability to 'see' their surroundings and better estimate where they are. The work is now being extended to handle interactions with humans, a key step in deploying autonomous systems such as drones and self-driving cars.

 



Reduced Gravity Atmospheres
 
Landry also mentions that Pavone's NASA background is a good complement to the academic work. When a robot lands on a small solar-system body such as an asteroid, additional challenges arise.

These environments have completely different gravity from the Earth's. Pavone notes that if you dropped an object from waist height on an asteroid, it would take a couple of minutes to settle to the ground. To deal with such low-gravity environments, Ben Hockman, a graduate student in Pavone's lab, worked on a cubic robot called Hedgehog.
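Pavone's waist-height example can be sanity-checked with the constant-acceleration fall time t = √(2h/g). The asteroid surface gravity below is an assumed order-of-magnitude value, not a figure from the article:

```python
import math

def fall_time(height_m: float, g: float) -> float:
    """Time to fall a given height under constant gravity, ignoring drag:
    t = sqrt(2h/g)."""
    return math.sqrt(2.0 * height_m / g)

G_EARTH = 9.81      # m/s^2
G_ASTEROID = 1e-4   # m/s^2 -- assumed, typical order for a small asteroid

h = 1.0  # roughly waist height, in metres
print(f"Earth:    {fall_time(h, G_EARTH):.2f} s")            # ~0.45 s
print(f"Asteroid: {fall_time(h, G_ASTEROID) / 60:.1f} min")  # ~2.4 min
```

With surface gravity some hundred-thousandth of Earth's, a dropped object really does take minutes to settle, which is why hopping beats wheels on such bodies.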

The robot traverses uneven, rugged, low-gravity terrain by hopping rather than driving like traditional rovers. Ultimately, Pavone and Hockman want Hedgehog to be able to navigate and carry out tasks without being explicitly told how by a human millions of miles away. Hockman notes that the current Hedgehog robot is designed for reduced-gravity environments, though it could be adapted for Earth.

It would not hop quite as far, since Earth has more gravity, but it could be used to cross rugged terrain where wheeled robots cannot go. Hockman views the research he has been doing with Pavone as core scientific exploration, adding that science attempts to answer the difficult questions we don't know the answers to, while exploration seeks whole new questions we don't yet know how to ask.