Saturday 13 August 2022

Google Fiber News: What is next?

Have you been following Google Fiber news? If so, you know how busy the company has been. It has been building out its network in towns and surrounding regions from North Carolina to Utah. It aims to connect customers in West Des Moines, making Iowa its first new state in five years, and will soon begin construction in neighboring Des Moines. It also recently announced a new network in Mesa, Arizona. For the last several years, the team has focused on delivering the best possible gigabit internet service to its customers through relentless refinement.

What is Google Fiber?

Google Fiber is a high-speed broadband internet service that uses fiber-optic cables to deliver gigabit speeds to homes and businesses. Because these cables carry information as light, the service can offer much faster speeds than traditional cable, DSL, or dial-up connections.

Gigabit-speed networks are no longer a bold idea. Communities across the country are eager to expand access to gigabit internet. The Google Fiber team has spent months traveling the country and has held many conversations with cities that want top network speeds for their residents and business owners.

The team will be even busier than before, because it is now talking to city leaders in the following states about bringing fiber-to-the-home service to their communities:

  • Arizona (beginning in Mesa, as announced in July) 
  • Colorado 
  • Nebraska 
  • Nevada 
  • Idaho

These states will anchor the company's growth for the next few years, alongside continued expansion in its current metro areas. The team also likes talking with communities that want to build their own fiber networks, a model already working well in Huntsville and West Des Moines.

The team is excited to expand its geographic reach once again, bringing a better network to more people in more places. More information is coming about new cities, faster speeds, and redefined customer service.

This will be the company's first major expansion since it spun out as an independent Alphabet Inc (GOOGL.O) unit. Dinni Jain, its chief executive since February 2018, said in his first interview that after more than four years of sharpening operations, the company was ready to add a bit more speed.

Expanding from seventeen to twenty-two metro areas across the US, the company plans project launches in Mesa, Arizona, and Colorado Springs, Colorado, based on its findings about where internet speeds lag.

What Did Jain Say About Google Fiber News?

Jain pushed back on the old idea that Google Fiber would wire the entire nation, saying the company has no plans to build everywhere. He declined to comment on Fiber's financial results or fundraising plans.

Some Alphabet subsidiaries have raised outside funds to prove their value independently, while others have been shut down or subsumed by other entities. Fiber could face the same choices, since the expansion will materialize over the next three to five years.

The plan comes as Alphabet, like several other companies, slows hiring and shutters some fringe projects amid fears of a global recession.

Jain said the objective is to create businesses that succeed in their own right rather than depend on dipping into "a rich parent's wallet." Google began taking on network service stalwarts, including Comcast Corp (CMCSA.O) and AT&T Inc (T.N), in 2010, when co-founders Larry Page and Sergey Brin said they were tired of waiting on Congress.

Competitors have since rushed to match its gigabit-per-second offerings. Launch sites included Austin, Texas, and Los Angeles, along with other areas under consideration.

Jain, speaking of his prior role as Time Warner Cable's chief operating officer, said the incumbents were paranoid about Google's entry. Alphabet later separated Fiber from the core business, alongside ventures such as delivery drones and anti-aging research. The unit absorbed heavy annual losses on construction, spending to experiment with new ways of laying optical cables and to subsidize some services.

The Bottom Line:

For the last few years, the company limited its expansion to West Des Moines, Iowa, and to areas within its existing metros, while Wall Street cheered Alphabet's transparency and cost control. In that time, Jain honed construction methods and dumped failed techniques, such as taping cables to sidewalks, to save time. The company also built more last year than in the prior few years.

Jain also said the company must move from a spirit and culture of invention to one of operational excellence. Digging shallower trenches than rivals saves time, while streamlined pricing and setup limit customer-help calls and hold down costs. He said customers contact Google Fiber about a third less often than he saw at comparable companies, and described sign-ups as exceeding his expectations from before he joined. That sums up the Google Fiber news our article describes.

Frequently Asked Questions:

  • Q. Is Google Fiber coming back?

According to an announcement from Google, the company is expanding into South Salt Lake, Utah. On July 26, 2021, it announced that construction was underway, with completion expected in the first half of 2022. The company also published a blog post on December 28, 2021, looking back on the year.

  • Q. Why did Google Fiber fail?

In Louisville, installers laid fiber using an experimental shallow-trenching technique, known as "nano trenching," which let the company deploy fiber faster and at lower cost. The team later cited these experimental construction processes when it left the city, and many believe they were the reason for the failure.

  • Q. Is it shut down?

Customers who chose not to upgrade lost their Fiber TV service in April 2022, but internet service continues. Chromecast devices let customers watch on a PC and connect to television sets through an HDMI port.

Monday 8 August 2022

Sony WI-C100 Review

The Sony WI-C100 is a wireless Bluetooth earphone that takes your music wherever life brings you. The model provides up to 25 hours of battery life and high-quality listening. It is also comfortable and simple to use, and its IPX4 splash-resistant design offers simple peace of mind.

Features of Sony WI-C100:

Exceed your expectations:

The model is an excellent wireless in-ear headphone, delivering exceptional sound quality and a rich listening experience. Its splash-proof design offers peace of mind while you're out and about, and it supports the Sony | Headphones Connect app, which you can use to tailor the sound to your taste.

Battery life:

You can enjoy up to 25 hours of non-stop music, and if your earphones run low on power, a 10-minute quick charge provides up to one hour of playback.

Make your music sound more natural:

When a music source is compressed, it loses the high-frequency elements that add detail and richness to a track. DSEE (Digital Sound Enhancement Engine) restores these, producing high-quality sound closer to the original recording.

Sound quality:

You can adjust the sound to your personal preference, choosing from presets that match the genre you're listening to. You can also save custom presets using the equalizer feature in the Sony | Headphones Connect app.

Well-balanced tuning:

The sound tuning is well balanced from low to high frequencies, and the vocals are clear and natural, so the headphones suit any music genre.

No problem with Splashes and sweat:

The earphones carry an IPX4 water-resistance rating, so they easily handle splashes and sweat and let you keep moving to the music.

Suitable for use:

The product is lightweight and easy to carry, and its flexible neckband makes it simple to slip into your bag while you're on the move.

Voice Assistant compatible:

Press the button twice to connect the headphones to your smartphone's voice assistant. You can then ask for directions, play music, call contacts, and more.

Quick Pairing:

The Fast Pair feature lets you pair the earphones easily with your Android™ devices: a single tap on the pop-up guidance triggers fast, effortless Bluetooth® pairing. It also helps you find misplaced headphones, either by ringing them or by checking their last known location on your phone.

Swift Pair:

Swift Pair makes it just as easy to connect the headphones to a computer. You can use it to pair the earphones with Windows 11 and Windows 10 computers via Bluetooth®; pop-up pairing guidance appears on these devices when you select pairing mode.

Immersive:

Immerse yourself in sound as if you were at a live concert or in the studio with the artist. The earphones support 360 Reality Audio, which makes your music experience deeply immersive.

BRAVIA XR and Wireless Transmitter WLA-NS7:

Users can enjoy a thrilling Dolby Atmos® experience with this model: paired with a BRAVIA XR TV and the wireless transmitter WLA-NS7, the WI-C100 headphones deliver 360 Spatial Sound that adapts to your ears.

Sustainability:

The earphones have a stylish design created with the environment in mind: plastic makes up less than five percent of the individual packaging material, reflecting Sony's commitment to reducing its environmental impact.

Easy to use buttons:

The Sony WI-C100 earphones are simple to operate. The buttons let you play and pause, skip tracks, adjust the volume, and take calls, and they have been redesigned with a more pronounced shape that makes them easier to press.

Clear hands-free calling:

You can hold conversations freely with clear hands-free calling, thanks to the high-quality built-in microphone.

Pros:

  • Hands-free calling 
  • Simple to operate buttons 
  • A wireless transmitter is available 
  • Immersive experience 
  • Well-balanced tuning 
  • Great sound quality

Cons:

  • Thin wires

Sunday 10 July 2022

MLGO: A Machine Learning Framework

The question of how to compile smaller and faster code arose with the birth of modern computers. Better code optimization can cut the operational cost of large data-center applications, while code size matters most for mobile and embedded systems, where the compiled binary must fit in tight code-size budgets. As the field has matured, the remaining headroom has been squeezed hard by increasingly complicated heuristics, which impede maintenance and further improvement.

What is MLGO?

MLGO is a machine learning framework for compiler optimization.

Know More About MLGO:

Recent research shows that ML offers more opportunities for compiler optimization by replacing complicated heuristics with ML policies. However, adopting machine learning inside a compiler is a challenge.

That is why MLGO, the Machine Learning Guided Compiler Optimizations Framework, exists. It is the first industrial-grade general framework for integrating ML techniques systematically into LLVM.

LLVM is an open-source industrial compiler infrastructure used to build mission-critical, high-performance software. MLGO uses reinforcement learning (RL) to train neural networks to make decisions that can replace heuristics in LLVM. There are two MLGO optimizations for LLVM: the first reduces code size with inlining, and the second improves code performance with register allocation (regalloc). Both are available in the LLVM repository.

How Does MLGO Work?

Inlining helps reduce code size by making decisions that remove redundant code. As an example, suppose the caller function foo() calls the callee function bar(), which in turn calls baz().

Inlining both callsites returns a simple foo() function with a smaller code size. In real programs, many functions call each other, and together they form a call graph.

During the inlining phase, the compiler traverses the call graph and decides whether or not to inline each caller-callee pair. This is a sequential decision process, because earlier inlining decisions change the call graph, affecting later decisions and the final result. In the example, the call graph foo() → bar() → baz() needs a "yes" decision on both edges to reduce the code size.
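The effect of those per-callsite decisions on total size can be sketched in Python. This is an illustrative toy model, not LLVM's actual inliner; the body sizes and the CALL_OVERHEAD constant are invented for the example:

```python
# Toy model of the inlining pass: total size is all function bodies plus
# one call instruction per call site; each "yes" removes a call instruction.
CALL_OVERHEAD = 2  # size of a call instruction, in arbitrary units

def inline_pass(body_sizes, call_sites, should_inline):
    """Walk the call sites in order and apply the inline/no-inline policy.

    Returns the total code size: all function bodies plus the call
    instructions that were not inlined away.
    """
    total = sum(body_sizes.values()) + CALL_OVERHEAD * len(call_sites)
    for caller, callee in call_sites:
        if should_inline(caller, callee):
            total -= CALL_OVERHEAD  # the call instruction disappears
    return total

body_sizes = {"foo": 10, "bar": 5, "baz": 3}
call_sites = [("foo", "bar"), ("bar", "baz")]  # foo() -> bar() -> baz()

# Saying "yes" on both edges gives the smallest result.
print(inline_pass(body_sizes, call_sites, lambda c, e: True))   # 18
print(inline_pass(body_sizes, call_sites, lambda c, e: False))  # 22
```

A real inliner is far subtler, since inlining duplicates callee bodies and reshapes the graph as it goes, but the toy shows why the "yes/yes" path through foo() → bar() → baz() minimizes size.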

Before MLGO, a hand-written heuristic decided inline or no-inline, but over time such heuristics become hard to improve. MLGO substitutes the heuristic with an ML model.

During the traversal of the call graph, the compiler consults a neural network for advice on whether to inline a particular caller-callee pair, feeding in relevant features from the graph, and executes the decisions sequentially until the whole call graph is traversed.

MLGO trains the decision network with RL, using policy-gradient and evolution-strategies algorithms to gather information and improve the policy. The compiler consults the network for each inline/no-inline decision while inlining; each sequential decision has a state, an action, and a reward. When compilation finishes, the log of decisions is passed to the trainer to update the model, and the process repeats until a satisfactory model emerges.
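That gather-and-update loop can be sketched in Python in highly simplified form. This is illustrative only: the "policy" is a single threshold parameter, the per-site savings are invented, and the update is a bare-bones (1+1) evolution strategy rather than MLGO's actual TensorFlow-based trainer:

```python
import random

# Toy "call sites": each number is the code size saved (or lost, if
# negative) by inlining that site. Values are invented for the example.
CALL_SITES = [4, -3, 2, -1, 5, -2]

def episode(threshold):
    """One 'compilation': inline every site whose predicted saving exceeds
    the threshold, and return the reward (total size actually saved)."""
    return sum(s for s in CALL_SITES if s > threshold)

def train(episodes=2000):
    """Perturb the policy parameter, keep perturbations that earn a higher
    reward, and repeat until a satisfactory policy emerges."""
    threshold, best = 3.0, episode(3.0)
    for _ in range(episodes):
        candidate = threshold + random.gauss(0.0, 1.0)
        reward = episode(candidate)
        if reward > best:
            threshold, best = candidate, reward
    return threshold, best

random.seed(0)
_, best = train()
print(best)  # the best achievable reward here is 11: inline only the
             # three positive-saving sites
```

The real system replaces the scalar threshold with a neural network over call-graph features and measures reward from the actual compiled binary, but the shape of the loop, run episodes, log rewards, update the policy, repeat, is the same.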

The trained policy is then embedded into the compiler to provide inline/no-inline decisions during compilation. Unlike in the training scenario, the policy produces no log. The TensorFlow model is embedded with XLA AOT, which converts the model into executable code, avoiding TensorFlow runtime dependency and overhead and reducing the extra compile-time and memory cost of the ML model.

The policy was trained on a large internal software package containing 30k modules, and it generalizes: applied to compile other software, it still achieves a 3% to 7% size reduction. Generalizability across time also matters.

As the compiler and the software evolve, the policy must retain good performance for a reasonable amount of time.

Register Allocation (for performance):

MLGO also improves the register allocation pass, which improves code performance in LLVM. Register allocation assigns physical registers to live ranges, the intervals during which a value must be kept available.

When the code executes, different live ranges finish at different times, freeing up registers for later use. In the example, every "add" and "multiply" instruction requires all of its operands and its result to be in physical registers. The live range x is allocated to the green register and completes before the live ranges in the blue and yellow registers; once x finishes, the green register becomes available and is reassigned to another live range.

When live range q needs to be allocated, no register is available. The register allocation pass must therefore decide which live range to evict from its register to make room for q. This is known as the "live range eviction" problem, and it is the decision for which MLGO trains a model to replace the original heuristics. In the example, the policy evicts z from the yellow register, which is then assigned to q and the first half of z.

The second half of live range z remains unassigned. Next, live range t is evicted and split, and the first half of t and the final part of z use the green register. In the middle, at q = t * y, z is not assigned to any register, so its value must be saved to the stack from the yellow register and later reloaded into the green register; the same happens to t. This adds extra load and store instructions to the code and degrades performance. The register allocation algorithm aims to minimize such inefficiencies, and that measure is used as the reward to guide RL policy training.
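The eviction decision itself can be sketched with the kind of hand-written rule an ML policy would replace. This is a toy linear-scan-style model, not LLVM's actual regalloc; the ranges and the "evict the range that ends furthest away" rule are purely illustrative:

```python
def allocate(live_ranges, num_regs):
    """Greedy allocation over ranges sorted by start point.

    Each range is (name, start, end). Returns the set of spilled names.
    When no register is free, evict the active range that ends last --
    the classic hand-written heuristic an ML eviction policy would replace.
    """
    active, spilled = [], set()
    for name, start, end in sorted(live_ranges, key=lambda r: r[1]):
        # Retire ranges that ended before this one starts.
        active = [r for r in active if r[2] > start]
        if len(active) < num_regs:
            active.append((name, start, end))
        else:
            victim = max(active + [(name, start, end)], key=lambda r: r[2])
            spilled.add(victim[0])
            if victim[0] != name:
                active.remove(victim)
                active.append((name, start, end))
    return spilled

# Three overlapping ranges competing for two registers:
ranges = [("x", 0, 4), ("z", 1, 9), ("q", 2, 6)]
print(allocate(ranges, num_regs=2))  # {'z'}: z ends furthest, so it is evicted
```

MLGO's contribution is to replace the `max(..., key=end)` line, the fixed eviction rule, with a learned model that weighs many features of the competing live ranges.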

The register allocation policy was likewise trained on a large Google internal software package, where it achieved improvements of 0.3% to 1.5% in QPS (queries per second).

The bottom line:

MLGO is a framework for integrating ML techniques into LLVM, an industrial compiler, and it can be expanded in two directions. To make it deeper, add more features and apply better RL algorithms; to make it broader, apply it to more optimization heuristics.

Sunday 3 July 2022

Solid-state LiDAR Switches

Google released its first autonomous cars in 2010. The spinning cylinder mounted on top drew the most fame and attention by standing out so distinctly: it is the car's light detection and ranging (LiDAR) system, a kind of light-based radar. Combined with cameras and radar, LiDAR helps cars avoid obstacles and drive safely. Let's learn about solid-state LiDAR.

Since then, cameras and radar systems have become affordable and chip-based, while light detection and ranging navigation systems remain mechanical devices that cost a lot of money, especially for autonomous highway driving.

However, a new type of high-resolution solid-state LiDAR chip could make all of this easier. It was developed by Ming Wu, a professor of electrical engineering and computer sciences and co-director of the Berkeley Sensor and Actuator Center at the University of California, Berkeley. The new design appeared in the journal Nature on Wednesday, March 9.

The technology is based on a focal plane switch array (FPSA), a semiconductor-based matrix of micrometer-scale antennas that gathers light much as the sensor in a digital camera does. Its resolution of 16,384 pixels may not sound impressive compared with the pixel counts of mobile-phone cameras.

Design of solid-state LiDAR:

However, the design can scale up to megapixel sizes, according to Wu, because it uses the same complementary metal-oxide-semiconductor (CMOS) technology used to make computer processors. The result could be a new generation of powerful, affordable 3D sensors for drones, autonomous cars, robots, and even mobile phones.

LiDAR barriers:

The technology captures reflections of the light its laser emits. By measuring the time the light takes to return, or the change in beam frequency, it maps the environment and clocks the speed of objects moving around it.
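The time-of-flight principle behind that measurement is simple enough to show in a few lines of Python (a back-of-the-envelope illustration; the 667-nanosecond figure is just an example value):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_round_trip(t_seconds):
    """Light travels to the target and back, so halve the round trip."""
    return C * t_seconds / 2.0

# A reflection arriving 667 nanoseconds after the pulse left
# corresponds to a target roughly 100 meters away:
print(round(distance_from_round_trip(667e-9), 1))  # 100.0
```

The frequency-shift variant mentioned above (FMCW LiDAR) infers the same distance, plus velocity, from the beat between the outgoing and returning beams rather than from a timed pulse.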

LiDAR systems carry powerful lasers that can visualize objects hundreds of yards away, even in the dark, and they create high-resolution 3D maps that a car's artificial intelligence can use to distinguish vehicles, bicycles, pedestrians, and other hazards. Wu notes that the goal is not to illuminate a very large area at once: spreading the light that widely keeps it from traveling far enough. To maintain light intensity, the area illuminated by the laser must be kept small, and that is where the FPSA comes in.

The switch array consists of a matrix of tiny optical transmitters (antennas) and switches that turn them on and off rapidly, channelling all of the laser's power through a single antenna at a time.

MEMS switches of solid-state LiDAR:

Silicon-based LiDAR systems have generally relied on thermo-optic switches, which depend on large temperature changes to produce tiny changes in the refractive index and bend laser light from one waveguide to another.

Thermo-optic switches are large and power-hungry. Cramming too many onto a chip creates so much heat that the device cannot operate accurately, which is one reason FPSAs have been limited to 512 pixels or fewer.

Wu's solution is to replace them with microelectromechanical system (MEMS) switches.

He compares the construction to a freeway exchange: light traveling from east to west is turned 90 degrees when a ramp is lowered, letting it move from north to south.

MEMS switches are already used to route light in communications networks, so applying them to this system is natural. They are smaller than thermo-optic switches, use far less power, and switch faster.

When a pixel is powered on, its switch emits a laser beam and captures the reflected light. Each pixel covers 0.6 degrees of the array's 70-degree field of view. By cycling rapidly through the array, the FPSA builds a 3D picture of the world, and mounting several arrays in a circular configuration produces a 360-degree view around a vehicle.
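A quick sanity check on those numbers (a back-of-the-envelope calculation; the 128 × 128 layout is inferred from the 16,384-pixel figure, not stated in the paper):

```python
pixels = 16_384            # reported FPSA resolution
side = int(pixels ** 0.5)  # assuming a square array: 128 x 128
fov = 70.0                 # field of view, in degrees

print(side)                  # 128
print(round(fov / side, 2))  # 0.55 degrees per pixel, consistent with
                             # the roughly 0.6-degree figure quoted above
```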

Mobile cameras of solid-state LiDAR:

The professor wants to boost the FPSA's resolution and range before commercializing his system. Making the optical antennas smaller is a challenge, he said, but the switches are still large and can be made a lot smaller.

Conclusion:

The professor also wants to boost the solid-state LiDAR's range, which is currently only 10 meters; he believes the number could reach 100 meters or even 300 meters. He sees potential uses in vehicles, robots, vacuum cleaners, surveillance equipment, biometrics, and doors, among many other applications. Co-authors include Xiaosheng Zhang, Kyungmok Kwon, Johannes Henriksson, and Jianheng Luo of UC Berkeley.

Sunday 19 June 2022

VoxLens

In recent times, interactive visualizations have changed how we consume information. For instance, we can look up the number of coronavirus infections in every state. But people who use screen readers often can't access these graphics.

A screen reader is a software program that scans the contents of a computer display and makes them available through a synthesized voice.

Many Americans use screen readers, for a variety of reasons: complete or partial blindness, learning disabilities, or motion sensitivity.

VoxLens:

VoxLens is a JavaScript plugin that, with one additional line of code, enables screen-reader users to interact with visualizations. Using the plugin, you can get a high-level summary of the information described in a graph.

You can also hear a graph translated into sound, or use voice-activated commands to ask specific questions about the data.

Data visualizations are widespread on the Web. Experts and non-experts alike use them to explore and analyze simple and complex data and to extract details efficiently.

Visualizations rely on the human mind's ability to detect and interpret visual patterns. That visual nature can disenfranchise screen-reader users, who may not be able to see or recognize such patterns.

VoxLens is an open-source JavaScript plugin that offers screen-reader users a multi-modal solution with three interactive modes:

(1) Question-and-Answer mode: allows you to interact with the visualization yourself by asking questions about the data.

(2) Summary mode: the plugin describes a summary of the information contained in the visualization.

(3) Sonification mode: maps the data in the visualization onto a musical scale, so a listener can interpret the data's trend.
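The idea behind sonification can be sketched in a few lines of Python. This is an illustrative mapping, not VoxLens's actual implementation (VoxLens is JavaScript), and the 220–880 Hz pitch range is an assumption chosen for the example:

```python
def sonify(values, low_hz=220.0, high_hz=880.0):
    """Map each data point linearly onto a pitch range, so that rising
    data sounds like a rising melody."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on flat data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

# An upward trend becomes an ascending sequence of frequencies:
print(sonify([1, 2, 4, 8]))  # [220.0, ~314.3, ~502.9, 880.0]
```

Playing those frequencies in order lets a listener hear the trend, here an accelerating rise, without ever seeing the chart.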

The sonification feature of the plugin is open-source, supports other libraries, and is customizable, which reduces the burden on visualization creators.

To apply these accessibility features to a data visualization, the creator inserts a single line of JavaScript code during visualization creation.

Screen-reader users can then explore information with the plugin however they want, without depending on visualization creators or processing the data in their heads.

At present, the plugin works with visualizations built using JavaScript libraries such as D3, chart.js, or Google Sheets. The team wants to expand to other popular visualization platforms. The researchers also note that some people find voice-recognition systems frustrating to use.

Screen-reader users usually cannot access data visualizations even when a visualization includes common accessibility features such as alternative text or a data table. With those alone, they must remember and mentally process more details, for example when seeking the maximum or minimum value in a chart.

What did Katharina Reinecke say?

She is the co-senior author and a UW associate professor in the Allen School. According to her, this is part of a much bigger agenda for the team: when creating technology, people tend to think first of those who are like themselves, with similar abilities. Tools such as D3 have greatly improved how people understand information, so it is essential to start thinking more about how to make such technology accessible to everyone.

Major contribution:

These are a few of the major contributions:

  The plugin improves the accessibility of online data visualizations, meaning that screen-reader users can actually access them.

  Using the JavaScript plugin, screen-reader users can explore visualizations both holistically and in a drilled-down manner; the work presents its design and architecture, functionality, commands, and operations.

  In the evaluation, the plugin improved the accuracy of extracting information by 122% and reduced interaction time by 36% compared with not using VoxLens.

What is Voxlens?

VoxLens is an open-source JavaScript plugin that needs only a single line of code and offers voice-activated commands for screen-reader users.

Design:

We present the design and implementation of the JavaScript plugin, which improves the accessibility of online data visualizations.

It was built through a user-centered, iterative design process.

Holistic exploration covers the overall trend, extrema, labels, and the ranges of each axis, while drilled-down interaction examines individual data points. The name combines "vox" ("voice" in Latin) and "lens."

The plugin lets you explore, examine, and extract information from online data visualizations, though it is currently compatible only with two-dimensional single-series data.

Limitations & Future Work:

      The plugin is currently limited to two-dimensional data visualizations with a single data series.

      Future work will study the experiences of screen-reader users with n-dimensional data visualizations and extend the plugin's functionality based on the findings.

      For now, the plugin is fully functional only on Google Chrome, as it is the only browser that supports the speech-recognition feature of the Web Speech API.

      We hope to use alternatives to the Web Speech API in the future to provide cross-browser support for speech recognition.

Conclusion:

To assess the performance of VoxLens, we conducted task-based experiments and interviews with screen-reader users. The results show that screen-reader users considered VoxLens a "game-changer" that offers new ways of interacting with online data visualizations, saving both time and effort.

By open-sourcing the code for VoxLens and its sonification solution, we hope to continuously improve the accessibility of online data visualizations and to guide future research on making data visualizations accessible.