Monday, 8 July 2019

OLED vs. QLED TV: Which is Right for you, and why does it Matter?

These are golden days for TV technology. As the technology advances, so does consumer demand. So, OLED vs. QLED TV: which one is right for you? Let’s find out.

If you have casually shopped for a TV, you have likely been overwhelmed by acronyms like 4K, HD, UHD, HDR, and so on. All the major TV brands, such as Sony, LG, Samsung, and Panasonic, use them.

The two most prevalent terms you will encounter everywhere in today’s world of high-end TVs are OLED and QLED. Because the terms sound so similar, many people confuse the two technologies. In reality, they are extensively different from each other.

OLED vs. QLED TV: Let’s talk in detail! 


What is OLED? 


Let’s start with the basics: OLED stands for Organic Light Emitting Diode. OLED TVs are structurally different from normal LED-LCD TVs. In an OLED panel, each individual pixel emits its own light when fed electricity. There is no need for an LCD layer to display the image, hence the term "emissive display." Each OLED pixel is made up of three sub-pixels.

What is QLED? 


QLED (Quantum dot Light Emitting Diode) is the most advanced version of conventional LED-LCD technology. The problem with LED-LCD is backlight color, which varies widely from set to set. QLED solves that by using quantum dots, which act as a filter placed between the LED backlight and the LCD screen to enhance performance. These quantum dots are the reason for the better, more vivid, and more saturated color.

More on OLED vs. QLED TV


OLED: Pros


  • OLED is a technology in which a carbon-based film is placed between two conductors; when a current passes through, the film emits light. 
  • This is unlike a traditional LCD TV, which depends on a separate backlight to generate light. 
  • Many attempts have been made to eradicate light bleeding from a bright pixel into those around it, but no TV with a backlight has managed it. 
  • OLED panels are lighter and thinner than a typical LCD-LED structure. 
  • OLED TV viewing angles are much wider, and response times are quicker.
The prime disadvantage of an OLED TV is that it is significantly more expensive to manufacture than the alternatives. LG, the only producer of OLED panels, sells its panels to other companies to help bring the cost down.

QLED: Pros


  • Quantum dots have the potential to let every pixel emit its own light. 
  • These quantum dots give off bright and vibrant colors. 
  • Current QLED TVs, however, do not emit their own light; a backlight is passed through the quantum-dot layer.
Samsung is the only manufacturer that produces QLED technology. According to Samsung, next-gen quantum dots will have the ability to emit their own light. That will give the TV the ability to light up and turn off each pixel individually, just like OLED. 

OLED vs. QLED TV: Which is better for you? 


As we do not know how long next-gen QLED will take to arrive, OLED has the better image quality for now. In a standard LED TV set, the LEDs never turn off completely, so they use dark grays to approximate black. OLED pixels change color quickly and precisely, which makes gaming, sports, and action movies look more realistic. Viewing angles are also wider on an OLED TV. But if your room has ample ambient light, a QLED TV is no inferior choice. And most importantly, it is less expensive.

Tuesday, 2 July 2019

Artificial Intelligence: See What you Touch and Touch What you See

From childhood on, touch is and will always be a sort of language to us. The ability to touch and understand what we’re touching is something we’ve taken for granted since we were born. We don’t really think about it, but a robot programmed to touch or to see can do one of the two, not both together. So, to bridge that gap, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory, or CSAIL for short, have come up with an AI that can learn to see by touching and vice versa.

GelSight to Help with seeing by Touching and Touching by Seeing: 


The system at CSAIL works by creating tactile signals from visual inputs, which helps it predict which object, and which part of that object, is being touched. It does this using a tactile sensor called GelSight, courtesy of another group at MIT.

How GelSight Works: 


The team at MIT used a web camera to record nearly 200 objects being touched, not once or twice but nearly 12,000 times. Those 12,000 videos were then broken down into static frames, which became a dataset known as “VisGel”. VisGel includes more than 3 million visual/tactile images.
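The frame-pairing step described above can be sketched in a few lines. This is only a minimal illustration, not the actual VisGel pipeline; the names `Frame` and `build_pairs` are invented here. The idea is that each webcam frame is matched with the GelSight frame captured at the same moment of the same recording.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class Frame:
    video_id: int   # which of the ~12,000 recordings this frame came from
    index: int      # position of the frame within that recording
    kind: str       # "visual" (webcam) or "tactile" (GelSight)

def build_pairs(visual: List[Frame], tactile: List[Frame]) -> List[Tuple[Frame, Frame]]:
    """Pair each visual frame with the tactile frame captured at the
    same moment (same recording, same frame index)."""
    tactile_lookup = {(f.video_id, f.index): f for f in tactile}
    pairs = []
    for v in visual:
        t = tactile_lookup.get((v.video_id, v.index))
        if t is not None:
            pairs.append((v, t))
    return pairs
```

Running this over every recording would yield the kind of synchronized visual/tactile pairs the dataset is built from.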

Using AI, the robot learns what it means to touch various objects, as well as different parts of those objects. When touching things blindly, it uses the dataset it has been given to understand what it is touching and identify the object. According to the researchers, this greatly reduces the data needed for manipulating and grasping objects.
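To make the “touch blindly, then identify” idea concrete, here is a toy sketch. It is not the actual CSAIL model, which uses learned deep features; the tactile “signature” vectors and object labels below are made up for illustration. An unknown touch is classified by finding the nearest stored signature.

```python
import math
from typing import Dict, List

# Hypothetical tactile "signatures": tiny feature vectors standing in for
# the embeddings a real system would extract from GelSight images.
DATASET: Dict[str, List[float]] = {
    "mug":      [0.9, 0.1, 0.3],
    "sponge":   [0.2, 0.8, 0.7],
    "keyboard": [0.5, 0.4, 0.1],
}

def identify(touch: List[float]) -> str:
    """Return the label whose stored signature is closest
    (Euclidean distance) to the sensed touch vector."""
    def dist(a: List[float], b: List[float]) -> float:
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(DATASET, key=lambda label: dist(DATASET[label], touch))
```

A touch whose signature lands near the stored “mug” vector, for example, would be identified as the mug.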

The Work to Equip Robots with More Human Like Attributes: 


MIT’s 2016 project used deep learning to visually indicate a sound, and to enable a robot to predict responses to physical forces. Both of those projects are based on datasets that don’t help in guiding seeing by touching and vice versa.

So, as mentioned earlier, the team came up with the VisGel dataset and one more thing: Generative Adversarial Networks, or GANs for short.

GANs use the visual or tactile images to generate other possible images of the object. A GAN has two components, known as the “generator” and the “discriminator,” which compete with each other: the generator produces images of real-life objects to fool the discriminator, and the discriminator has to call the bluff. Whenever the discriminator catches a fake, the generator learns from it and raises its game.
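The generator/discriminator competition can be shown with a toy, pure-Python sketch. This is not the convolutional GAN used in the research; here the “data” is just numbers near 4.0, the generator is a single learned offset, and the discriminator is a one-variable logistic scorer, with gradients worked out by hand.

```python
import math
import random

random.seed(0)

def sigmoid(u: float) -> float:
    return 1.0 / (1.0 + math.exp(-u))

# "Real" data: samples clustered around 4.0. The generator must learn to
# produce samples that look like they came from this distribution.
REAL_MEAN = 4.0

# Discriminator D(x) = sigmoid(a*x + b): scores how "real" a sample looks.
a, b = 0.0, 0.0
# Generator G(z) = theta + 0.1*z: a single learned offset plus noise.
theta = 0.0

lr_d, lr_g = 0.05, 0.02

for _ in range(5000):
    real = REAL_MEAN + 0.1 * random.gauss(0.0, 1.0)
    fake = theta + 0.1 * random.gauss(0.0, 1.0)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(a * real + b)
    d_fake = sigmoid(a * fake + b)
    g_real = -(1.0 - d_real)   # gradient of -log D(real) w.r.t. its logit
    g_fake = d_fake            # gradient of -log(1 - D(fake)) w.r.t. its logit
    a -= lr_d * (g_real * real + g_fake * fake)
    b -= lr_d * (g_real + g_fake)

    # Generator step: adjust theta so the discriminator is fooled
    # (non-saturating loss -log D(fake); d(logit)/d(theta) = a).
    d_fake = sigmoid(a * fake + b)
    theta -= lr_g * (-(1.0 - d_fake) * a)
```

After training, `theta` has drifted from 0 toward the real data around 4.0: the generator’s fakes have become hard to tell apart from the real samples, which is exactly the bluff-and-call dynamic described above.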

Learning to See by Touching: 


As humans, we can see things and know exactly how they would feel if we touched them. To get machines to do the same thing, they first had to locate the position of a likely touch, and then understand how that exact location would feel when touched.

To do this, reference images were used, which allowed the system to see objects and their surroundings. After that, the robot arm with the GelSight sensor came into play: as it touched various areas, it recorded how each one felt into its database. Because of VisGel, the robot knew exactly what it was touching and where.