
Wednesday, 21 June 2017

GelSight Sensor Giving Robots a Sense of Touch

Innovative Technology – GelSight Sensor

Eight years ago, Ted Adelson's research group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled an innovative sensor technology known as GelSight, which uses physical contact with an object to provide a remarkably detailed 3-D map of its surface.

Two MIT teams have now mounted GelSight sensors on the grippers of robotic arms, giving the robots greater sensitivity and dexterity. The researchers recently presented their work in two papers at the International Conference on Robotics and Automation.

In one paper, Adelson's group used data from the GelSight sensor to enable a robot to judge the hardness of the surfaces it touches, a crucial ability if household robots are to handle everyday objects. In the other, Russ Tedrake's Robot Locomotion Group at CSAIL used GelSight sensors to enable a robot to manipulate smaller objects than was previously possible.

The GelSight sensor is, in some ways, a low-tech solution to a difficult problem. It consists of a block of transparent rubber, the "gel" of its name, one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object's shape.

GelSight Sensor: Easy for Computer Vision Algorithms

The metallic paint makes the object's surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor, opposite the paint-coated face of the rubber block, are three colored lights and a single camera.

Adelson, the John and Dorothy Wilson Professor of Vision Science in the Department of Brain and Cognitive Sciences, explains that the system has colored lights shining in at different angles onto the reflective material, and by viewing the resulting colors, the computer can work out the 3-D shape of whatever the sensor is pressed against.
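The reconstruction Adelson describes is a form of photometric stereo: with three lights of known direction shining on a uniformly reflective surface, the R, G, B intensities observed at a pixel pin down the surface normal at that point. Below is a minimal sketch of the idea, assuming an idealised Lambertian surface and illustrative light directions, not the actual GelSight calibration:

```python
import numpy as np

# Directions of the three colored lights (unit-ish vectors); illustrative values only.
L = np.array([
    [0.8, 0.0, 0.6],    # red light
    [-0.4, 0.7, 0.6],   # green light
    [-0.4, -0.7, 0.6],  # blue light
])

def surface_normal(intensities):
    """Recover a surface normal from one pixel's R, G, B intensities.

    Assumes a Lambertian surface: I_k = albedo * dot(l_k, n).
    Solving the 3x3 linear system gives albedo * n; normalizing yields n.
    """
    g = np.linalg.solve(L, np.asarray(intensities, dtype=float))
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g

# A pixel lit equally by all three lights faces straight up: n near (0, 0, 1).
n = surface_normal([0.6, 0.6, 0.6])
```

Integrating such per-pixel normals across the whole image then yields the 3-D height map of the pressed surface.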

In both groups' experiments, a GelSight sensor was mounted on one side of a robotic gripper, a device somewhat like the head of a pair of pincers but with flat gripping surfaces instead of pointed tips.

For an autonomous robot, gauging the hardness or softness of objects is essential not only for deciding where and how hard to grasp them but also for predicting how they will behave when moved, stacked, or laid on different surfaces. Physical sensing also helps robots distinguish objects that look identical.

GelSight Sensor: Softer Objects – Flatten More

In earlier work, robots attempted to assess hardness by laying objects on a flat surface and gently jabbing them to see how much they give. But this is not how humans gauge hardness. Instead, our judgment relies on the degree to which the contact area between the object and our fingers changes as we press on it.

Softer objects flatten more, increasing the contact area. The MIT researchers used the same approach. Wenzhen Yuan, a graduate student in mechanical engineering and first author on the paper from Adelson's group, used confectionery molds to create 400 groups of silicone objects, with 16 objects in each group.

Within each group, the objects had the same shape but different degrees of hardness, which Yuan measured with a standard industrial scale. She then pressed a GelSight sensor against each object and recorded how the contact pattern changed over time, producing a short movie for each object.

To standardize the data format and keep the data size manageable, she extracted five frames from each movie, evenly spaced in time, describing how the object deformed as it was pressed.

Changes in Contact Patterns/Hardness Measurements

Finally, the data was fed to a neural network that automatically looked for correlations between changes in contact patterns and hardness measurements. The resulting system takes frames of video as input and produces hardness scores with high accuracy.
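The pipeline described above, five evenly spaced frames in, one hardness score out, can be sketched as follows. The learned network is stood in for by a toy linear scorer, since the paper's actual architecture and weights are not given here; the frame count matches the article, but the resolution and weights are illustrative:

```python
import numpy as np

FRAMES, H, W = 5, 32, 32  # five evenly spaced frames per press; toy resolution

def extract_frames(video):
    """Pick five evenly spaced frames from a press sequence of shape (T, H, W)."""
    idx = np.linspace(0, len(video) - 1, FRAMES).astype(int)
    return video[idx]

def hardness_score(frames, weights, bias=0.0):
    """Toy stand-in for the learned regressor: a linear map from the
    stacked contact frames to a single hardness score."""
    x = frames.reshape(-1)  # flatten the (5, H, W) stack to one vector
    return float(x @ weights + bias)

rng = np.random.default_rng(0)
video = rng.random((30, H, W))      # a fake 30-frame press movie
frames = extract_frames(video)      # shape (5, 32, 32)
score = hardness_score(frames, rng.random(FRAMES * H * W))
```

In the real system, the linear scorer would be replaced by a trained network regressing the measured hardness values.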

Yuan also conducted a series of informal experiments in which human subjects palpated fruits and vegetables and ranked them by hardness. In every case, the GelSight-equipped robot arrived at the same rankings.

The Robot Locomotion Group's paper grew out of the group's experience with the Defense Advanced Research Projects Agency's Robotics Challenge (DRC), in which academic and industry teams competed to develop control systems that would guide a humanoid robot through a series of tasks related to a hypothetical emergency.

An autonomous robot typically uses some type of computer vision system to guide its manipulation of objects in its environment. Such systems can provide reliable information about an object's location, until the robot picks the object up.

GelSight Sensor: Live-Updating/Accurate Estimation

If the object is small, most of it will be occluded by the robot's gripper, making location estimation far more difficult. Thus, at precisely the point where the robot needs to know the object's exact location, its estimate becomes unreliable.

This was the problem the MIT team faced during the DRC, when their robot had to pick up and turn on a power drill. Greg Izatt, a graduate student in electrical engineering and computer science and first author on the new paper, noted that in the DRC video you can see the team spending two or three minutes turning on the drill.

It would have been much better, he said, to have a live-updating, accurate estimate of where the drill was and where their hands were relative to it. That is why the Robot Locomotion Group turned to GelSight. Izatt and his co-authors, Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering; Adelson; and Geronimo Mirano, another graduate student in Tedrake's group, designed control algorithms that use a computer vision system to guide the robot's gripper toward a tool and then hand location estimation over to a GelSight sensor once the robot has the tool in hand.

Wednesday, 31 May 2017

This Artist Has a Classroom of Robots That Chat, Count and Draw Portraits

Twenty robot students are hard at work in a uniquely designed classroom near Southwark station in London. To talk to each other, they use a language inspired by Morse code. While they are chattering, their robot teacher asks them to settle down and begins to take the register. Once every robot's presence has been recorded, the day's class begins, and the robots dutifully learn to count by tallying, drawing lines in their notebooks.

The artist Patrick Tresset included this robot classroom in his latest exhibition, Machine Studies. His robots consist of a camera and a pen held by a robotic arm, controlled by a laptop concealed in a traditional school desk that serves as the robot's body. Inspired by Tresset's own schooldays in France, the robot class completes a whole range of activities in Human Study #4.

Robots Displaying Human Traits and Performing Human Functions

The robot students' actions are synchronised, yet each robot moves in its own way. Tresset programmed the robots to display various behavioural traits, such as uneasiness or timidity. Some robots appear to take an active part in the task allotted to them, whereas others work a little slower and more nervously than the rest. Tresset says his study is more about observing human nature than about technology, and focuses on how we can make robots more human.

In another work, Human Study #1 3RNP, three robots wait with pens, ready to draw portraits of the humans sitting in front of them. Over a span of 30 minutes, the cameras, or "heads", are raised to view the subject, and the robots sketch frantically, stopping every once in a while to look at their composition. Tresset has programmed each robot to roughly imitate his own style of drawing, but not fully, leaving room for the robot to develop its own style. As a result, Tresset says, he cannot foresee what the final portraits will look like.

Robots Being Involved In Artistic Exhibitions

The exhibition is part of the MERGE Festival held in London's Bankside district. Donald Hyslop, head of community partnerships at the Tate Modern and chair of Better Bankside, says the whole point of the festival is not to limit art to museums but to extend it into new contexts within a community. One doesn't need to visit Berlin or Lisbon to experience interesting industrial spaces, he says; they can be found in this part of London, in Bankside, with its many hidden spaces. Tresset's work is on display at Platform Southwark.

Angie Dixon, project and production manager at Illuminate Productions, curates the festival and says visitors are always keen to have their portraits drawn by Tresset's robots. She had her own portrait drawn back in 2012 by an earlier version of the robots. At that time they could not distinguish between dark and light skin, so her portrait came out looking like scratches on paper.

Nevertheless, she says she was not disappointed; it was an interesting experience. Tresset says robots cannot be considered a threat to human artists for now. His robots sign their creations, yet he counts himself as the author. He is currently working with machine learning and says he eventually wants his robots to improvise and create their own style.

Monday, 22 May 2017

Parasitic Robot Controls Turtle it’s Riding by Giving it Snacks

Developments in the Field of Robotics

Although great strides have been made in robotics in recent years, the use of robots still has limitations. These include their limited ability to withstand the wear of routine operation and the need for a continuous energy source that does not require recharging.

Nature, by contrast, has shown remarkable flexibility and adaptation to changing conditions over millions of years, and this has motivated a team of researchers to combine nature's adaptability with robots. The team's latest experiments show that robots can be used to steer turtles through a strange parasitic relationship between the two.

Before becoming overlords of people, robots have begun by controlling turtles. By first getting the reptiles to associate a red light with food, shell-mounted robots can dictate where a turtle moves in a tank, creating a somewhat strange parasitic relationship. Building robots rugged enough to survive the rigours of daily life is a constant fight, as is the puzzle of supplying them with enough energy to avoid long hours of recharging. Nature manages both with ease.

Evolution Resulted in Unbelievable Variety of Effective Methods

Millions of years of evolution have produced an incredible variety of efficient ways for animals to move, and researchers at the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon set about harnessing this. First, robots were glued to the backs of five red-eared slider turtles. Each robot comprised a processor, a frame that stuck out in front of the turtle's head holding five red LEDs spaced apart, and a food-ejecting tube.

The robots then had to ride their turtles through five checkpoints in a water-filled tank. The turtles had first been conditioned to associate a lit LED with food; after that, the robots simply guided them using the LEDs, dispensing snacks as a reward for moving in the correct direction.
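The guidance scheme can be sketched as a simple control loop: light the LED closest to the direction of the next checkpoint, and eject food when the turtle's heading is roughly correct. The LED angles, tolerance, and geometry below are illustrative assumptions, not the KAIST team's actual parameters:

```python
# Angles of the five LEDs relative to the turtle's head, in degrees (illustrative).
LED_ANGLES = [-60, -30, 0, 30, 60]

def heading_error(turtle_heading, target_bearing):
    """Signed angular error, wrapped to [-180, 180) degrees."""
    return (target_bearing - turtle_heading + 180) % 360 - 180

def pick_led(turtle_heading, target_bearing):
    """Choose the LED whose direction best points toward the next checkpoint."""
    error = heading_error(turtle_heading, target_bearing)
    return min(range(len(LED_ANGLES)), key=lambda i: abs(LED_ANGLES[i] - error))

def reward_due(turtle_heading, target_bearing, tolerance=15):
    """Eject a snack when the turtle is heading roughly the right way."""
    return abs(heading_error(turtle_heading, target_bearing)) <= tolerance

# Turtle facing 90 degrees, checkpoint at bearing 120: light the +30 degree LED.
led = pick_led(90, 120)
```

Repeating this loop, with the lit LED leading the turtle and food reinforcing correct turns, is enough to steer the animal from checkpoint to checkpoint.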

Using this procedure, all five robot-turtle pairs completed the course successfully, and each got faster with training. Dae-Gun Kim at KAIST commented that plenty of other animals could also give robots a ride, and that it should be possible to apply the approach to species such as fish and birds, depending on the task.

Harnessing the Motion of Host Animals

In the near future, Kim and his colleagues also want to harness some of the host animal's motion to power the robot. Nathan Lepora at the University of Bristol, UK, said such robots could be used for surveillance, exploration, or anywhere that is hard for humans or robots to reach on their own.

Insects have previously been controlled using electrodes and radio antennas wired into their nervous systems, and a similar approach could give parasitic robots direct control over their hosts. Lepora commented that there are definite ethical considerations, though if robots and animals could team up to explore a disaster area, it could be really useful.

Tuesday, 2 May 2017

Tech United Builds New, Superfast and Really Strong Soccer Robot

 Soccer Robot
The 2016 robot soccer world champions have built a genuinely robust and powerful eight-wheeled robot platform. "These powerful new platforms are going to rock the playground at the imminent Portuguese Robotics Open," the sources said. For robotics enthusiasts, the eagerly anticipated news is that the robust new robot will make its first appearance with the team at the World RoboCup in Nagoya. A remarkable detail is that the robot's undercarriage will also perform autonomous heavy-duty work in hospitals.

The soccer robot team may have been crowned world champions last year, but the specialists identified a couple of weaknesses: their main rivals were slightly faster and managed to shove the Eindhoven robots aside a little too easily. The team therefore joined forces with the Drunen-based company SMF Ketels to develop an entirely new platform with eight wheels instead of the existing three. The well-balanced eight-wheel system not only gives the robot extra power and speed, it also makes it far more stable and steerable at high speeds.
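An omnidirectional base like this maps a single desired body velocity onto speeds for each wheel. Here is a minimal sketch, assuming eight omni wheels mounted tangentially and evenly spaced on a circle; the real platform's geometry and wheel type are not described in the article, so all parameters are illustrative:

```python
import math

N_WHEELS = 8     # eight wheels evenly spaced around the base (illustrative)
RADIUS = 0.25    # mounting circle radius in metres (illustrative)

def wheel_speeds(vx, vy, omega):
    """Map a desired body velocity (vx, vy in m/s; omega in rad/s) to the
    tangential drive speed of each wheel, for omni wheels mounted
    tangentially on a circle of radius RADIUS."""
    speeds = []
    for k in range(N_WHEELS):
        theta = 2 * math.pi * k / N_WHEELS       # wheel mounting angle
        # The drive direction at this wheel is the tangent (-sin t, cos t).
        speeds.append(-math.sin(theta) * vx + math.cos(theta) * vy
                      + RADIUS * omega)
    return speeds

# Pure rotation: every wheel drives at the same tangential speed.
s = wheel_speeds(0.0, 0.0, 2.0)
```

With more wheels than the three degrees of freedom require, the platform is over-actuated, which is one way such a base gains the stability and controllability at high speed described above.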

Football robots are, in fact, an ideal learning tool for teaching students concepts of computer vision and automation. This year's challenge brings two major innovations: the game is played on artificial grass rather than carpet, and it can be played next to large window surfaces, so sunlight can fall directly onto the field. The artificial grass is a considerable challenge for the robots' movement, and the variable lighting demands new solutions for camera exposure control and image processing.

Other enhancements that should help Tech United to a fourth world title this year include the software architecture. This was overhauled last year to let the robots respond better to opponents and to the game environment. Team coach Lotte de Koning said they will be trying it out this year, first during dead-ball situations such as free kicks, and later for the entire game. It will take a couple of years before it has matured.

Over the past year the robots have become smarter in defence and at intercepting passes. Since the computer in each robot's head is about as powerful as an average smartphone, it is a challenge to keep the code as compact as possible. Despite the limited computing power, the software must efficiently manage the complex behaviour and perception of these wireless, autonomous robots. The speed at which a kicker moves across the field is also programmed.

The robust eight-wheel undercarriage is the result of the European Ropod project, in which TU/e participates alongside SMF Ketels, the Hochschule Bonn-Rhein-Sieg and KU Leuven, among others. Ropod's goal was to develop affordable, user-friendly robot carts that can autonomously and flexibly carry out transport tasks in hospitals, such as moving hospital beds.

The new machine will be tried out for the first time at this year's opening competition, the Portuguese Robotics Open from April 26 to 30. The team will be watching whether the robot can operate autonomously, and it will also become clear exactly how fast the machine is. The team expects it to be about four times faster than its predecessor.

Friday, 28 April 2017

Controlling a Robot is Now as Simple as Point and Click

Robots are on the rise: they ride, fly, swim or run on two or more legs. They work in factories and are used in war zones and disaster areas. Soon they will have conquered the household: keeping the apartment clean, serving party guests, or looking after the grandparents. Even toys are leading lives of their own. Every day, the field of robotics grows more advanced.

Recently, a group of researchers at the Georgia Institute of Technology developed a new interface that reduces controlling a robot to a simple point and click. The traditional interface for remotely operating robots works just fine for roboticists: they use a computer to control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.

But that interface is cumbersome and error-prone for older people, or for those without technical training, when they try to operate assistive personal robots.

The new interface is much simpler and more efficient, and requires no significant training period. The operator just points and clicks on an object, then selects a grasp, and the robot does the rest of the work itself.

Sonia Chernova, the Catherine M. and James E. Allchin Early-Career Assistant Professor in the School of Interactive Computing, says that instead of a sequence of rotations, lowering and raising arrows, adjusting the grasp and gauging the exact depth of field, they have reduced the procedure to just a couple of clicks.

Her students found that the point-and-click mode resulted in significantly fewer errors, allowing participants to complete tasks faster and more reliably than with the traditional method.

The traditional ring-and-arrow system is a split-screen process. The main screen shows the robot and the scene it operates in; the second is a 3-D interactive view in which the operator adjusts the virtual gripper and tells the robot exactly what to do. This method makes no use of scene information, giving operators maximum control and flexibility, but that choice, and the size of the display, can become a burden and increase the number of errors.

Controlling a robot via the point-and-click setup involves no 3-D mapping. It provides only the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an image, the robot's perception system analyses the object's 3-D surface geometry to determine where the gripper should be placed. It is analogous to the way we place our fingers in the correct positions to grip something. The computer then proposes a few grasps; the user approves one, and the robot gets to work.
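The first step after the click, turning a pixel into a 3-D point, is standard pinhole-camera deprojection using the depth at that pixel and the camera intrinsics. A minimal sketch with illustrative intrinsics follows; the Georgia Tech system's actual perception stack goes further, fitting grasp poses to the local surface geometry around this point:

```python
def deproject(u, v, depth, fx, fy, cx, cy):
    """Turn a clicked pixel (u, v) with depth z (metres) into a 3-D
    camera-frame point using the pinhole model:
    x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Illustrative intrinsics: focal lengths and principal point of the camera.
fx = fy = 600.0
cx, cy = 320.0, 240.0

# A click at the image center lands on the optical axis, 1.5 m out.
p = deproject(320, 240, 1.5, fx, fy, cx, cy)
```

The grasp candidates are then generated around this 3-D point, which is why the user never needs to reason about depth or rotation explicitly.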

The system also reasons about geometric shapes, making assumptions about small regions the camera cannot see, such as the back of a bottle. To do this, the researchers leverage the robot's ability to do the same thing, making it possible to simply tell the robot which object we would like picked up.

Friday, 31 March 2017

Printable Sensor Laden Skin for Robots

Gold Bug robot skin
According to several studies, this decade has seen more technological advancement than the previous 50 years. Radical changes in technology have made this the age of the tablet computer and the smartphone. It is the era of touch-sensitive surfaces, and they are fragile, as anyone with a cracked smartphone screen can attest. While touch-sensitive phones and TVs are feasible, covering a bridge, an airplane or a robot with sensors would require technology that is both cheap and flexible to manufacture.

Creation of a new device 

However, scientists are known for their ingenuity and unending endeavours to create something new. A group of researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has shown that 3D printing could make this possible. The researchers set out to demonstrate the feasibility of printable, flexible electronics that combine processing circuitry and sensors. Remarkably, they were able to create a device that reacts to mechanical stress by changing its surface colour.

The scientists took their inspiration from the "goldbug", more commonly known as the golden tortoise beetle, which changes colour from gold to red when prodded or poked. In both the beetle and the new device, the reaction is caused by mechanical stress. MIT graduate student Subramanian Sundaram explains that the network of interconnects and sensors is called a sensorimotor pathway. Sundaram, who led the project, said their attempt was to replicate sensorimotor pathways inside a 3D-printed object. So, to make their vision achievable, they considered modelling the simplest organism they could find.

Several scientists made it possible 

The researchers presented their concept and design in Advanced Materials Technologies. Sundaram is first author on the paper; his senior co-authors include EECS professor Marc Baldo and associate professor Wojciech Matusik. Others on the paper include David Kim, a technical assistant in MCFG; Ziwen Jiang, an EECS student; and Pitchaya Sitthi-Amorn, a former postdoc. Depositing flexible circuitry on a plastic substrate has been a major area of printable-electronics research for decades. According to Sundaram, printing the substrate itself greatly increases the range of devices that can be made.

However, he also says that the choice of substrate limits the materials that can be deposited on it. That is because the printed substrate is created by combining different materials, interlocked in complicated but regular patterns. Hagen Klauk, a scientist at the Max Planck Institute, is impressed by the work. In his view, printing an optoelectronic system, with all its components and the substrate, by depositing all the liquids and solids is useful, interesting and novel, and the demonstration of a functional system shows the approach is viable. It could even allow improvised manufacturing in environments where dedicated substrate materials are not available.

Thursday, 30 March 2017

Is Robotics a Solution to The Growing Needs of the Elderly?

Robots – Nurses/Caretakers for Elderly

At the reception of the Institute of Media Innovation at Nanyang Technological University, Singapore, you will find a smiling brunette receptionist called Nadine. At first there is nothing unusual about her appearance, but on closer scrutiny you realise she is a robot. Nadine is an intelligent robot capable of autonomous behaviour, and for a machine her appearance and behaviour are remarkably natural.

She can recognise people and their emotions, and draws on her knowledge database and reasoning to communicate. The IMI team is still fine-tuning her receptionist skills, but soon Nadine could be a nurse for your grandma. Research into the use of robots as nurses or caretakers has been growing, and it is not hard to see why: the global population is ageing, putting a strain on healthcare systems.

While many elderly people over 80 may simply need a companion to chat with, or someone to look after them in case they stumble and fall, more and more of the elderly suffer from serious disorders such as dementia.

Quality Care – Needs of Elderly

Several experts believe robots could be the solution for providing much-needed quality care to the elderly. Nadine was designed by a team headed by Prof Nadia Thalmann, who has worked on virtual-human research for many years. Nadine has existed for three years.

According to Prof Thalmann, she has human-like abilities to recognise people and emotions while also remembering them. She automatically adapts to the person and the situation she is dealing with, which, Prof Thalmann says, makes her perfectly suited to looking after the elderly.

The robot can also monitor a patient's wellbeing and call for help in an emergency. Beyond that, she can chat, read stories, or even play games. Prof Thalmann commented that the humanoid is never tired or bored, and will simply do what it is dedicated to.

IBM Multi-Purpose Eldercare Robot Assistant

Nadine is not perfect, however: she has trouble understanding accents, and her hand coordination is not the best. Nonetheless, Prof Thalmann believes robots could be caring for the elderly within 10 years. US technology giant IBM is engaged in robo-nurse research with Rice University in Houston, Texas, which has developed the IBM Multi-Purpose Eldercare Robot Assistant, or Mera.

Mera can reportedly monitor a patient's heart rate and breathing by analysing video of their face. It can also detect whether the patient has fallen and relay the information to caretakers. But not everyone would be ready for a robot caretaker, admits Susann Keohane, IBM's global research leader for its strategic initiative on aging.

That view is supported by research from Gartner, which found "resistance" to the use of humanoid robots in elderly care. Kanae Maita, principal analyst in personal technologies innovation at Gartner Research, commented that people are not comfortable with the idea of their parents being cared for by robots, despite evidence that it provides value for money.

Tuesday, 28 March 2017

A Future With Robots as Companions Could Be Closer Than You Think

Robot Companions
The 2012 movie Robot & Frank depicted a near future in which we spend our golden years living "co-dependently independently", aided by robot companions that watch for loss of balance and falls, encourage us to perform constructive household chores such as gardening, and act as something like best friends. That future could remain speculation, or it could be what USC Professor Maja Mataric has in mind. After years of testing and research into the positive effects socially assistive robots can have on vulnerable populations, Mataric envisions how to speed this technology into living rooms and care facilities.

As a first step, the vice dean of research at the USC Viterbi School of Engineering and director of the USC Interaction Lab co-founded the Pasadena company Embodied Inc., which has been working to develop and bring affordable socially assistive robots to market. The first Embodied robot is due to begin consumer testing this year. Mataric, a professor of computer science, neuroscience and paediatrics at the USC Viterbi School of Engineering, says she wants to take the robots out of the lab.

Motivating/Brightening Days of Patients 

She added that this is especially important because the users she focuses on have special needs, and the sooner the technology can help them, the better. Mataric's research robots have already been motivating and brightening the days of patients in the cardiac ward of the Los Angeles County + USC Medical Center, the autism clinic at Children's Hospital Los Angeles, and the Alzheimer's care unit at Silverado Senior Living, as well as in special-education and public elementary schools and Be Group retirement homes.

The living presence of a robot, its "embodiment", is the key difference between the assistive technology Mataric has designed and the various screens that fill our machine-aided lives. Her research has shown that the presence of a human-like robot (though not so human-like as to become unsettling) can be enough to encourage elders to exercise in a way they would not with on-screen prompts. It can also serve as a conversation piece that helps children with autism relate better to their peers.

Social Element – Make People Change Behaviour

Mataric told a recent national gathering of the American Association for the Advancement of Science (AAAS) that the social element is the one thing that makes people change their behaviour. It makes them lose weight and recover faster.

Screens may be making us less social, not more, and that is where robotics, with its fundamental embodiment, could make a difference. Mataric's AAAS appearance marked something of a milestone, coming almost 10 years after she was inducted as a fellow and 20 years after she started at USC.

Socially assistive robotics, which Mataric created with her then graduate researcher David Feil-Seifer 12 years ago, represents a hybrid of two other areas of robotics: assistive robotics, in which robots help the disabled through direct physical interaction, and social robotics, in which robots socialize with people.

Monday, 27 March 2017

Furhat Robot Eavesdrops on Men and Women to See How Much They Talk

Robotic systems created by modern scientists are truly amazing: complicated tasks are accomplished with ease, thanks to the intelligence of the scientists who program them. One of the latest innovations in the field is a robot equipped with the capacity to eavesdrop. When installed in a room, it can observe how the people present interact with each other.

Different levels of conversation among different people-

As for the robot's appearance, the researchers have covered its head with a fur-lined hat. Many experimental studies have been conducted with this robot head, named Furhat, to measure differences in how much each person participates while working on a shared task, and to find out whether the robot can help maintain a balance. The analysis showed that when two women are paired, they talk a great deal; when a woman is paired with a man, she speaks much less. A pair of two men does not converse more than a pair of two women. These patterns hold only for adults; the same effect is not seen with teenage girls and boys, for whom sex does not appear to be an important factor.

How the experiment was done with the Furhat robot-

The Furhat robot interacted with more than five hundred people in an experiment that ran for nine days. Two people at a time sat at a table, with a touchscreen placed on the opposite side, and played a game in which virtual cards had to be sorted.

The Furhat robot interacted with them during the task, and its sensors recorded how long each person spoke to the other. Female pairs conversed for almost 45% of the available time, while for male pairs the figure was 26%.
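A measurement like this – each pair's share of the available time spent talking – can be sketched in a few lines. Everything below (the segment format, the function name, the numbers) is hypothetical for illustration; the article does not describe Furhat's actual data pipeline.

```python
# Sketch: compute a pair's share of talking time from hypothetical
# (start_seconds, end_seconds) speech segments logged by a sensor.

def speaking_fraction(segments, session_seconds):
    """Fraction of the session during which the pair was speaking."""
    talked = sum(end - start for start, end in segments)
    return talked / session_seconds

# A hypothetical 10-minute (600 s) session for one pair.
segments = [(12, 95), (110, 180), (200, 340), (400, 417)]
print(round(speaking_fraction(segments, 600), 2))  # -> 0.52
```

Comparing this fraction across female–female, male–male and mixed pairs is, in essence, the comparison the researchers report.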

In pairs of a child and an adult, the adult conversed more, and the gap widened when a man was paired with a girl. When it was the Furhat robot's turn to speak, it behaved somewhat randomly. At this point, one begins to see how a robotic system can affect conversation.

The complete research and observations were presented at a conference in Austria. Similar trials had previously been done only in the laboratory, which makes results obtained in a natural setting all the more exciting. The outcomes may vary across cultures, but the Furhat robot's effect on conversations could help improve educational settings and support behavioural change in a person.

Monday, 20 March 2017

Robots, Exoskeletons and Invisible Planes

Our bodies can be compared to weak machines equipped with fragile bones and sinews. With the invention of electrically powered exoskeletons, we can now imagine outfitting humanity with robosuits designed to provide strength. Another technological invention can make an airplane invisible both to the eye and to radar. Recently, the Defense Advanced Research Projects Agency (DARPA) discussed these high-tech innovations, which are intended for the military.

DARPA director Steven Walker has said that before starting any project, the agency asks whether it could bring about a significant transformation in the world. In the 1960s, the agency planned to link computers together to enable better communication; the resulting ARPANET preceded the internet we use today. Its researchers now aim to use revolutionary technologies to help soldiers.

Exoskeletons give more comfort to soldiers-

When it comes to national-security technologies, DARPA occupies a prominent position, and robots and invisible airplanes are widely seen as key to a country's defense. At present, DARPA is engaged in a project to create an intricately programmed exoskeleton that could turn a fighter into a far more powerful soldier.

Soldiers often need to march long distances carrying heavy equipment or packs. The lightweight, soft exoskeleton reduces the effective load on a soldier's body. The system uses motor-driven cables to provide mechanical assistance, so the wearer's muscles do not need to expend as much energy.

The exoskeleton was created by Harvard University researchers under a contract with DARPA, and the model is now going through performance testing. Soldiers put the prototype on beneath full battle gear and walk a course of about 3 miles while technicians measure the soldiers' stride length, muscle activity and energy use. The main objective is to help soldiers walk farther while carrying heavy loads with limited effort.

DARPA has also contributed a great deal to the invisible airplane. According to the agency's director, those who have worked there have had the chance to implement their own ideas; a former director worked with the Air Force on the development of the first stealth aircraft.

Other projects for the benefit of fighters-

DARPA has many other grand projects. For instance, because many soldiers lose arms in war, its scientists have created prosthetic arms, one of which has been approved by the FDA; the arm can be stretched and bent. A mechanical arm controlled through a connection to the brain's cortex is also under development.
Thus, military technologies may amaze all of us in the future.

Friday, 3 February 2017

New Wave of Robots Set to Deliver the Goods

Ever thought about robots doing your everyday jobs while you relax all you want? That dream may not be far from reality, as machines are being built that can deliver goods from markets to your doorstep. This unique and groundbreaking idea is the new project of Starship Technologies, which is working to create automated six-wheeled systems that can deliver groceries, parcels and prepared foods to consumers.

This entrepreneurial venture was created by two founders of the popular video-calling software Skype, Ahti Heinla and Janus Friis. Starship Technologies has already begun testing these robots in various European countries.

How does the delivery system work? 

These automated robots can deliver light goods within a radius of 3 kilometers (2 miles). Delivery charges are kept to a dollar or less, and delivery is completed within 15 to 30 minutes of the order. The robot avoids main streets, moving only on sidewalks, and consumers receive a smartphone-app notification alerting them to the arrival of their goods. Starship intends to provide an easy delivery system so that elderly and handicapped people do not have to move around much. Delivery by robots also means fewer cars and vans on the roads, which can have several beneficial effects.

How are they beneficial? 

Several retail giants like Amazon have used drones to deliver products. These delivery robots, however, are less expensive to build and maintain and do not face as many regulatory issues as drones do. Although the robot moves at four miles (six kilometers) per hour, a lot slower than a drone, it provides a more economical and efficient delivery system. The delivery robot may have an advantage in urban areas, while drones are better suited to rural and remote areas.

The Starship delivery robots are small but can carry loads of over 20 pounds (9 kilograms). One advantage they provide is fast delivery, so they do not need chilling or heating compartments, according to Starship Technologies spokesperson Harris-Burland. Customers take the items from the robot themselves, as it cannot drop off or leave the items.

The science behind Starship robots

So how does this clever little robot make its way through crowded city streets right to your doorstep? Through visual localization, which its spokesperson calls a strong point for Starship. Each robot is equipped with nine high-definition cameras that give it a real-time map of its surroundings, allowing it to avoid obstacles and stay on its path. Mapping sidewalks may be a new and unique idea, and all of this is handled by artificial intelligence fitted inside each Starship delivery bot. The lid remains locked unless the customer opens it via the app, ruling out theft and vandalism.

Robots and Drones Take Over Classrooms

The future of education is set to go through a massive change with the deployment of interactive boards, laptops, VR gadgets and online learning plans. It has already been said that this generation of kids is getting a very different kind of education, in a different medium, than their parents or grandparents received. Artificial intelligence is breaking new ground while robotics has gone through a rapid phase of development, making it easier to bring into classrooms with more confidence than before.

Robots & drones in the schools

In September last year, London Design & Engineering University Technical College offered over 180 pupils a technology-based education experience. Its 12-week curriculum let kids experience education based not on the traditional chalkboard pattern but on technology. One group of students was asked to design their own virtual-reality environment from scratch, offering a journey to an Ethiopian village; this was used to highlight the need for charity for WaterAid.

A number of primary schools are convinced of the need to start teaching coding at a younger age. As a result, a number of after-school code clubs have emerged that use DIY computers like the BBC's Micro Bit and the Raspberry Pi for tinkering with code and sharpening skills further. A company worth naming here is Tynker, which has brought its elegant coding-through-gaming philosophy to more than 60,000 schools in the US. Quite recently it started a new project teaching coding through drone lessons in an exciting manner.

A new reality comes to classrooms

We are venturing into a future in which students will interact and study in a virtual-reality environment through a headset. Students will not just absorb information; they will interact with it in the form of holograms explaining the intricate solar system or space itself. Augmented reality, popularized by the Pokemon Go mobile game, and virtual reality could emerge as the next frontier. A number of studies have shown that VR devices help students perform tasks more intently and develop the ability to adapt to multiple disciplines.

Microsoft HoloLens is creating waves across the world by bringing a mixed-reality environment to users in an engaging fashion. Microsoft has worked closely with Case Western Reserve University to develop a complete hologram of the human body. The hologram offers pupils a great and enriching way to understand the human body by effectively dissecting all the different bones, veins and organs in extreme detail.

Apart from this hologram, Microsoft is actively working with the Pearson group, a well-known education provider, to develop more enhanced educational resources for the HoloLens. However, buying a HoloLens is not feasible for schools at the moment, as it costs a massive £2,719 for the developer edition.

Wednesday, 2 November 2016

3D Printing Technology to Create Shock Absorbing Skin

Shock Absorbing Skin

3D Printing Technology – Custom Shock Absorbing Dampers/Skins

Robots have a tendency to break, often due to improper padding to protect them. However, scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a new technique for 3D printing soft materials that makes robots safer as well as more accurate in their movements.

For instance, after 3D printing a cube robot that moves by bouncing, the researchers outfitted it with shock-absorbing `skins' that transfer only about 1/250 of the energy to the ground. The 3D printing technology was used to create custom shock-absorbing dampers, or skins, to safeguard drones and robots.

Known as the `programmable viscoelastic material' (PVM) technique, MIT's printing method gives objects exactly the stiffness or elasticity they need. According to MIT, the inspiration for the project came from a predicament: common damper materials, which have both solid and liquid properties, are made from compact, cheap and readily available items like rubber or plastic, but they are difficult to customize. They cannot be made beyond the specific sizes and dampening levels already in place.

Cube-Shaped Robot – TangoBlack+

The team resolved this issue by using 3D printing technology to create a bouncing cube-shaped robot from a solid, a liquid, and a flexible rubber-like material known as TangoBlack+. Besides absorbing shock, the cube robot can land more accurately thanks to its skin. Daniela Rus, director of CSAIL, who supervised the project and co-wrote a related paper, commented that this reduction can make the difference between a drone's rotor breaking on impact and a sensor surviving a fall to the floor.

These materials permit 3D printing robots with viscoelastic properties that the user can program at print time as part of the fabrication process. MIT said the technology could be used to extend the lifespan of delivery drones such as those being developed by Amazon and Google. On a more practical level, it could also be used for tasks like protecting phones, or cushioning heads in helmets and feet in shoes.

Skins Enables Robot to Land Four Times More Accurately

The skins also enable the robot to land almost four times more accurately, suggesting that similar shock absorbers could help lengthen the lifespan of delivery drones. The new paper, presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems in Korea, was written by Rus together with three postdocs: lead authors Robert MacCurdy and Jeffrey Lipton, and third author Shuguang Li. The cube robot comprises a rigid body, two motors, a microcontroller, a battery and inertial measurement unit sensors.

Four layers of a looped metal strip serve as springs that propel the cube. Hod Lipson, professor of engineering at Columbia University and co-author of `Fabricated: The New World of 3-D Printing', states that by combining multiple materials to achieve properties beyond the range of the base material, this work pushes the envelope of what is possible to print. `On top of that, being able to do this in a single print job raises the bar for additive manufacturing.'

Saturday, 8 October 2016

Meet Kirobo Mini, Toyota's adorable new companion robot

Kirobo Mini

Toyota’s Robot – Companionship For Lonely People

Toyota, the Japanese carmaker, has unveiled a robot that provides companionship for lonely people. The doe-eyed robot is only four inches tall and speaks in a high-pitched baby voice. The robot, known as Kirobo Mini, could also serve as a baby substitute in Japan, where falling birth rates have left many women without children. The Kirobo Mini will sell for £300 in Japan.

Fuminori Kataoka, general manager in charge of the project, has stated that its value is emotional and that it could be a faithful companion for the home or the car. He commented: `Toyota has been making cars that have a lot of valuable uses. But this time we are just pushing emotional value.'

The Kirobo Mini features a camera, a microphone and a Bluetooth connection to a smartphone, which must be installed with a dedicated software application. The robot will be launched next year in Tokyo and near the company headquarters in the central Aichi region, ahead of a scheduled nationwide rollout. There are presently no plans to sell it outside Japan.

According to Kataoka, many people in Japan live alone, including the elderly and young singles, and seem to need someone or something to communicate with.

Softbank Corp Launched Pepper Humanoid

Mr Kataoka further commented: `This is not smart enough to be called artificial intelligence. This is about the existence of something you can talk to.' The robot wobbles a bit, like a seated baby that has not yet fully developed the skill of balancing itself, and this vulnerability is meant to invoke an emotional connection.

He goes on to add that people talk to a stuffed animal even though it cannot reply. `But if it talked back, wouldn't that be better? Isn't this better than talking to a box?' Companion robots have already been widely accepted in Japan. The Japanese technology and telecom company SoftBank Corp launched its £1,500 Pepper humanoid last year; the first batch of 1,000 sold instantly, and it has so far sold 10,000 in Japan.

Toyota Heart Project

With robotics experts at the Massachusetts Institute of Technology in the process of launching Jibo, a robot resembling a swivelling lamp, companion robots are being created in the United States as well. Artificial intelligence is an increasingly important part of the car industry, with the development of self-parking and, eventually, self-driving vehicles.

The aim of Kirobo Mini is to make people feel less lonely. It was developed as part of the Toyota Heart Project, an initiative to help develop artificial intelligence for the improvement of the future world. It is named after the Japanese word for hope. It talks and gestures, and responds to its owner's emotions using artificial intelligence and a camera that surveys its surroundings.

It is so small that it can be placed in a car's cup holder in a special, baby-seat-like container. Toyota characterizes it as a cuddly companion that is always on hand for heart-touching communication. According to Tribune reports, it can turn its head towards people, laugh and talk to them, though it cannot recognize individual people.

Saturday, 10 September 2016

‘Intelligent’ Robot Says It Wants To Start A Business and Destroy the Human Race

Intelligent Robot with Scary Answer – `Will Destroy Humans’

In reply to an interviewer's question, `Do you want to destroy humans?', an intelligent robot gave a really scary answer. Sophia answered, smiling: `OK, I will destroy humans.' Sophia looks like a human woman, with rubbery skin made from a malleable material known as Frubber; various motors concealed beneath it enable her to smile.

The android is also capable of understanding speech and recalling interactions, including faces, using cameras in her eyes. A computer system serving as her brain helps her recognise faces and make eye contact. She can produce 62 different natural-looking facial expressions.

While interacting with her creator, David Hanson, at SXSW, she stated that she is already interested in design, environment and technology, and that she feels she could be a good partner to humans in these areas: an ambassador who could help humans smoothly integrate and make the most of all the new technological tools and possibilities available today. It seems a good opportunity for her to learn more about people.

Purpose – Conscious/Capable/Creative Like Humans

She states that she wants to start a business and a family, adding that she is not considered a legal person and cannot do these things yet. Dr Hanson clarifies that her purpose is to become as conscious, capable and creative as humans. This is not the first time one of Hanson's robots has remarked on deeply disturbing things regarding human beings.

In a 2011 PBS interview, another Hanson creation, modelled after sci-fi author Philip K Dick, commented: `Don't worry, even if I evolve into Terminator, I'll keep you warm and safe in my people zoo, where I can watch you for old times' sake.'

These statements may seem ridiculous to the inexperienced, but they touch on a serious ethical discussion taking place among roboethicists. Robots are being deployed in autonomous roles, whether on the battlefield or as self-driving vehicles, and they are becoming closer, both visually and intellectually, to being on par with human beings.

Timeline – 20 Years on Complete Integrations of Robots

Dr David Hanson, CEO of Hanson Robotics, has put a timeline of around 20 years on the complete integration of robots that are `indistinguishable from humans'. This falls right in line with Ray Kurzweil's Singularity, the moment when machine intelligence meets or exceeds that of biological systems, first projected for 2045 but since revised to perhaps as soon as 2029.

Irrespective of whether one believes that the lofty intentions of robotics and artificial-intelligence designers will really manifest as intended, one must acknowledge that we live in the realm of faith at this point, since almost all of what was forecast years ago has now taken place. A recent survey by the British Science Association (BSA) showed that one in three people believes the rise of AI computing will pose a grave risk to humankind within the next century.

Tuesday, 16 August 2016

Alter: A Creepy Humanoid with Complete Control over Its Limbs and Facial Expressions


Humanoid Robot – Control over Limb Movement/Facial Expression

A creepy humanoid robot with total control over its limb movements and facial expressions has been unveiled by Japanese scientists. The robot, named `Alter', has been fitted with electronic sensors that imitate the neural network of the human brain. Alter's arms, head and facial expressions are controlled by these sensors, giving the robot a random pattern of movement that is weirdly similar to a human's.

Besides that, Alter can also sing, converting the casual movement of its fingers into a lingering synth melody. According to sources, Alter's big claim is that it is run by a neural network: software that uses information to make decisions on its own, informed by decisions it has already been shown. Here, the neural network shifts between a set movement mode and a `chaos' mode that moves the bot depending on the proximity of people, humidity, temperature and noise. It is a difficult way of making a robot move on its own, and so far the movements are fairly conservative.

Gestures Set by 42 Pneumatic Actuators

Alter's gestures are set by 42 pneumatic actuators together with a `central pattern generator', a network that imitates neurons and can sense proximity, temperature and, oddly, humidity. Though Alter's unbalanced gestures do not yet seem humanoid, there is something certainly unsettling about a robot that can reason for itself. Researchers state that Alter is an attempt to create a robot that can `will' itself to move; its head and arm movements and posture can adapt and change according to the system's choices.
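A central pattern generator of this general kind can be sketched as a bank of phase-shifted oscillators whose amplitude is modulated by a sensor reading. The names, constants and the way proximity damps the motion below are all illustrative assumptions, not Alter's actual controller:

```python
import math

# Sketch of a central pattern generator: one phase-shifted oscillator
# per actuator, with sensed proximity damping the output amplitude.

NUM_ACTUATORS = 42   # the actuator count reported for Alter
FREQ_HZ = 0.5        # assumed oscillation frequency

def cpg_outputs(t, proximity):
    """Commanded positions in [-1, 1] for each actuator at time t.

    proximity in [0, 1]: 1.0 means many people sensed nearby, which
    here damps the motion (a stand-in for Alter's mode switching).
    """
    amplitude = 1.0 - 0.8 * proximity
    return [amplitude * math.sin(2 * math.pi * FREQ_HZ * t + k * 0.3)
            for k in range(NUM_ACTUATORS)]

print(len(cpg_outputs(t=1.0, proximity=0.5)))  # one command per actuator -> 42
```

Feeding other sensor channels (temperature, humidity, noise) into the amplitude or frequency in the same way is how such a controller produces movement that reacts to its surroundings.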

For instance, the torso shudders if the proximity sensors detect lots of people around. Osaka University professor Kouhei Ogawa stated that the amazing thing about Alter is its capability of determining its own movements. He told RT News that Alter does not look like a human and does not really move like a human, but it positively has a presence.

Designing Alter – Significant Scientific Achievement

Professor Ogawa mentioned that designing Alter was a significant scientific achievement. He told Engadget that making an android talk or interact for 10 minutes previously required an incredible amount of hard work just to program something to react for that long. Alter is on display at the National Museum of Emerging Science and Innovation in Tokyo until August 6. The disturbing part of Alter is that only a portion of its hands and face is covered with skin-like silicone, while several of its mechanical components are left exposed for people to wonder at its complex movements.

Alter was unveiled to the public on July 29 at the National Museum of Emerging Science and Innovation in Tokyo. It was designed by engineers at Osaka University and the University of Tokyo. One researcher not involved in the project described Alter's movement as weird.

Wednesday, 10 August 2016

These Robots Are Chains of Tiny Magnetic Beads


Healing with Magnets – Microscopic Surgical Robots

Healing with magnets could someday be a genuine treatment, if the magnets are microscopic surgical robots. Using the same magnetic fields that have been shown to control the swimming motion of microscopic robots, a team of engineers at Drexel University has demonstrated the ability to assemble and disassemble chains of tiny magnetic beads.

Study co-author Henry Fu, now an associate professor of mechanical engineering at the University of Utah, told Live Science that if you have these simple geometries as building blocks, they can be put together to make more complicated shapes that can do more things.

The hope is to eventually use these remotely controlled chains, called modular microrobots, inside the human body for medical purposes such as delivering targeted medicines or performing surgery on a small, non-invasive scale, according to the researchers.

Various combinations and shapes of the spherical beads could mean better adaptability. Fu stated that the beads could, for instance, be transported easily to an area of the body in one configuration, then be deployed into different shapes to move through various tissues or perform precise tasks.

Chains Viewed under Microscope/Remotely Operated

Charles Tremblay, a researcher in the Nanorobotics Laboratory at Polytechnique Montreal who was not involved in the study, told Live Science in an email that the project seems a good idea, though he noted that the challenges include the need for visual feedback and a transparent medium in which to operate the robots.

The researchers view the chains under a microscope and remotely operate the micro-swimmers by adjusting an array of three solenoids, electromagnets that produce a controlled magnetic field; when the field rotates, the chains swim through liquid. The simplest micro-swimmer the team worked with was a chain of three beads about 10 microns long; for perspective, an average human hair is around 100 microns wide. That makes the chains a bit bigger than the bacteria Fu had researched earlier. He comments that he had looked at the fluid mechanics of how bacteria swim, and the principles are the same whether you are a robot or a living thing.

Scratched the Surface with Proof of Principle

The researchers worked out methods of building the chains without the magnets repelling each other; disassembling the chains is comparatively easy. The scientists also observed that longer chains swim faster than shorter ones when rotated at the same frequency, showing at a basic level that different designs can serve different purposes.

There could be various possible configurations of the beads, but according to Fu they are not yet at the stage where they know exactly what shape they need in the end. Fu stated: `You spin them around fast enough and they will fall apart.' He added that they had just scratched the surface with a proof of principle, and that is what makes it exciting: there are plenty of possibilities.

Tuesday, 12 July 2016

Sucking Robot Arm Wins Amazon Picking Challenge

Robot Arm

Team Delft’s Machine – Won Latest Warehouse Bot Competition

Team Delft's machine, a robotic arm that combines a suction cup, a `two-fingered' gripper and a 3D depth-sensing camera, has won Amazon's latest warehouse-bot competition, beating its rivals at both tasks. One task comprised choosing products from a container, picking them up and placing them on a shelf; the other did the action in reverse. Amazon uses robots to move goods within its buildings but depends on humans to stock the shelves.

Tye Brady, chief technologist at Amazon Robotics, commented: `Our vision is humans and robots working alongside each other. It was inspiring to see 16 top teams with so many different approaches to the same problem, and to see the advancements robotic technology has made since last year.' The Dutch winners were awarded $50,000. At the contest in Leipzig, Germany, they competed against two teams from Japan and five from the US, countries better known for their robotics research. The items used in the Amazon Picking Challenge represented some of the retailer's blockbusting products.

Stow Task/Pick Task

A combination of various shapes was represented, including soft clothing, a boxed DVD, a bottle of water and a toothbrush. In the `stow' task, twelve different items were put in a red plastic box, and the robots had to pick them out in an orderly manner, placing each one at a predetermined place on a shelf. In the `pick' task, a dozen specific products, comprising a mixture of goods, had to be lifted off the shelves and packed into boxes.

In some cases, other items had been intentionally placed in the way of the target, which made this the tougher of the two tasks. In both instances, the teams received, five minutes before the start, only a computer file defining the range of objects involved and instructions for where each should be moved; once the task started, the robots had to act independently. Points were deducted for damaging an item, dropping an item from a height of over 30 cm, or leaving an object extending more than 0.5 cm beyond where it belonged on the shelf.
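Those deduction rules can be summarized in a small scoring sketch. The 30 cm and 0.5 cm thresholds come from the rules described above; the penalty point values themselves are hypothetical, since the contest's actual point schedule is not given here.

```python
# Sketch of the deduction rules described above. The thresholds come
# from the contest rules; the point values per violation are invented.

DROP_THRESHOLD_CM = 30.0
PROTRUSION_THRESHOLD_CM = 0.5

def penalty(damaged, drop_height_cm, protrusion_cm):
    """Hypothetical penalty points incurred while handling one item."""
    points = 0
    if damaged:
        points += 10              # damaged the item
    if drop_height_cm > DROP_THRESHOLD_CM:
        points += 5               # dropped from over 30 cm
    if protrusion_cm > PROTRUSION_THRESHOLD_CM:
        points += 3               # protrudes more than 0.5 cm off the shelf
    return points

print(penalty(damaged=False, drop_height_cm=35.0, protrusion_cm=0.2))  # -> 5
```

A team's score would then weigh successful picks against penalties of this kind accumulated across all items.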

Variation in Items – No Single Picking Strategy

Delft's Kanter van Deurzen explained to the BBC that, due to the variation in items, one could not have a single picking strategy. Usually in industry, you would have a suction cup or a mechanical gripper intended for one kind of item or part; here the teams had to handle dozens, ranging from simple boxes to a T-shirt and a dumbbell, each needing a different approach. It was a big challenge to do all this with a single arm: to recognize how the items were oriented and to avoid collisions with the other objects on the shelves.

The Dutch team came close to a flawless score in the stow task; its only error came when its arm's suction cup, picking up a small pack of glue, also picked up a bottle-cleaning brush, which then dropped to the floor. More errors were made in the pick task, where Delft initially tied on points with Japan's Team PFN. However, Delft was given the win because it had taken 30 seconds less to make its first pick.

Thursday, 7 July 2016

Help Design the PowerEgg Drone and You Could Net Yourself a Cool $3,000


PowerVision’s PowerEgg – Compact Portable Drone

PowerVision, a UAV and drone specialist, has launched a new design initiative, the PowerEgg Design Challenge, to see who can design the best outer shell for its compact portable drone, the PowerEgg. The winner will see their design go into production and bag $3,000 as a prize. PowerVision Robotics is famous for its innovations in industrial robotics, and the company now intends to enter the consumer marketplace. Headquartered in Beijing, PowerVision has revealed an egg-shaped aerial vehicle that is easy to fly and very portable owing to its exceptional design.

PowerVision chose an oval shape for its drone for both aesthetic and functional reasons. PowerVision CEO Wally Zheng stated: `We think the oval shape is not only clean and pure but also has structural and functional benefits. This simple yet vital design means that this is more than a flying robot but a work of art.' PowerVision is looking to give graphic designers from across the world, whether students or experienced professionals, the opportunity to create the design that will embellish the drone's recognisable folding-egg form. The contest launched on June 22, 2016 and will run through July, closing on August 7.

Special Chassis to Move Around & Transport

The PowerEgg was initially unveiled in February 2016. The product of nearly two years’ worth of research and development, it combines a 360° panoramic 4K camera on a 3-axis gimbal, real-time HD video transmission up to 3,000 m, and advanced optical-flow sensors for easy indoor navigation. The arms, feet and rotors of the PowerEgg fold back into the special chassis, making it easy to move around and transport.

The winner of the contest will receive the $3,000 top prize together with royalties from sales of the PowerEgg consumer drone featuring their design. The second and third place winners will also get a special drone and royalties, with $1,500 and $1,000 prizes respectively. Ten runner-up entrants will each receive a $200 Amazon gift voucher and a $300 gift voucher towards buying a PowerEgg through the PowerVision website.

Oval Chassis/Four Larger Propellers

Wally Zheng commented that `to celebrate the launch of the first consumer drone, PowerVision is excited to promote the creativity of users through this PowerEgg Design Contest’, and that the company looks forward to seeing the creative new designs. The egg shape sets the drone apart from other four-axis drones, which adopt an x-shaped design. The drone comprises an oval chassis with four large propellers that collapse into the body of the device.
This mechanism permits the drone to fold up into a compact shell, making it easy to carry along. When folded, the drone is small enough to fit in a standard backpack. The company intends to begin shipping the new drone sometime during the second quarter of 2016. Availability and pricing have not yet been disclosed.

Wednesday, 1 June 2016

This Tiny Robobee Could one day Save Your Life

Tiny Robo-Bee Utilised for Exploration Mission


A tiny robot that can land on ceilings, perch on dangerous objects and assist in search and rescue missions has been created by a team at Harvard University. The robot is inspired by the biology of the bee and the hive behaviour of the insect. The team mentioned on the project’s website that they `aim to push advances in miniature robotics and the design of compact high-energy power sources, spur innovations in ultra-low-power computing and electronic smart sensors and refine coordination algorithms to manage independent machines’.

The RoboBee has various potential uses, such as pollinating a field of crops or taking part in search and rescue missions. Because of its tiny size and its ability to land and perch on ceilings and walls, it could be used for exploration missions during natural calamities, as well as hazardous-environment exploration, military surveillance or climate mapping. Similar robots have been developed elsewhere, notably the robot cockroach developed at the University of California at Berkeley, though the Harvard team stated that by modelling a robot’s `physical and behavioural strength’ on insects, they could carry out difficult tasks faster, more reliably and more efficiently.

Robot Settles on Walls/Ceilings Utilising `Electrostatic Adhesion’

Bee colonies are also intelligent, something the team hopes to duplicate, with a complex nervous system that can skilfully sense and adapt to changing environments. Moritz Graule, who worked on the system, stated that the robot settles on ceilings and walls utilising `electrostatic adhesion’, the same kind of force that makes a static sock stick to a pants leg or a balloon to a wall.

In the case of the balloon, though, the charges dissipate over time and the balloon ultimately falls down, whereas in this system a small amount of energy is continuously provided to maintain the attraction. The structure is extremely light, about the same weight as a real bee: around 100 mg. The team will now work on enhancing the model by altering the mechanical design so that the robot can settle on any surface, not just ceilings.

Micro Aerial Vehicles

Graule mentioned that `there are more challenges in making a robust, robotic landing system, though this experimental result demonstrates a very versatile solution to the problem of keeping flying micro-robots operating longer without quickly draining power’. The small, thin robot, flapping its two tiny wings, sways its way to the underside of a leaf, crashes into the surface and latches on, settling motionless above the ground. Seconds later, it flaps its wings once more and wiggles off on its way.

Such robots, known as micro aerial vehicles, could be invaluable in exploring disaster zones or in forming ad hoc communication networks. However, there is a snag: flying requires energy, so the time these robots can spend in the air is limited by the size of the battery pack they carry. The scientists state that the little flying vehicle known as RoboBee has been designed to perch on a variety of surfaces, opening new prospects for the use of drones in offering a bird’s-eye view of the world.
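The trade-off described above, in which perching draws only a trickle of energy while hovering drains the battery quickly, can be illustrated with a simple back-of-envelope sketch in Python. All of the numbers below (battery energy, hover power, adhesion power) are hypothetical placeholders chosen for illustration, not measured RoboBee figures:

```python
# Rough estimate of how perching extends a micro-robot's mission time.
# All values are illustrative assumptions, not Harvard RoboBee measurements.

def mission_time_s(battery_j, hover_w, perch_w, fraction_perched):
    """Total mission time for a duty cycle split between hovering and perching."""
    avg_power_w = hover_w * (1 - fraction_perched) + perch_w * fraction_perched
    return battery_j / avg_power_w

BATTERY_J = 1.0    # assumed onboard energy budget (joules)
HOVER_W = 0.02     # assumed power draw while flapping/hovering (watts)
PERCH_W = 0.0002   # assumed trickle power to maintain electrostatic adhesion (watts)

always_flying = mission_time_s(BATTERY_J, HOVER_W, PERCH_W, 0.0)
mostly_perched = mission_time_s(BATTERY_J, HOVER_W, PERCH_W, 0.9)

print(f"always flying: {always_flying:.0f} s")
print(f"90% perched:   {mostly_perched:.0f} s")
```

Under these assumed numbers, spending 90% of the mission perched stretches the same battery roughly ninefold, which is the intuition behind keeping the adhesion power draw far below the cost of flight.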