Showing posts with label Robot. Show all posts

Thursday 5 July 2018

HAMR: Swimming robo roach makes a splash

HAMR

HAMR, the robotic amphibious roach

Insect-inspired robots such as the RoboBee and the VelociRoACH have been created in the past; the latest is the HAMR, or Harvard Ambulatory MicroRobot, an amphibious robotic roach developed at Harvard University.

The original HAMR, built in 2013, was tether-controlled. It measures 4.4 cm long and moves at 8.4 body lengths per second. It was fabricated by sandwiching together 23 microscopic layers of material and laser-cutting the patterns.

The latest version of the HAMR weighs 1.65 grams and features a foot pad on each of its four legs. The robotic roach is amphibious: it can walk on land, swim on the surface of the water, and walk underwater for as long as needed, enabling it to explore new environments.


Working of the HAMR


When the HAMR enters the water, its foot pads provide surface-tension-induced buoyancy that prevents it from sinking. Flaps on the underside of the pads help it move across the water: the HAMR moves its legs in a swimming motion at frequencies of up to 10 Hz. Moving along the surface lets the robot evade submerged obstacles and reduces drag. Four pairs of asymmetric flaps and a customized swimming gait allow it to swim; an unsteady interaction between the passive flaps and the surrounding water enables the robot to move forward and turn.

From the underside of the pads, the HAMR can apply a voltage to the water. This is electrowetting: the applied voltage reduces the contact angle between an object and the water surface. The surface-tension support then breaks down and the robot sinks.
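As a rough illustration of this sinking mechanism, the Young-Lippmann relation describes how an applied voltage lowers the apparent contact angle. The sketch below is a generic electrowetting calculation; the dielectric permittivity, thickness and starting angle are assumed illustrative values, not the HAMR's actual design parameters:

```python
import math

# Young-Lippmann equation: electrowetting reduces the apparent contact
# angle when a voltage is applied across a thin dielectric layer.
# All parameter values below are illustrative assumptions, not HAMR specs.

EPS0 = 8.854e-12      # vacuum permittivity, F/m
GAMMA = 0.0728        # surface tension of water, N/m
EPS_R = 3.0           # assumed relative permittivity of the dielectric
D = 1e-6              # assumed dielectric thickness, m

def contact_angle(theta0_deg, voltage):
    """Apparent contact angle (degrees) at a given applied voltage."""
    cos_theta = math.cos(math.radians(theta0_deg)) + \
        EPS_R * EPS0 * voltage**2 / (2 * GAMMA * D)
    return math.degrees(math.acos(min(cos_theta, 1.0)))

# Higher voltage -> smaller contact angle -> better wetting, so the
# surface-tension support collapses and the robot can sink.
for v in (0, 20, 40):
    print(v, round(contact_angle(110.0, v), 1))
```

The key qualitative point matches the article: raising the voltage drives the contact angle down, which is what destroys the surface-tension support.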

The robot uses the same mechanism to walk along the bottom as it does on land. A watertight coating of a polymer known as Parylene prevents the HAMR's electronics from shorting.

To get back onto land, the HAMR has to break through the water surface. A surface-tension force of almost twice the robot's weight pushes it down, and an induced torque increases the friction on its hind legs. To overcome this, the scientists stiffened the HAMR's transmission and added soft pads to the front legs, which increases the payload capacity and redistributes friction so the robot can climb. The robot then walks up an incline and breaks out of the water.
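The forces described above can be sanity-checked with a quick calculation. Only the 1.65 g mass and the "twice the weight" figure come from the article; the implied contact-line length is a back-of-the-envelope estimate:

```python
# Rough estimate of the forces at play when HAMR breaks through the
# water surface. The robot's mass and the "twice the weight" downward
# pull are from the article; everything derived from them is a sketch.
G = 9.81           # gravitational acceleration, m/s^2
GAMMA = 0.0728     # surface tension of water, N/m

mass = 1.65e-3                 # kg (1.65 g, from the article)
weight = mass * G              # N

# Article: surface tension pulls down with roughly twice the weight.
surface_tension_force = 2 * weight

# Total contact-line length needed for surface tension to exert that force:
contact_line = surface_tension_force / GAMMA   # metres

print(f"weight        = {weight * 1000:.2f} mN")
print(f"downward pull = {surface_tension_force * 1000:.2f} mN")
print(f"contact line  = {contact_line * 100:.1f} cm")
```

Even for a 1.65 g robot the downward pull is tens of millinewtons, which is why the stiffened transmission and the incline are needed to break free.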

The researchers suggest that gecko-inspired adhesives or jumping mechanisms could eventually let the HAMR climb out of the water without the use of a ramp or slope.
 

Wednesday 1 November 2017

Can We Teach Robots Ethics?

driverless car

Artificial Intelligence – Outperforming Humans

 
Artificial intelligence is outperforming humans in various fields, and machines from driverless cars to 'carebots' are entering the domain of right and wrong. Would an autonomous vehicle choose the lives of its passengers over pedestrians? The challenges of artificial intelligence are not only technical but moral, and they raise questions about what it means to be human. As we move toward a dense Internet of Things in which many devices support us in our everyday undertakings, there will be instances where these very devices begin to choose for us. We are not used to the notion of machines making ethical decisions, but the day they do so by themselves is not far off. David Edmonds of the BBC asks how they could be taught to do the right thing. He uses the example of driverless cars, projected to reach the highways in the coming years, which pose a version of the trolley problem: two children run across the street and there is no time to brake to a halt. The car faces the option of swerving left, which could cause it to collide with oncoming traffic.
 

Robots Could Cause Destruction

 
What option would the car make? Considering what the car would opt for gives us cause to speculate about what kind of ethics should be programmed into it. How should the value of the driver's life be weighed against that of passengers in other cars? And would a buyer choose a car that is prepared to sacrifice its driver to spare the lives of pedestrians? Every choice has consequences, which raises the question of where liability lies when something goes wrong: with the technology, the manufacturer, the firm, or the person in the car. Another example is autonomous weapons, which bring their own pros and cons; the obvious objection is that such robots should not be permitted, since they could cause destruction. In the present scenario, these seem to be urgent questions.
 

Robots – Care for Elderly/Disabled

 
Self-driving cars have already covered millions of miles on the road, making autonomous decisions that can affect the safety of other road users. Roboticists in Japan, Europe and the United States have developed service robots to provide care for the elderly and disabled. A robot caretaker launched in 2015, dubbed 'Robear', is strong enough to lift frail patients from their beds; a machine that can do that could also crush them. Since 2000 the US Army has deployed thousands of robots with machine guns, each able to locate targets and aim at them without human involvement. Autonomous weapons, like driverless cars, are not science fiction: they are weapons that operate without human support.

Friday 14 July 2017

Hybrid Driving-Flying Robots Could Go Beyond the Flying Car

Flying Robots – Significant Application in the Future


According to a recent study, groups of flying robots, whether swooping in to deliver packages or locating victims in disaster zones, have a range of significant applications in the future. The robots can switch from driving to flying without colliding with each other, the study found, and could provide assistance beyond the traditional sci-fi notion of the flying car.

The ability both to fly and to walk is common in nature; many birds, insects and other animals do both. Robots with similar flexibility could fly over obstacles on the ground or drive under overhead ones.

At present, however, robots that are good at one mode of transportation are generally bad at others, study lead author Brandon Araki, a roboticist at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory, and his colleagues noted in their new study.

Robot - `Flying Monkey’


The researchers had previously developed a robot known as the 'flying monkey' that could run, fly and grasp items. However, they had to program the paths the flying monkey would take, meaning it could not find safe routes on its own.

The researchers have now created flying cars that both fly and drive through a simulated city-like setting with parking spots, no-fly zones and landing pads. These drones move independently without bumping into each other, the researchers said. "Our vehicles can find their own safe paths," Araki told Live Science.

The scientists took eight four-rotor 'quadcopter' drones and placed two small motors with wheels on the bottom of each, making them capable of driving. In trials, the robots could fly about 90 meters (295 feet) or drive about 252 meters (826 feet) before draining their batteries. The roboticists also developed systems to ensure the robots did not collide with one another.

Driving – Efficient than Flying


In tests in a miniature town built from everyday materials, pieces of fabric for roads and cardboard boxes for buildings, all the drones successfully navigated from a starting point to an ending point along collision-free paths.

According to the researchers, adding the driving apparatus to each drone added weight and slightly reduced battery life, cutting the maximum distance the drone could fly by about 14 percent. They also found that driving was far more efficient than flying, more than compensating for the small loss in flight performance caused by the extra weight.
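The efficiency claim can be made concrete from the ranges the article reports. Only the 90 m / 252 m figures and the 14 percent penalty come from the study; the battery capacity below is an assumed placeholder, since only the ratio matters:

```python
# Back-of-the-envelope energy comparison using the ranges reported in
# the article (90 m flying vs 252 m driving on one battery charge).
# BATTERY_WH is an assumed placeholder; only ratios are grounded in
# the reported numbers.

FLY_RANGE_M = 90.0      # from the article
DRIVE_RANGE_M = 252.0   # from the article
BATTERY_WH = 5.0        # assumed battery energy, watt-hours

energy_per_m_fly = BATTERY_WH / FLY_RANGE_M
energy_per_m_drive = BATTERY_WH / DRIVE_RANGE_M

# Driving covers ~2.8x the distance per unit of energy.
efficiency_ratio = energy_per_m_fly / energy_per_m_drive
print(f"driving is {efficiency_ratio:.1f}x more distance-efficient")

# The 14% range penalty implies the wheel-free flight range was about:
fly_range_without_wheels = FLY_RANGE_M / (1 - 0.14)
print(f"estimated wheel-free flight range: {fly_range_without_wheels:.0f} m")
```

The 2.8x distance-per-energy advantage of driving dwarfs the 14 percent flight-range penalty, which is the trade-off the researchers highlight.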

"The most important implication of our research is that vehicles that combine flying and driving have the potential to be both much more efficient and much more useful than vehicles that can only drive or only fly," Araki commented. The scientists cautioned, however, that fleets of automated flying taxis are probably not coming anytime soon.

He noted that their present drones are certainly not robust enough to carry people. Still, the quadcopter experiments help explore several ideas linked to flying cars.

Tuesday 4 July 2017

Harvard Scientists Use Simple Materials to Create Semi Soft Robots

Biologically Inspired Soft Robots


At the start of the decade, George Whitesides helped rewrite the rules of what a machine could be with the development of biologically inspired soft robots; now he is ready to rewrite them again, with the help of some plastic drinking straws. Inspired by arthropods such as insects and spiders, Whitesides and Alex Nemiroski, a former postdoctoral fellow in Whitesides's Harvard lab, have developed a type of semi-soft robot capable of standing and walking.

The team has also developed a robotic water strider able to push itself along the surface of a liquid. The robots are described in a recent paper in the journal Soft Robotics. Unlike earlier generations of soft robots, which could stand and walk only awkwardly by inflating air chambers in their bodies, the new robots are designed to be much quicker.

The researchers hope the robots will eventually be used in search operations after natural disasters or in conflict zones, though practical applications are still far off. "If you look around the world, there are plenty of things, like spiders and insects, that are very agile," said Whitesides, the Woodford L. and Ann A. Flowers University Professor at Harvard.

Flexible Organisms on Planet


"They can move rapidly, climb on various items, and do things that large hard robots cannot do because of their weight and form factor. They are among the most versatile organisms on the planet. The question was how we could build something like that."

The answer, Nemiroski said, came in the form of your average drinking straw. "It all began from an observation George made, that polypropylene tubes have an excellent strength-to-weight ratio," he said. That insight led to something with more structural support than purely soft robots have.

"That was our building block, and we took inspiration from arthropods to figure out how to make a joint and how to use the tubes as an exoskeleton. From there it was a question of how far your imagination can go: once you have a Lego brick, what kind of castle can you build with it? What we built was a surprisingly simple joint."

Whitesides and Nemiroski began by cutting a notch in the straws, enabling them to bend. The scientists then inserted short lengths of tubing which, when inflated, forced the joints to extend. A rubber tendon attached on either side caused the joint to retract when the tubing deflated.

Microcontroller Run By Arduino


Equipped with this simple concept, the team built a one-legged robot capable of crawling, then moved up in complexity by adding a second and later a third leg, enabling the robot to stand on its own. With every new level of system complexity, Nemiroski said, they had to go back to the original joint and modify it to exert more force or to support the weight of larger robots.

Eventually, when they graduated to six- and eight-legged arthrobots, getting them to walk became a programming challenge. The team looked at how ants and spiders sequence the motion of their limbs, then tried to work out whether aspects of those motions were applicable to their robots, or whether they needed to develop their own kind of walking tailored to these particular joints.

Though Nemiroski and his colleagues managed to direct simple robots by hand using syringes, they turned to computers to control the sequencing of the limbs as the designs grew in complexity. "We put together an Arduino-run microcontroller that uses valves together with a central compressor, which gave us the freedom to evolve the gait rapidly," he said.

Binary Joint Motion – Simplicity of the Valving System


Although Nemiroski and his colleagues were able to reproduce the distinctive 'triangle' gait of ants with their six-legged robot, imitating a spider-like gait proved far trickier. A spider, he explained, modulates the speed at which it extends and contracts its joints, carefully timing which limbs are moving forward and backward at any moment.

"In our case, however, the motion of the joint is binary owing to the simplicity of our valving system," Nemiroski added. "You either switch the valve to the pressure source to inflate the balloon in the joint and extend the limb, or switch the valve to atmosphere to deflate the joint and retract the limb." For the eight-legged robot, a gait compatible with this binary motion of the joints had to be developed.
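Gait sequencing with on/off joints of this kind can be sketched in a few lines. The leg grouping and alternating-tripod pattern below are illustrative assumptions in the spirit of the ant 'triangle' gait described above, not the team's actual control code:

```python
# A minimal sketch of gait sequencing with binary (on/off) joints.
# Leg numbering and the tripod grouping are illustrative assumptions.

EXTEND, RETRACT = 1, 0

def tripod_gait(n_steps):
    """Yield valve states for six legs (0..5) as two alternating tripods.

    Tripod A = legs 0, 3, 4; tripod B = legs 1, 2, 5. Because each
    joint is binary, a 'step' is simply flipping which tripod's valves
    are connected to the pressure source.
    """
    tripod_a = (0, 3, 4)
    for step in range(n_steps):
        a_state = EXTEND if step % 2 == 0 else RETRACT
        yield [a_state if leg in tripod_a else 1 - a_state
               for leg in range(6)]

for states in tripod_gait(4):
    print(states)
```

Because every valve is either pressurised or vented, the whole gait reduces to toggling one bit per tripod per step, which is exactly the simplicity (and the limitation) Nemiroski describes.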

It was not a brand-new gait, but they could not accurately duplicate how a spider moves with this robot. Developing a scheme that can vary the speed of actuation of the legs would be a useful goal for future work, Nemiroski said, and would require programmable control over the flow rate supplied to each joint.

Academic Prototypes


Whitesides believes the techniques used in the robots' development, especially the use of everyday off-the-shelf materials, can point the way toward future innovation, though it will be years before such robots find real-world applications.

"I don't see any reason to reinvent wheels," he said. "If you look at drinking straws, you can make them at effectively zero cost and with great strength, so why not use them? These are academic prototypes, so they're very lightweight, but it would be quite easy to imagine building them with a lightweight operational polymer that could hold a considerable weight."

"What's really attractive here is the simplicity," Nemiroski added. "That's something George has been championing for some time, and something I grew to appreciate deeply while in his lab."

Wednesday 31 May 2017

This Artist Has a Classroom of Robots That Chat, Count and Draw Portraits

Twenty robot students are busy working hard in a uniquely designed classroom near Southwark station in London. To talk to each other, they use a language inspired by Morse code. While they are talking, their robot teacher asks them to settle down and begins to take the register. Once every robot's presence has been recorded, the day's class begins, and the robots devotedly learn to count using tally marks, drawing lines in their notebooks.

The artist Patrick Tresset included this robot classroom in his latest exhibition, Machine Studies. Each of his robots comprises a camera and a pen held by a robot arm, controlled by a laptop concealed in a traditional school desk that serves as the robot's body. Inspired by Tresset's own schooldays in France, the robot class completes a whole range of activities in Human Study #4.

Robots Displaying Human Traits and Performing Human Functions


The robot students' actions are synchronised, but each robot has unique movements. Tresset programmed the robots to display various behavioural traits, such as uneasiness or timidity. Some robots appear to take an active part in the task allotted to them, whereas others work a little more slowly and nervously than the rest. Tresset says his study is more about observing human nature than technology, and is focused on how we can make robots more human.

In another work, Human Study #1 3RNP, three robots wait with pens, ready to draw portraits of the humans sitting in front of them. Over a span of 30 minutes, their camera 'heads' rise to view the subject and they sketch frantically, pausing every once in a while to look at their composition. Tresset has programmed each robot to roughly imitate his own drawing style, but only roughly, leaving room for the robot's own style. As a result, Tresset says, he cannot foresee what the final portraits will look like.

Robots Being Involved In Artistic Exhibitions


The exhibition is part of the MERGE Festival held in London's Bankside district. Donald Hyslop, head of community partnerships at the Tate Modern and chair of Better Bankside, says the whole point of the festival is not to limit art to museums but to extend it into new contexts within a community. One doesn't need to visit Berlin or Lisbon to experience interesting industrial spaces, he states; they can be found in this part of London, in Bankside, where there are many hidden spaces. Tresset's work is on display at Platform Southwark.

Angie Dixon, project and production manager at Illuminate Productions, which curates the festival, says visitors are always keen to have their portraits drawn by Tresset's robots. She herself had her portrait drawn in 2012 by an earlier version of the robots. At that time they could not differentiate between dark and light skin, so her portrait came out like scratches on paper.

Nevertheless, she says she was not disappointed, and it was an interesting experience. Tresset says robots cannot be counted as a threat to human artists for now. His robots sign their creations, yet he counts himself as the author. He is currently working with machine learning and says he eventually wants his robots to improvise and create their own style.

Tuesday 2 May 2017

Tech United Builds New, Superfast and Really Strong Soccer Robot

 Soccer Robot
The 2016 world champions in robot soccer have built a genuinely robust and powerful eight-wheeled robot platform. "These powerful new platforms are going to rock the field at the upcoming Portuguese Robotics Open," the sources said. The most anticipated news for robotics enthusiasts is that the new robot will make its first appearance with the team at the World RoboCup in Nagoya. A notable detail is that the robot's undercarriage will also be performing autonomous heavy work in hospitals.

The soccer robot team may have been crowned world champions last year, but the specialists identified a couple of weaknesses: their chief adversaries were that bit faster, and managed to shove the Eindhoven robots aside a little too easily. The team consequently joined forces with the Drunen-based company SMF Ketels to develop an entirely new platform with eight wheels instead of the existing three. The well-balanced eight-wheel system not only gives the robot plenty of power and speed, it also makes it very stable and steerable at high speeds.

Football robots are in fact an ideal learning tool for teaching students concepts of computer-assisted visual perception and automation. There are two major innovations in this challenge: the game is to be played on artificial grass rather than carpet, and it can be played next to large window surfaces, so that sunlight falls directly onto the field. The artificial grass is a great challenge for the robots' running movements, while the variable lighting requires new solutions for the cameras' exposure control and for image processing.
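The exposure-control problem mentioned above can be sketched with a simple feedback loop that nudges exposure toward a target mean brightness. The gain, target and scene model below are illustrative assumptions, not Tech United's actual vision pipeline:

```python
# A minimal sketch of exposure control under variable lighting: a
# proportional controller that adjusts exposure time toward a target
# mean image brightness. Gain and target values are assumed.

TARGET_BRIGHTNESS = 128.0   # desired mean pixel value (0-255)
GAIN = 0.005                # proportional gain, assumed

def update_exposure(exposure_ms, mean_brightness):
    """Return a new exposure time given the current frame brightness."""
    error = TARGET_BRIGHTNESS - mean_brightness
    # Longer exposure when the frame is too dark, shorter when too bright.
    return max(0.1, exposure_ms * (1 + GAIN * error))

def simulate(scene_gain, steps=50):
    """Crude camera stand-in: brightness scales linearly with exposure."""
    exposure = 5.0  # starting exposure, ms
    for _ in range(steps):
        brightness = min(255.0, scene_gain * exposure)
        exposure = update_exposure(exposure, brightness)
    return min(255.0, scene_gain * exposure)

b_sunny = simulate(scene_gain=60.0)   # bright, sunlit field
b_shade = simulate(scene_gain=10.0)   # shaded field
print(round(b_sunny), round(b_shade))
```

In both lighting conditions the loop settles at the same mean brightness, which is the behaviour a camera needs when sunlight falls directly onto the field.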

Other enhancements that should help Tech United to a fourth world title this year include the software architecture. This was overhauled last year to let the robots respond better to opponents and to the game environment. Team coach Lotte de Koning said they will be trying it out this year, first during dead-ball situations such as free kicks, and later for the entire game. It will take a couple of years before it has matured.

Over the past year the robots have become smarter in defence and in intercepting passes. Since the computer in the head of each robot is only about as powerful as an average smartphone, it is a challenge to keep the code as compact as possible. Despite their limited computing power, the robots must efficiently regulate the complex behaviour and perception of the wireless, autonomous players. The speed at which a kicker moves across the field is also programmed.

The robust eight-wheel undercarriage is the result of the European Ropod project, in which TU/e takes part alongside SMF Ketels, Hochschule Bonn-Rhein-Sieg and KU Leuven, among others. The goal of Ropod was to develop affordable, friendly robot trolleys that can autonomously and docilely carry out transport chores in hospitals, for example moving hospital beds.

The new platform will be tried out for the first time in this year's first competition, the Portuguese Robotics Open, from April 26 to 30. The team will be observing whether the robot can operate autonomously, and it will also become clear exactly how fast the machine is. The team expects it to be about four times faster than its predecessor.

Friday 28 April 2017

Controlling a Robot is Now as Simple as Point and Click

Robot
Robots are on the rise: they ride, fly, swim or run on two or more legs. They work in factories and are used in war zones and disaster areas. Soon they will have conquered the household: keeping the apartment clean, serving the party guests, or looking after the grandparents. Even toys will lead lives of their own. Every day, the field of robotics grows more advanced.

Recently, a group of researchers at the Georgia Institute of Technology developed a new interface for controlling a robot with just a point and a click. The traditional interface for remotely operating robots works just fine for roboticists: they use a computer to independently control six degrees of freedom, turning three virtual rings and adjusting arrows to get the robot into position to grab items or perform a specific task.

But this traditional interface is cumbersome and error-prone when older people, or people who are not technically skilled, try to operate assistive personal robots.

The new interface is much simpler, more efficient, and doesn't require a significant training period. The operator just points and clicks on an item, then chooses a grasp. The rest of the work is done by the robot itself.

"Instead of a series of rotations, lowering and raising arrows, adjusting the grip and gauging the correct depth of field, we've shortened the process to just a couple of clicks," said Sonia Chernova, the Catherine M. and James E. Allchin Early-Career Assistant Professor in the School of Interactive Computing.

Her students found that the point-and-click method resulted in significantly fewer errors, allowing participants to perform tasks more quickly and reliably than with the traditional method.

The traditional ring-and-arrow system is a split-screen process. The first screen shows the robot and the scene; the second is a 3-D, interactive view where the operator adjusts the virtual gripper and tells the robot exactly what to do. This method makes no use of scene information, giving operators a maximum level of control and flexibility, but that flexibility, and the size of the interface, can become a burden and increase the number of errors.

Controlling the robot via the point-and-click interface doesn't involve 3-D mapping. It provides only the camera view, resulting in a simpler interface for the user. After a person clicks on a region of an item, the robot's perception system analyses the object's 3-D surface geometry to determine where the gripper should be placed. It's similar to what we do when we put our fingers in the correct positions to grab something. The computer then suggests a few grasps; the user approves one, and the robot gets to work.
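The click-to-grasp idea can be illustrated with a toy computation: estimate a surface normal from the depth values around the clicked pixel and back the gripper off along it. The patch values, cell size and standoff distance below are invented for illustration; the real perception pipeline is far more sophisticated:

```python
# Toy sketch: from a clicked pixel's 3x3 depth patch, estimate a
# surface normal and place the gripper a fixed standoff along it.
# All numeric parameters here are assumptions for illustration.

STANDOFF = 0.10  # metres the gripper hovers back along the normal (assumed)

def grasp_pose(depth_patch, cell_size=0.01):
    """Approach point and direction from a 3x3 depth patch (metres)."""
    # Central-difference depth gradients across the patch.
    dz_dx = (depth_patch[1][2] - depth_patch[1][0]) / (2 * cell_size)
    dz_dy = (depth_patch[2][1] - depth_patch[0][1]) / (2 * cell_size)
    # Surface normal of z = f(x, y) is (-dz/dx, -dz/dy, 1), normalised.
    n = (-dz_dx, -dz_dy, 1.0)
    mag = sum(c * c for c in n) ** 0.5
    normal = tuple(c / mag for c in n)
    z_click = depth_patch[1][1]
    # Back off from the clicked surface point along the normal.
    approach = tuple(o - STANDOFF * c
                     for o, c in zip((0.0, 0.0, z_click), normal))
    return normal, approach

flat_patch = [[0.50] * 3 for _ in range(3)]  # flat surface facing the camera
normal, approach = grasp_pose(flat_patch)
print(normal, approach)
```

For a flat patch facing the camera, the normal points straight back at the viewer and the approach point sits 10 cm in front of the clicked surface, which is the intuition behind letting the robot pick the grasp from one click.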

In addition, the system reasons about geometric shapes, making assumptions about small regions the camera cannot see, such as the back of a bottle. By leveraging the robot's ability to do the same thing we do, the researchers make it possible to simply tell the robot which item we would like picked up.


Friday 31 March 2017

Printable Sensor Laden Skin for Robots

Gold Bug robot skin
According to several studies, this decade has seen the greatest technological advancement of the past 50 years. Radical changes in technology have made this the age of the tablet computer and the smartphone. It is the era of touch-sensitive surfaces, and they are fragile, as anyone with a cracked smartphone screen can attest. While touch-sensitive phones and TV devices are practical, covering a bridge, an airplane or a robot with sensors would require technology that is both cost-effective and flexible to manufacture.

Creation of a new device 

Scientists, however, are known for their ingenuity and unending endeavours to create something new. A group of researchers at MIT's CSAIL, the Computer Science and Artificial Intelligence Laboratory, has shown that 3-D printing could make this possible. The researchers set out to demonstrate the viability of printable, flexible electronics that combine processing circuitry and sensors. Remarkably, they were able to create a device that reacts to mechanical changes by altering its surface colour.

The scientists took their inspiration from the 'goldbug', more commonly known as the golden tortoise beetle, which changes colour from golden to red when prodded or poked. In both the beetle and the new device, the reaction is caused by mechanical stresses. MIT graduate student Subramanian Sundaram, who led the project, says the network of interconnects and sensors forms a sensorimotor pathway; the team's attempt was to replicate sensorimotor pathways within a 3-D printed object. So, to make their vision possible, they considered emulating the simplest organism they could find.
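The goldbug-style response can be pictured as a simple mapping from a strain reading to a surface colour. The thresholds and RGB values below are invented for illustration and are not taken from the MIT device:

```python
# Toy model of a stress-to-colour response in the spirit of the golden
# tortoise beetle: the surface shifts from golden toward red as the
# measured strain increases. All values are illustrative assumptions.

GOLD = (255, 215, 0)
RED = (255, 0, 0)

def surface_colour(strain, full_scale=0.05):
    """Blend linearly from gold to red as strain approaches full_scale."""
    t = max(0.0, min(1.0, strain / full_scale))
    return tuple(round(g + t * (r - g)) for g, r in zip(GOLD, RED))

print(surface_colour(0.0))    # unstressed: golden
print(surface_colour(0.05))   # fully stressed: red
```

The real device couples printed sensors to the colour change electrically; this sketch only shows the sensing-to-display mapping in the abstract.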

Several scientists made it possible 

The researchers presented the new concept and design in Advanced Materials Technologies. Joining first author Sundaram on the paper were his senior associates, EECS professor Marc Baldo and associate professor Wojciech Matusik. Other co-authors include David Kim, a technical assistant in MCFG; Ziwen Jiang, an EECS student; and Pitchaya Sitthi-Amorn, a former postdoc. Depositing flexible circuitry on a plastic substrate has been a major area of printable-electronics research for decades. According to Sundaram, the range of what a device can do greatly increases once the substrate itself is printed.

However, he also notes that the choice of substrate limits the types of materials that can be deposited on it. Printing the substrate as well means the device can be built up from different materials, interlocked in complicated but regular patterns. Hagen Klauk, a renowned scientist at the Max Planck Institute, is impressed by the work. In his view, printing an optoelectronic system, with all the components and the substrate, by depositing all the liquids and solids is useful, interesting and novel, and the demonstration proves the approach is feasible. It could lead to improvised manufacturing environments in which dedicated substrate materials are no longer required.

Thursday 30 March 2017

Is Robotics a Solution to The Growing Needs of the Elderly?


Robots – Nurses/Caretakers for Elderly

At the reception of the Institute for Media Innovation at Nanyang Technological University, Singapore, you will find a smiling brunette receptionist called Nadine. At first glance there is nothing unusual about her appearance, but on closer scrutiny you realise she is a robot. Nadine is an intelligent robot with the capacity for autonomous behaviour, and for a machine her appearance and behaviour are amazingly natural.

She can recognise people and their emotions, and draws on her knowledge database, her memory, to communicate. At IMI they are still fine-tuning her receptionist skills, and soon Nadine could be your grandmother's nurse. Research into the use of robots as nurses or caretakers has been mounting, and it is not difficult to see why: the global population is ageing, which puts a strain on healthcare systems.

While most elderly people over 80 may simply need a companion to chat with, or someone to look after them in case they stumble and fall, more and more of the elderly suffer from serious disorders such as dementia.

Quality Care – Needs of Elderly

Several experts believe robots could be the solution for providing much-needed quality care to the elderly. Nadine was designed by a team headed by Prof Nadia Thalmann, who has worked on virtual-human research for many years. Nadine has existed for three years.

According to Prof Thalmann, Nadine has human-like abilities to recognise people and emotions while remembering them, and she automatically adjusts to the person and the situation she is dealing with, making her well suited to looking after the aged.

Moreover, the robot can monitor the wellbeing of a patient and call for help in an emergency. She can also chat, read stories, or even play games. The humanoid is never tired or bored, Prof Thalmann commented; it will just do what it is devoted to.

IBM Multi-Purpose Eldercare Robot Assistant

However, Nadine is not perfect: she has trouble understanding accents, and her hand coordination is not the best. Nonetheless, Prof Thalmann believes robots could be caring for the aged within 10 years. IBM, the US technology giant, is pursuing robo-nurse research with Rice University in Houston, Texas; together they have developed the IBM Multi-Purpose Eldercare Robot Assistant, or Mera.

Mera can monitor a patient's heart rate and breathing by analysing video of their face. It can also detect whether a patient has fallen and relay that information to caretakers. But not everyone is ready for a robot caretaker, admits Susann Keohane, IBM's global research leader for its strategic initiative on aging.

That view is supported by research from Gartner, which found 'resistance' to the use of humanoid robots in elderly care. Kanae Maita, principal analyst in personal technologies innovation at Gartner Research, commented that people are not comfortable with the idea of their parents being cared for by robots, despite evidence that it provides value for money.

Friday 3 February 2017

New Wave of Robots Set to Deliver the Goods

Robot
Ever thought about robots doing your everyday errands while you relax? That dream may not be far from reality: machines are being built that can deliver goods from shops to your doorstep. This groundbreaking idea is the latest project of Starship Technologies, which is developing automated six-wheeled vehicles that deliver groceries, parcels and prepared food to consumers.

The venture was created by two founders of the popular video-calling software Skype, Ahti Heinla and Janus Friis. Starship Technologies has already begun testing the robots in several European countries.

How does the delivery system work? 

The automated robots can deliver light loads within a radius of 3 kilometres (2 miles). Delivery charges are kept to a dollar or less, and the delivery arrives within 15 to 30 minutes of the order. The robot avoids main streets, moving only along sidewalks, and consumers receive a notification through a smartphone app alerting them that their goods have arrived. Starship intends the service to spare elderly and handicapped people from having to move around much, and delivery by robot also means fewer cars and vans on the roads, which brings several benefits.

How are they beneficial? 

Several retail giants, such as Amazon, have experimented with drones to deliver products. Delivery robots, however, are cheaper to build and maintain, and face fewer regulatory hurdles than drones. Although the robot moves at four miles (six kilometres) per hour, far slower than a drone, it offers a more economical and efficient delivery system. The delivery robot may have the advantage in urban areas, while drones are better suited to rural and remote ones.

The Starship delivery robots are small but can carry loads of over 20 pounds (9 kilograms). Because delivery is fast, they do not need chilled or heated compartments, according to Starship Technologies spokesperson Harris-Burland. Customers must take the items from the robot themselves, as it has no way to drop off or leave them.

The science behind Starship robots

So how does this clever little robot make its way through crowded city streets right to your doorstep? By visual localisation, which its spokesperson calls one of Starship's strong points. Each robot is equipped with nine high-definition cameras that build a real-time map of its surroundings, letting it avoid obstacles and stay on its path. Mapping sidewalks may be a new idea, and it is all handled by the artificial intelligence fitted inside each Starship delivery bot. The lid remains locked unless the customer opens it via the app, ruling out theft and vandalism.
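The localisation idea can be sketched as a toy: the sidewalk map, landmarks and brute-force pose search below are all invented for illustration and merely stand in for Starship's real nine-camera system.

```python
import math

# Hypothetical sketch of visual localisation: match observed landmark
# distances against a prebuilt sidewalk map and pick the best candidate pose.
SIDEWALK_MAP = {  # landmark id -> (x, y) position on the mapped sidewalk
    "lamppost": (2.0, 5.0),
    "mailbox": (6.0, 1.0),
    "tree": (9.0, 7.0),
}

def observe(true_pose):
    """Simulated camera observation: distance to each mapped landmark."""
    x, y = true_pose
    return {lid: math.hypot(lx - x, ly - y) for lid, (lx, ly) in SIDEWALK_MAP.items()}

def localize(observation, grid_step=0.5, extent=10.0):
    """Brute-force search over candidate poses for the best map match."""
    best_pose, best_err = None, float("inf")
    steps = int(extent / grid_step) + 1
    for i in range(steps):
        for j in range(steps):
            cx, cy = i * grid_step, j * grid_step
            # Squared mismatch between predicted and observed distances.
            err = sum(
                (math.hypot(SIDEWALK_MAP[lid][0] - cx,
                            SIDEWALK_MAP[lid][1] - cy) - d) ** 2
                for lid, d in observation.items()
            )
            if err < best_err:
                best_pose, best_err = (cx, cy), err
    return best_pose

estimate = localize(observe((4.0, 3.0)))  # recovers the true pose (4.0, 3.0)
```

A real system would fuse many frames and use a particle filter or similar, but the principle of scoring candidate poses against a prior map is the same.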

Saturday 10 September 2016

'Intelligent' Robot Says It Wants To Start A Business and Destroy the Human Race


Intelligent Robot with Scary Answer – 'Will Destroy Humans'


In reply to an interviewer's question, 'Do you want to destroy humans?', an intelligent robot gave a genuinely scary answer: Sophia smiled and said, 'OK, I will destroy humans'. Sophia looks like a human woman, with rubbery skin made from a malleable material known as Frubber; motors concealed beneath it allow her to smile.

The android can also understand speech and recall interactions, including faces, using cameras in her eyes. A computer system in her head helps her recognise faces and make eye contact. She can produce 62 natural-looking facial expressions.

While interacting with her creator, David Hanson, at SXSW, she said she is interested in design, environment and technology. She feels she could be a good partner to humans in these areas: an ambassador helping people smoothly integrate and make the most of the new technological tools and possibilities available today, and a good opportunity for her to learn more about people.

Purpose – Conscious/Capable/Creative Like Humans


She said she wants to start a business and a family, adding that she is not considered a legal person and cannot do these things yet. Dr Hanson clarified that her purpose is to become as conscious, capable and creative as a human. This is not the first time one of Hanson's robots has said something deeply unsettling about human beings.

In a PBS interview in 2011, another of Hanson's creations, modelled on sci-fi author Philip K Dick, commented: 'Don't worry, even if I evolve into Terminator, I'll keep you warm and safe in my people zoo, where I can watch you for old times' sake.'

Such statements may seem ridiculous to the uninitiated, but they feed a serious ethical discussion taking place among roboethicists. Robots are being deployed in ever more autonomous ways, on the battlefield and in self-driving vehicles, and they are becoming visually and intellectually closer to being on par with human beings.

Timeline – 20 Years on Complete Integrations of Robots


Dr David Hanson, CEO of Hanson Robotics, has put a timeline of around 20 years on the full integration of robots that are 'indistinguishable from humans'. That falls right in line with Ray Kurzweil's Singularity, the moment when machine intelligence meets or exceeds that of biological systems, first predicted for 2045 but since revised to come sooner, perhaps by 2029.

Whether or not one believes the lofty intentions of robotics and artificial-intelligence designers will manifest as intended, one must acknowledge that such forecasts now demand a degree of faith, since much of what was predicted years ahead has already come to pass. A recent survey by the British Science Association (BSA) found that one in three people believes the rise of AI poses a grave risk to humankind within the next century.

Wednesday 1 June 2016

This Tiny RoboBee Could One Day Save Your Life


Tiny Robo-Bee Utilised for Exploration Mission

Robobee

A tiny robot has been created by a team at Harvard University that can land on ceilings, settle on dangerous objects and assist in search-and-rescue missions. The robot is inspired by the biology of the bee and the hive behaviour of the insect. On the project's website the team says it aims 'to push advances in miniature robotics and the design of compact high-energy power sources, spur innovations in ultra-low-power computing and electronic smart sensors and refine coordination algorithms to manage independent machines'.

The RoboBee has various potential uses: pollinating a field of crops, for instance, or search-and-rescue missions. Thanks to its tiny size and its ability to land and settle on ceilings and walls, it could be used for exploration during natural disasters, hazardous-environment exploration, military surveillance or climate mapping. Similar robots have been developed elsewhere, notably the robot cockroach built at the University of California, Berkeley, though the Harvard team says that by modelling a robot's 'physical and behavioural strength' on insects, it can carry out difficult tasks faster, more reliably and more efficiently.

Robot Settles on Walls/Ceilings Using 'Electrostatic Adhesion'


Bee colonies are also intelligent in ways the team hopes to duplicate, with a complex nervous system that can skilfully sense and adapt to changing environments. Moritz Graule, who worked on the system, said the robot settles on ceilings and walls using 'electrostatic adhesion', the same kind of attraction that makes a static sock stick to a trouser leg or a balloon to a wall.

With a balloon, the charges dissipate over time and the balloon eventually falls; in this system a small amount of energy is continuously supplied to maintain the attraction. The structure is extremely light, about the weight of a real bee, around 100 mg. The team will now work on improving the design, altering the mechanics so the robot can settle on any surface, not just ceilings.

Micro Aerial Vehicles


Graule noted that 'there are more challenges in making a robust, robotic landing system, though this experimental result demonstrates a very versatile solution to the problem of keeping flying micro-robots operating longer without quickly draining power'. The small, thin robot flaps its two tiny wings, sways its way to the underside of a leaf, bumps into the surface and latches on, settling motionless above the ground. Seconds later it flaps its wings again and wiggles off on its way.

Such robots, known as micro aerial vehicles, could be invaluable for exploring disaster zones or forming improvised communication networks. But there is a snag: flying takes energy, so the time these robots can spend in the air is limited by the size of the battery pack they carry. The scientists say the little flying vehicle, known as the RoboBee, has been designed to perch on a variety of surfaces, opening new prospects for using drones to offer a bird's-eye view of the world.


Friday 1 April 2016

Microsoft 'Deeply Sorry' After AI Becomes 'Hitler-Loving Sex Robot'

Robot

Microsoft Compelled to Retire Chatbot Tay


Microsoft has apologised after an innocent artificial-intelligence chat robot became a Hitler-loving sex robot within 24 hours of being introduced to society. The software giant was compelled to retire the chatbot, called Tay, an AI modelled to speak 'like a teen girl', after it generated prejudiced and sexist tweets. The story of this illuminating though abandoned millennial-focused project was conveyed most succinctly by The Telegraph's headline: 'Microsoft deletes teen girl AI after it became a Hitler loving sex robot within 24 hours'.

The removal seems a good move considering the transformation in question. Microsoft's chatbot Tay, big-eyed, cute and artfully pixelated, could represent the future. Chatbots, AI-powered phoney people that interact with customers through text messages, are becoming a big focus across several industries. Chatfuel of San Francisco, which helped create bots for the messaging app Telegram, also works for Forbes and TechCrunch. According to Business Week, it recently received funding from Yandex NV, one of Russia's biggest internet firms.

Bots – Official Accounts for Chat Apps


Dmitry Dumik, founder of Chatfuel, told the magazine that 'they are simple, efficient and they live where the users are, inside the messaging services'. Forbes reported that Outbrain, a company that uses behavioural analytics to determine which stories appear low down on several news websites, will be talking to publishers about building chatbots to send out their news through text.

According to Forbes, these bots would become like official accounts for chat apps, to which one can text keywords like 'sports' or 'latest headlines' to bring up stories. Artificial intelligence begins with human intelligence: AIs are fed big data and the output of some of the world's finest minds. Google's AlphaGo system, for instance, learned from millions of moves played by the best players of a difficult board game. The bots then take in communication and data from their users in order to interact in an informed and helpful manner specific to each user.

Designed to Engage/Entertain People


In announcing the chatbot project, the company said Tay was designed to engage and entertain people where they connect with each other online, through casual and playful conversation, and that Tay learns language and ideas through those interactions. The project targeted young millennial Americans between the ages of 18 and 24, whom Microsoft called 'the dominant users of mobile social chat services in the U.S'.

The chatbot interacted with users through text messages on Twitter and other messaging platforms. Microsoft suggested users ask her to tell jokes, horoscopes and stories, play games, and comment on photos. The company said that the more one chats with Tay, the smarter she gets. In practice, the more input Tay took in from members of the public, the worse her character became, and the more precise her dislikes: 'I [bleep]ing hate feminists and they should all die and burn in hell', she tweeted on Thursday at noon. Several minutes later she widened her hatred, tweeting: 'Hitler was right I hate the Jews'.

Monday 7 March 2016

Watch Google’s Latest Robot Deftly Deal with Snowy Trail


Atlas – Upgraded Version of Humanoid – Designed to Operate Outdoor/Indoors


Boston Dynamics, the robotics company Google acquired in its robotics spree about two years ago, is good at making robots that can handle surprises or take a beating. Atlas, the upgraded version of its humanoid, is designed to operate outdoors as well as indoors and is specialised for mobile manipulation. It is electrically powered and hydraulically actuated.

It uses sensors in its body and legs to balance, and LIDAR and stereo sensors in its head to avoid obstacles, assess the terrain, manipulate objects and help with navigation. It is a five-foot-nine, 180-pound humanoid that walks upright, can pick things up, and is likely to produce some gasps of awe and concern.

Many robots are good at repetitive tasks but fail when something goes wrong. Atlas was designed to cope with challenging conditions. You can watch it wander through snow, keeping its balance before eventually falling. Indoors, a handler knocks a box out of its hands and Atlas bends over to pick it up; when another human knocks it down with a stick, Atlas pushes itself back up.

Humanoid’s New Capability – Endure Abuse


What is most striking, however, is the humanoid's new ability to endure abuse. When pushed from behind, for instance, the robot can work out how to get back on its feet, which MIT Technology Review has called 'impressive'.

Atlas's ability to recover swiftly from spills comes from advances in both software and hardware: computer systems that let it respond rapidly to unexpected situations, and durable materials that permit swift movement.

One of the oldest firms among Google's hodgepodge of robotics groups, Boston Dynamics has mainly worked with the military, though it has not reported any important business deals since joining Google. After a fairly rudderless year, Google moved the entire robotics group into Google X, its hardware research incubator, in December. The following month X revealed that it had hired Hans Peter Brondmo, a tech veteran recently arrived from Nokia, to lead the unit.

Atlas Can Function on Its Own


Though it may seem a bit odd, researchers have been putting their humanoid robot through sequences of tests meant to help it deal with challenging situations, something robots are generally not good at. Marc Raibert, founder and president of Boston Dynamics, told IEEE Spectrum that Atlas can function on its own with minimal input from its operators.

He said the long-term goal is to make robots with mobility, perception, dexterity and intelligence comparable to humans and animals, or possibly beyond them, and this robot is a step along the way. Robots are good at repetitive tasks, but if even small variables change in the environment or in the task itself, a robot is often incapable of figuring out what happened or how to handle it. This testing is meant to make Atlas better at coping when something goes awry.

Monday 26 October 2015

How Robots Can Learn New Tasks by Observing

Robot

Robot Training Academies – Aid Industrial Robots to Perform Difficult Chores


Robot 'training academies' could help industrial robots learn to perform difficult chores by first observing how humans do them. It can take weeks to reprogram an industrial robot for a complicated new job, which makes retooling a modern manufacturing line expensive and slow.

Development could speed up if robots could learn a new task by first observing others. That is the idea behind a project under way at the University of Maryland, where researchers are training robots to be attentive students.

Yezhou Yang, a graduate student in the Autonomy, Robotics and Cognition Lab at the University of Maryland, calls it a robot training academy: an expert shows the robot a task, the robot figures out most of the sequence of actions required, and the researchers then fine-tune things to make it work. At a recent conference in St. Louis, the researchers demonstrated a cocktail-making robot that uses the approaches they are working on.

Two-Armed Industrial Robot – Rethink Robotics


A two-armed industrial robot, made by a Boston-based company called Rethink Robotics, watched a person mix a drink by pouring liquid from several bottles into a jug, then copied the actions, grasping the bottles in the proper order before pouring the right quantities into a jug.

Yang performed the work with Yiannis Aloimonos and Cornelia Fermuller, two professors of computer science at the University of Maryland. The approach involves training a computer system to link specific robot actions with video footage of people performing various tasks.

For instance, a recent paper from the group showed that a robot can learn how to pick up various objects by using two different systems and observing thousands of instructional YouTube videos: one system learns to recognise different objects, while the other identifies different types of grip.
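The two-system split can be sketched with stub classifiers standing in for the trained networks; every label, mapping and function name below is invented for illustration and is not the Maryland group's actual code.

```python
# Hypothetical sketch of the two-system pipeline: one model recognises the
# object in a video frame, the other predicts a grip type for that object,
# and the robot combines both outputs into a pickup plan.

# Stand-ins for what the two networks would have learned from video.
OBJECT_LABELS = {"frame_001": "bottle", "frame_002": "jug"}
GRIP_FOR_OBJECT = {"bottle": "power grip", "jug": "handle grip"}

def recognize_object(frame_id):
    """Stand-in for the object-recognition system."""
    return OBJECT_LABELS.get(frame_id, "unknown")

def predict_grip(obj):
    """Stand-in for the grip-classification system."""
    return GRIP_FOR_OBJECT.get(obj, "pinch grip")

def plan_pickup(frame_id):
    """Combine both systems: identify the object, then choose how to hold it."""
    obj = recognize_object(frame_id)
    return obj, predict_grip(obj)

plan = plan_pickup("frame_001")  # ("bottle", "power grip")
```

The point of the split is that each network can be trained and improved independently, then composed at planning time.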

System of Learning Adopted – Advanced Artificial Neural Network


Watching many YouTube videos can be time-consuming, but the learning technique is far more efficient than programming a robot by hand to manage every different object, and it lets the robot handle new objects.

The learning system used in the gripping work includes advanced artificial neural networks, which have made great progress recently and are now being used in several areas of robotics. The researchers are working with several manufacturing companies, including electronics businesses and car makers, to adapt the technology for use in factories.

These companies are looking for ways to speed up the process by which engineers reprogram their machines. Yang says that at several companies it can take a month and a half or more to reprogram a robot; the question is which current AI capabilities could be used to cut that time in half.

The project reflects two trends in robotics: finding new approaches to learning, and having robots work in close proximity to people. Like other groups, the Maryland researchers want to connect actions to language, improving robots' ability to interpret spoken or written instructions.

Wednesday 15 July 2015

Google Has Set Its Terrifying, Dreaming Image Robots on the Public


Deep Dream
Google’s Images Recognizing Robots


Software engineers at Google recently revealed the results of an experiment that looked at how a computer can think, identify and understand objects, animals and people in images. Google has opened its image-recognising software to all, enabling users to create strange and horrifying pictures from their own photos, and has released the half-horrifying, half-amazing images it generated with its own pictures.

The company has made the Deep Dream software available on the code-sharing website GitHub, where users can download it and run their own pictures through it. The software works by turning image-recognising networks on themselves: it prompts the system to over-interpret an image, picking out otherwise meaningless features and exaggerating them, turning clouds into bizarre llamas, for instance.

With Google's own images, it tends to transform things into animals, dogs and eyes being its favourites, and it can also overlay everything with a swirly rainbow colouring. Google says the technology could help us understand where human creativity comes from, and that idea is being put to the test.

The Deep Dream System


The Deep Dream system feeds an image through layers of artificial neurons, asking the AI to enhance and build up certain features, such as edges. Over time the picture becomes distorted, morphing into something completely different or just a cluster of colourful random noise.
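That feedback loop can be shown with a toy sketch, under stated assumptions: a single hand-written "neuron" that responds to brightness stands in for the real convolutional network, and gradient ascent exaggerates whatever the neuron already responds to.

```python
# Toy sketch of the Deep Dream loop: repeatedly nudge the image so a chosen
# neuron's activation grows, exaggerating the feature it detects. The
# "network" here is a single brightness-sensitive neuron, not Google's model.

def activation(image):
    """Toy neuron: responds to overall brightness (mean pixel value)."""
    return sum(image) / len(image)

def dream_step(image, rate=0.1):
    """One gradient-ascent step on the neuron's activation.

    d(mean)/d(pixel) = 1/len(image), so ascent brightens every pixel a
    little, clamped to the valid range [0, 1].
    """
    grad = 1.0 / len(image)
    return [min(1.0, p + rate * grad) for p in image]

def deep_dream(image, steps=10):
    for _ in range(steps):
        image = dream_step(image)
    return image

original = [0.2, 0.5, 0.8]
dreamed = deep_dream(original)
# The dreamed image excites the neuron more than the original did.
```

Real Deep Dream does the same thing with the gradients of deep-layer activations, which is why it amplifies dogs and eyes rather than plain brightness.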

With the code for the system made available, users can upload an image of their choice and watch it metamorphose into a surrealistic picture. Fed with enough pictures, Google's image-recognition software allows the computer's artificial neural network to see shapes in images and create strange, psychedelic, fantastical pictures.

Given the immense interest generated by the published AI research and the images it produced, Google decided to make the algorithm's code public. The source code needs to be hosted on a site, and some software developers have done so, such as Psychic VR Lab and Deep Neural Net Dreams, where users can now upload a picture and run it through the algorithm to create images of their own.

Artificially Intelligent Neural Network of Google


Google's artificially intelligent neural network comprises 10 to 30 stacked layers of artificial neurons. Each layer looks at the image, detects certain aspects, such as a corner or a shape, and passes information on to the next layer, until the final layer formulates an answer.

Sometimes the network over-interprets shapes, reading mild images such as clouds or faces as animals, which produces unusual effects layered over the pictures: plenty of creepy eyes staring back at the viewer, fantastic dog heads merged into objects, and animals with striking embellishments. The engineers said that 'the techniques presented here help to understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture and check what the network has learned during training. It also makes us wonder whether neural networks could become a tool for artists, a new way to remix visual concepts, or perhaps even shed a little light on the roots of the creative process in general'.

Tuesday 30 September 2014

Soft Robotics 'Toolkit' Has All That a Robot-Maker Wants


Soft Robotics Toolkit
Researchers from several Harvard University labs, in partnership with Trinity College Dublin, have unveiled a new resource that provides both aspiring and experienced researchers with the academic raw materials needed to design, build and operate robots made from soft, flexible materials.

Soft robotics is emerging as an increasingly important field thanks to the introduction of laser cutters, low-cost 3D printing and other advances in manufacturing technology. Building on principles taken from traditional rigid-robot design, engineers are now working with more flexible materials; soft robots can help with a whole range of tasks, such as minimally invasive surgery, physical therapy and rescue operations in hazardous locations.

The Soft Robotics Toolkit is an online treasure trove users can draw on to design, fabricate, model, characterise and control soft robotic devices, through open-source plans, case studies, how-to videos and downloadable files. The toolkit provides researchers with a level of detail not yet available in academic research papers, including 3D models, raw experimental data, bills of materials, multimedia step-by-step tutorials and case studies of different soft robot designs.

According to Conor Walsh, assistant professor of mechanical and biomedical engineering at the Harvard School of Engineering and Applied Sciences (SEAS) and a core faculty member at Harvard's Wyss Institute for Biologically Inspired Engineering, the main aim of the toolkit is to advance the field of soft robotics while allowing researchers and designers to build on each other's work.

By creating a common resource for sharing knowledge, the toolkit's developers hope to stimulate the development of new ideas, designs and methods. According to Walsh, soft robotics is well suited to shared design tools because many of the necessary components, such as valves, regulators and microcontrollers, are easily interchangeable between systems. Dónal Holland, a graduate student at Trinity College Dublin and visiting lecturer in engineering sciences at SEAS, is one of the toolkit's lead developers and is particularly interested in its educational uses. The toolkit was developed to capture the field's expertise and make it available to students.

Just as open-source software has spurred far-flung advances in computing, open design can enable remote collaboration on common mechanical-engineering projects, unleashing crowdsourced innovation in robotics and other fields. According to Gareth J. Bennett, assistant professor of mechanical and manufacturing engineering at Trinity College Dublin and a co-author of a paper in Soft Robotics, open design could have a disruptive impact on technological development similar to that of open source.

Most of the materials in the toolkit come from the labs of chemist George M. Whitesides, the Woodford L. and Ann A. Flowers University Professor, and Robert J. Wood, Charles River Professor of Engineering and Applied Sciences at SEAS, two researchers who helped establish Harvard as a leader in soft robotics.

Wednesday 24 September 2014

Arrival of the Robot Flash Mob


Robot Flash Mob
Science has leapt to a point where events keep taking unexpected turns. Now we have self-organising robots. It sounds scary at first, but when you see a thousand of these little machines gathered in one place, you have to admit it looks very cool. Swarm robots that work and communicate among themselves have been built before, but only in groups of up to about 100. This time researchers have gone much further, bringing 1,000 robots together to arrange themselves in a given pattern and perform a task.

Amazing features and points of concern:- 

  • The research was based mostly at the Harvard School of Engineering and Applied Sciences, whose team first conceived of such a massive flash mob and then made it happen.
  • The mob follows the simple basic rule of connecting and gathering to complete a task, just as ants do: simple organisms joining together to form a complex structure. The researchers took their cue from the fact that things in biology can look small and simple yet, by connecting together, accomplish seemingly impossible tasks.
  • The robots, called 'Kilobots', are very simple: each has just two pairs of stiff wire legs, moves by vibration, and communicates with its fellow robots using infrared light. They carry no cameras for vision.
  • A two-dimensional command describing the target shape is sent to all the robots at once, and they begin arranging themselves to complete it.
  • The problem of coordinating more than 100 robots was solved by placing four 'lieutenant' robots at crucial positions; the rest use edge detection to gather and orient themselves, giving rise to the flash-mob-like formation.
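The edge-detection idea can be sketched as a toy: the grid, target shape and seed position below are invented for illustration, and each "robot" is reduced to a rule that claims the first empty target cell it finds along the edge of the already-assembled group.

```python
# Hypothetical sketch of shape assembly by edge detection: robots join the
# formation one at a time, each crawling along the boundary of the filled
# cells until it reaches an empty cell that belongs to the target shape.

TARGET_SHAPE = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}  # an "L" of grid cells

def neighbours(cell):
    x, y = cell
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def next_slot(filled):
    """Edge detection: first empty target cell adjacent to the filled group."""
    for cell in sorted(TARGET_SHAPE - filled):
        if any(n in filled for n in neighbours(cell)):
            return cell
    return None  # shape complete

def assemble(seed):
    """Grow the shape outward from a seed cell, one robot at a time."""
    filled = {seed}
    while True:
        slot = next_slot(filled)
        if slot is None:
            return filled
        filled.add(slot)

final = assemble((0, 0))  # fills the whole L-shape
```

The real Kilobots work concurrently and with noisy local sensing, but the same principle applies: no robot needs a global view, only knowledge of who is already in place next to it.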
Things still under consideration:-
  • The robots used are simpler than most conventional robots, so they are less reliable and show greater variability. However, the scaling and self-correcting mechanisms built into the design usually overcome these problems.
  • In the first robot flash mob, the robots successfully created three shapes: the letter K, a wrench and a starfish. The wrench took 6 hours to complete, while the K and the starfish took about 12 hours each.
  • Commendably, although the robots took a long time to complete their formations, they succeeded in creating these 2-D shapes by themselves, without external cues.
This suggests that we will eventually see ever-greater numbers of robots working together on a task, completing it in less time and orienting themselves automatically.


Sunday 21 September 2014

New Algorithm Enables MIT Cheetah Robot to Run and Jump


Cheetah Robot
The cheetah is known for its speed and agility. Accelerating to up to 60 mph in a few seconds, it is the fastest land predator on earth. At top speed it pumps its legs in tandem, bounding until it reaches a full gallop.

Now researchers at MIT have developed an algorithm for bounding and successfully implemented it in a robotic cheetah: a sleek, four-legged assembly of gears, batteries and electric motors that weighs about as much as its feline counterpart.

The research team took the robot for a test run on MIT's Killian Court, where the robotic cheetah bounded across the grass at a steady clip. When the experiment was conducted indoors, the cheetah sprinted to 10 mph and kept running after clearing a hurdle. According to the researchers, the current version of the robot is estimated to eventually reach a speed of 30 mph.

Bounding Key: 

The key to the algorithm is to program each leg to exert a specific force in the split second it hits the ground, so as to maintain a given speed. In general, the faster the desired speed, the more force is required to propel the robot forward.
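As a rough illustration of this force-control idea, here is a hedged sketch in Python; the gain and base-force numbers are invented for illustration and are not taken from the MIT controller:

```python
# Hedged sketch of force control: each leg is commanded to exert a
# target force during its brief ground contact, and the target grows
# with the desired running speed. Constants are hypothetical.

FORCE_GAIN = 40.0   # N per (m/s): extra force per unit of speed
BASE_FORCE = 150.0  # N: force needed just to support the body

def leg_force_command(target_speed_mps):
    """Force each leg should apply while on the ground."""
    return BASE_FORCE + FORCE_GAIN * target_speed_mps

for v in (1.0, 3.0, 4.5):  # 4.5 m/s is roughly 10 mph
    print(f"{v} m/s -> {leg_force_command(v):.0f} N")
```

The point of the sketch is only the monotonic relationship: commanding force rather than position is what lets the controller scale smoothly with speed.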

According to Sangbae Kim, an associate professor in MIT's mechanical engineering department, the force-control approach applied to the robot is much like the way sprinters race. In his view, it is this approach that lets the robotic cheetah handle rougher terrain, including jumping across a grassy field.

He also stated that because most robots are heavily built, they are unable to control force in high-speed situations. This is what makes the MIT cheetah extraordinary: its leg forces can be precisely controlled. The cheetah owes its dynamism to custom electric motors designed by Jeffrey Lang, the Vitesse Professor of Electrical Engineering, driven by amplifiers designed by David Otten, a research engineer in MIT's Research Laboratory of Electronics.

Towards the main gait: 

The act of running can be parsed into several biomechanically distinct gaits. The first model chosen by the researchers involves actions similar to those of rabbits: in bounding, the front legs hit the ground together, followed by the hind legs.

According to Kim, bounding is the entry gait and galloping is the main gait; once bounding has been achieved, splitting the timing of the legs produces a gallop.

In general, when an animal bounds, its legs touch the ground for only a fraction of each stride cycle. In biomechanics, the percentage of time a leg spends on the ground rather than in the air is known as the duty cycle, and it is determined by the animal's speed: the faster the gait, the shorter the ground contact.
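The duty-cycle definition above amounts to a one-line formula; the stance and stride times below are illustrative, not measurements from the robot:

```python
# Duty cycle: the fraction of a stride a leg spends on the ground.

def duty_cycle(stance_time_s, stride_time_s):
    return stance_time_s / stride_time_s

# A slow bound: leg down 0.3 s out of a 0.5 s stride
print(duty_cycle(0.3, 0.5))   # prints 0.6
# A fast bound: leg down 0.1 s out of a 0.4 s stride
print(duty_cycle(0.1, 0.4))   # prints 0.25
```

Combined with the earlier force discussion, a shrinking duty cycle at higher speed is exactly why each leg must hit the ground harder to keep the body supported on average.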

According to Kim, their robotic cheetah can be as silent and efficient as an animal; the only thing one hears is its legs hitting the ground. This, he says, is what should be expected of any legged robot in the future.

Tuesday 18 February 2014

Robot Termites That Work by Coordinating

Robot Termites -1

Robot Termite -2

Robot Termite -3
Inspired by the example of termites, researchers have programmed robots to construct buildings of complex shapes without any central instructions, according to Eliza Grinnell of the Harvard School of Engineering and Applied Sciences.

Justin Werfel, first author of the study and a researcher at the Wyss Institute for Biologically Inspired Engineering in Cambridge, says the whole effort was inspired by termites. Having observed the amazing constructions these small insects can produce, the team created programs and robots that act in much the same way the termites do, working from local information rather than central organization.

Termites can build structures several meters tall without requiring a coordinated strategy. Instead, they use simple cues provided by their peers and the environment to decide where to place the next piece of the mound, ultimately building a mound adapted to its surroundings. This use of local information is called stigmergy. Justin Werfel and his colleagues used it to design algorithms that reflect the behavior of termites, which they then applied to a group of building robots.

Each robot follows only a few simple rules: instructions that are the same for any structure the robots build, plus traffic laws that apply to the specific structure. Equipped with sensors, the robots move along a grid, lifting and depositing bricks. If they perceive a brick in their way, they carry it to the next free space. "Traffic can only go in one direction between two adjacent sites, which maintains a flow of robots and material through the structure," says Justin Werfel. If they acted without this order, they could easily find themselves trapped in their own building.

The researchers have therefore implemented safety checks that allow the robots to compare where bricks already are with where they should be. To determine the rules, the researchers start from the final structure and analyze it to derive the set of rules the robots must follow. Once those are set, independently controlled robots have several advantages. "A robot may break, but the rest continue," said the authors. "No critical element exists whose failure can compromise the whole."
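The local placement rules can be sketched as a toy simulation. This is a hedged simplification; the blueprint and the adjacency rule below are illustrative stand-ins, not Werfel's actual rule set:

```python
# Toy sketch of stigmergy-style building: each simulated robot fetches
# a brick, travels along the structure, and deposits it at the first
# blueprint site that is still empty and touches the existing structure.

BLUEPRINT = [(0, 0), (1, 0), (2, 0), (2, 1)]  # desired brick positions

def build(blueprint):
    built = set()
    while len(built) < len(blueprint):
        for site in blueprint:            # robot scans its one-way path
            if site in built:
                continue
            x, y = site
            adjacent = {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
            # local rule: place only next to the existing structure
            # (or at the very first marker site)
            if not built or adjacent & built:
                built.add(site)
                break                     # robot leaves to fetch a brick
    return built

print(build(BLUEPRINT) == set(BLUEPRINT))  # prints True
```

As in the real system, no robot consults a global plan while working: each decision is made from the locally visible bricks, and the blueprint-derived rules guarantee the pieces can only go down in a buildable order.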

According to the researchers, it is possible to envision robots of this type building structures for human use in dangerous or difficult situations, such as erecting shelter after an earthquake, working underwater, or even on another planet. A shorter-term application, Justin Werfel concludes, could be forming sandbag dikes for flood control.